September 26th, 2013

Common OAuth Security Mistakes & Threat Mitigations

I just found out we had record attendance for Wednesday’s API Tech Talk. Clearly, there’s an appetite for the topic of OAuth risk mitigation.

With our digital lives scattered across so many services, there is great value in technology that lets us control how these service providers interact on our behalf. For providers, making sure this happens in a secure way is critical. Recent hacks associated with improperly secured OAuth implementations show that OAuth-related security risks need to be taken seriously.

When in doubt, take a second look at the security considerations section of the spec. There is also useful information in RFC 6819, OAuth 2.0 Threat Model and Security Considerations.

The Obvious Stuff
Let’s get a few obvious things out of the way:

  1. Use SSL (HTTPS)
  2. Keep shared secrets confidential (if you can’t hide it, it doesn’t count as a secret)
  3. Sanitize all inputs
  4. Limit session lifespan
  5. Limit scope associated with sessions

None of these are specific to OAuth. They apply to just about any scheme involving sessions and secrets, such as form login and cookie-based sessions in Web applications.
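To make the last two points concrete, here is a minimal sketch in Java of a token that carries a short lifetime and a narrow scope, so that a leaked token has a limited blast radius. Class name, lifetime and scope values are illustrative assumptions, not prescriptions.

    // Illustrative sketch: a token with a limited lifespan and a limited scope.
    import java.time.Duration;
    import java.time.Instant;
    import java.util.Set;
    import java.util.UUID;

    public class IssuedToken {
        final String value = UUID.randomUUID().toString();
        final Instant expiresAt = Instant.now().plus(Duration.ofMinutes(60)); // limited lifespan
        final Set<String> scope = Set.of("read:profile");                     // limited scope

        boolean allows(String requestedScope) {
            return Instant.now().isBefore(expiresAt) && scope.contains(requestedScope);
        }

        public static void main(String[] args) {
            IssuedToken token = new IssuedToken();
            System.out.println(token.allows("read:profile"));  // true while unexpired
            System.out.println(token.allows("write:profile")); // false: outside the granted scope
        }
    }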

OAuth’s Main Attack Vector
Some of the grant types defined by the OAuth protocol involve the end user being redirected from an application to a service provider’s authorization server, where the user is authenticated and expresses consent for the application to call the service provider’s API on the user’s behalf. Once this is done, the user is redirected back to the client application at a callback address provided by the client application at the beginning of the handshake. In the implicit grant type, the redirection back to the application includes the resulting access token issued by the OAuth provider.
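As an illustration, here is a minimal sketch of the two redirects in an implicit grant handshake. The endpoints, client identifier and token value are hypothetical; the parameter names are the ones defined by the OAuth 2.0 spec.

    // Illustrative sketch of the implicit grant redirects; endpoints, client_id
    // and token value are hypothetical.
    public class ImplicitGrantRedirects {
        public static void main(String[] args) {
            // 1. The client application sends the user's browser to the
            //    authorization server.
            String authorizeRequest = "https://auth.example.com/authorize"
                + "?response_type=token"                        // implicit grant
                + "&client_id=legit-app"
                + "&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback"
                + "&scope=read&state=xyz123";

            // 2. After authentication and consent, the authorization server
            //    redirects the user back to the callback address; the access
            //    token travels in the URI fragment.
            String callbackRedirect = "https://app.example.com/callback"
                + "#access_token=2YotnFZFEjr1zCsicMWpAA"
                + "&token_type=Bearer&expires_in=3600&state=xyz123";

            System.out.println(authorizeRequest);
            System.out.println(callbackRedirect);
        }
    }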

OAuth’s main attack vector involves a malicious application pretending to be a legitimate application. When such an attacker supplies its own address as the callback for the authorization server, the user is redirected back to the malicious application instead of the legitimate one. As a result, the malicious application is now in possession of the token that was intended for the legitimate application. This attacking application can now call the API on behalf of the user and wreak havoc.

OAuth 101: Callback Address Validation
The most obvious defense against this type of attack is for the service provider to require that legitimate client applications register their callback addresses. This registration step is essential because it is the basis on which a user can assess which application is being granted the right to act on their behalf. At runtime, the OAuth authorization server compares these registered values against the callback address provided at the beginning of the handshake (the redirect_uri parameter). Under no circumstance should an OAuth authorization server ever redirect a user (along with an access token) to an unregistered callback address. The enforcement of these values is a fundamental precaution that should be ingrained in any OAuth implementation. Any loophole exploiting a failure to implement such a validation is simply inexcusable.
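A minimal sketch of what this validation could look like on the authorization server side follows; the class and method names are hypothetical, and the registered values would normally come from the client registration store.

    // Illustrative sketch: only exact, character-for-character matches against
    // the registered callback addresses are accepted.
    import java.util.Set;

    public class RedirectUriValidator {

        private final Set<String> registeredUris;

        public RedirectUriValidator(Set<String> registeredUris) {
            this.registeredUris = registeredUris;
        }

        public boolean isAllowed(String redirectUri) {
            return redirectUri != null && registeredUris.contains(redirectUri);
        }

        public static void main(String[] args) {
            RedirectUriValidator validator =
                new RedirectUriValidator(Set.of("https://app.example.com/callback"));

            System.out.println(validator.isAllowed("https://app.example.com/callback"));  // true
            System.out.println(validator.isAllowed("https://evil.example.net/callback")); // false: never redirect here
        }
    }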

redirect_uri.startsWith(registered_value) => Not good enough!
Some application developers append client-side state at the end of runtime redirection addresses. To accommodate this, an OAuth provider may be tempted to merely validate that a runtime redirection address starts with the registered value. This is not good enough. An attacker may exploit this by adding a suffix to a redirection address, for example to point to another domain name. Strict redirection URI matching trumps anything else, always. See http://tools.ietf.org/html/rfc6819#section-5.2.3.5.
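The following sketch, using hypothetical values, shows why: both attacker-controlled URIs pass a startsWith() test against the registered value, yet neither is the legitimate callback.

    // Illustrative sketch of the prefix-matching pitfall; all values are hypothetical.
    public class PrefixMatchPitfall {
        public static void main(String[] args) {
            String registered = "https://app.example.com";

            // The suffix turns the host into app.example.com.evil.net, a different domain.
            String hostSuffix = "https://app.example.com.evil.net/phish";

            // The suffix points at an open redirect on the legitimate site.
            String openRedirect = "https://app.example.com/redirect?to=https://evil.net";

            for (String redirectUri : new String[] { hostSuffix, openRedirect }) {
                System.out.println("prefix match: " + redirectUri.startsWith(registered)  // true: weak check passes
                    + ", exact match: " + redirectUri.equals(registered));                // false: strict check rejects
            }
        }
    }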

Dealing with Public (Not Confidential) Clients
If you are using the authorization code grant type instead of implicit, a phishing attack yields an authorization code, not the actual access token. Although this is technically more secure, a stolen authorization code can still be exploited when combined with another vulnerability, specifically one caused by improperly securing the shared secret needed to complete the code handshake in the first place.
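For reference, here is a minimal sketch of the code-for-token exchange, assuming a hypothetical token endpoint, client and secret. The phished authorization code alone is not enough; the attacker would also need the client_secret that a confidential client keeps server-side.

    // Illustrative sketch of a confidential client exchanging an authorization
    // code for a token; the endpoint, client_id, code and secret are hypothetical.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CodeExchangeSketch {
        public static void main(String[] args) throws Exception {
            String form = "grant_type=authorization_code"
                + "&code=SplxlOBeZQQYbYS6WxSbIA"                           // a phished code alone is not enough
                + "&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback"
                + "&client_id=legit-app"
                + "&client_secret=kept-server-side-only";                  // the part the attacker is missing

            HttpRequest request = HttpRequest
                .newBuilder(URI.create("https://auth.example.com/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // on success, JSON containing the access token
        }
    }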

The difference between the implicit and authorization code grant types is that one deals with public clients and the other with confidential ones. Some may be tempted to rely on authorization code rather than implicit in order to add security to their handshakes. If you expose APIs that are meant to be consumed by public clients (such as a mobile app or a JavaScript-based invocation), forcing the application developer to use a shared secret will only lead to these shared secrets being compromised, because they cannot be effectively kept confidential on a public platform. It is better to be prepared to deal with public clients and to provide handshake patterns that keep them secure than to obfuscate secrets in public apps and cross your fingers that they won’t be reverse-engineered.

Remembering Past Consent Increases Risk
Imagine a handshake where a user is redirected to an authorization server (e.g. the implicit grant). Now imagine this handshake happening for the second or third time. Because the user already has a session with the service provider with which the authorization server is associated (via a cookie), the authentication step is not required and is skipped. Some authorization server implementations also choose to “remember” the initial expression of consent and will not prompt the user to express consent again, all in the name of better user experience. The result is that the user is immediately redirected back to the client application without any interaction. This typically happens quickly, and the user is not even aware that a handshake has just taken place.

An “invisible” handshake of this kind may improve user experience in some situations, but it also increases the effectiveness of a phishing attack. If the authorization server instead prompts the user to express consent again, the user becomes aware that a handshake is at play. Because the user does not expect this action, the “pause” gives the user an opportunity to question what led to this prompt in the first place and helps them recognize that something “phishy” is in progress.

Although bypassing the authentication step is a genuine improvement in user experience, bypassing consent and allowing redirection handshakes to complete without displaying anything that lets a user abort them is dangerous, and the resulting UX gain is minimal (skipping an “OK” button).
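A minimal sketch of the two policies at the authorization endpoint follows, using hypothetical method and flag names. Skipping re-authentication is a reasonable UX win; skipping the consent screen removes the one moment where a phished user could notice and abort.

    // Illustrative sketch of the decision at the authorization endpoint;
    // method and flag names are hypothetical.
    public class ConsentPolicySketch {

        enum Action { PROMPT_LOGIN, PROMPT_CONSENT, ISSUE_TOKEN }

        // The "invisible handshake" variant: remembers consent and completes
        // the redirection without any user interaction.
        static Action skipConsentWhenRemembered(boolean hasLiveSession, boolean previouslyConsented) {
            if (!hasLiveSession) return Action.PROMPT_LOGIN;
            return previouslyConsented ? Action.ISSUE_TOKEN : Action.PROMPT_CONSENT;
        }

        // The safer variant: skip re-authentication, never skip consent.
        static Action alwaysConfirmConsent(boolean hasLiveSession) {
            return hasLiveSession ? Action.PROMPT_CONSENT : Action.PROMPT_LOGIN;
        }

        public static void main(String[] args) {
            System.out.println(skipConsentWhenRemembered(true, true)); // ISSUE_TOKEN: the user never sees a thing
            System.out.println(alwaysConfirmConsent(true));            // PROMPT_CONSENT: a chance to abort
        }
    }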

2 Comments »

  1. [...] Common OAuth Security Mistakes and Threat Mitigations (Francois Lascelles) [...]

    Pingback by Dew Drop – October 1, 2013 (#1,635) | Morning Dew — October 1, 2013 @ 5:43 am

  2. Hi Francois,

    There is an on-going effort in the OAuth community to define quality/assurance of OAuth implementation and deployments. We are supporting this effort which Dan Blum is leading.

    http://security-architect.blogspot.com/

    Dan is planning a session at IIW next week, might you or one of your colleagues attend this meeting?

    - prateek mishra

    Comment by Prateek Mishra — October 14, 2013 @ 6:55 pm

