You might have noticed the recent public discussions around how to securely build SPAs – and especially about the “weak security properties” of the OAuth 2.0 Implicit Flow. Brock has written up a good summary here.
The whole implicit vs. code flow discussion isn’t particularly new – and my stance has always been that, yes, getting rid of the tokens on the URL is nice – but the main problem isn’t how the tokens are transported to the browser, but rather how they are stored in the browser afterwards.
Actually, we had this discussion already back in May 2015 on the OAuth 2.0 email list, e.g.:
My conjecture is that it does not matter >>at all<< where you store tokens in relation to XSS. There is no secure place to store data in a browser in ways that cannot be abused by XSS. One XSS is complete compromise of the client.
And XSS resistant apps are elusive.
What about CSP?
Content Security Policy was created to mitigate XSS attacks in the browser. But to be honest, I see it rarely being used because it is hard to retro-fit into an existing application and interferes with some of the libraries that are being used. Even in brand new applications it is often an afterthought, and the longer you wait, the harder it becomes to enable it.
And btw – getting CSP right might be harder than you think – check out this video about bypassing CSP.
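If you do want to start somewhere, a CSP is just a response header. As a minimal sketch (the policy values here are illustrative, not a recommendation – a real application needs to whitelist its actual script and style sources, or use nonces/hashes), in ASP.NET Core it can be set with a small inline middleware:

```csharp
// Illustrative only: a restrictive baseline policy.
// Real apps must extend this for their actual script/style/CDN sources.
app.Use(async (context, next) =>
{
    context.Response.Headers.Add(
        "Content-Security-Policy",
        "default-src 'self'; object-src 'none'; frame-ancestors 'none'");

    await next();
});
```

The hard part is not emitting the header – it is finding a policy that your frameworks and third-party scripts can actually live with.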
In addition, JS front-end developers typically use highly complex frameworks these days, where they not only need to know the basics of JavaScript/browser security, but also the framework-specific features, quirks and vulnerabilities. Not to mention the quality of the known and unknown dependencies pulled into those applications.
Oh and btw – given that the new guidance around SPAs and the Authorization Code Flow potentially allows refresh tokens in the browser, the token storage problem becomes even more interesting.
..so the conclusion turned out to be:
SPA may turn out to be impossible to completely secure.
Just like there is IETF guidance for native apps, we always thought there should be a similar document that talks about SPAs. But it seemed everyone was trying to avoid this.
Finally, the work on such a document has started now, and you can track it here.
What about Token Binding?
For me, the most promising technology for protecting tokens stored in the browser was token binding (and especially the OAuth 2.0/OpenID Connect extensions for it). It would have allowed tying the tokens to a browser/TLS connection – even if an attacker were able to exfiltrate the tokens, they could not be used outside the context of the original connection.
Unfortunately, Google decided against implementing token binding, and if the standard is not implemented in all browsers, it’s pretty much useless.
What other options do we have?
The above-mentioned IETF document describes two alternative architecture styles that result in no access tokens in the browser at all. Let’s have a look.
Apps Served from the Same Domain as the API
Quoting 6.1
For simple system architectures, such as when the JavaScript application is served from the same domain as the API (resource server) being accessed, it is likely a better decision to avoid using OAuth entirely, and just use session authentication to communicate with the API.
Some notes about this:
- This is indeed a very simple scenario. Most applications I review also use APIs from other domains, or need to share their APIs between multiple clients (see next section).
- This is also not new. Especially “legacy” applications often had local “API endpoints” to support the AJAX calls they sprinkled over their multi-page applications over the years.
- When traditional session authentication (aka cookies) is used, you need to protect against CSRF. Implementing anti-forgery for APIs is extra work, and while it is well understood, I often found it missing when doing code reviews.
The new kid on the block: SameSite cookies
SameSite cookies are a relatively new (but standardised) feature that prohibits cross-site usage of cookies – and thus effectively stops CSRF attacks (well, at least for cookies – but that’s what we care about here). They are supported in all current major browsers.
ASP.NET Core by default sets the SameSite mode to Lax – which means that cross-origin POSTs don’t send a cookie anymore – but GETs do (which pretty much resembles the standard anti-forgery approach in MVC). You can also set the mode to Strict – which prohibits GETs as well.
This can act as a replacement for anti-forgery protection, but is relatively new. So you decide.
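For illustration, a session cookie issued with these settings would carry the attribute on the wire roughly like this (cookie name and value are illustrative – ASP.NET Core derives the name from the authentication scheme):

```
Set-Cookie: .AspNetCore.Cookies=<opaque>; path=/; secure; samesite=lax; httponly
```

The browser then simply refuses to attach this cookie to cross-site requests, which is what defuses the classic CSRF attack.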
Bringing it together
OK – so let’s create a POC for this scenario, the building blocks are:
- ASP.NET Core on the server side for authentication and session management, as well as serving our static content
- Local or OpenID Connect authentication handled on the server-side
- Cookies with HttpOnly and Lax or Strict SameSite mode for session management (see Brock’s blog post on how to enable Strict for remote authentication)
- ASP.NET Core Web APIs as a private back-end for the SPA front-end
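The authentication part of these building blocks can be sketched like this (the authority and client ID are placeholders – the full solution linked below has the real values):

```csharp
// Sketch: cookie-based session + OpenID Connect login on the server side.
services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies", options =>
{
    options.Cookie.HttpOnly = true;               // not reachable from JS
    options.Cookie.SameSite = SameSiteMode.Lax;   // or Strict, see above
})
.AddOpenIdConnect("oidc", options =>
{
    options.Authority = "https://demo.identityserver.io"; // placeholder
    options.ClientId = "spa-bff";                          // placeholder
    options.ResponseType = "code";
    options.SaveTokens = true; // store tokens in the authentication session
});
```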
That’s it. This way all authentication is happening on the server, session management is done via a SameSite cookie that is not reachable from JavaScript, and since there are no tokens stored in the browser, all the client-side API calls don’t need to deal with tokens or token lifetime management.
The full solution can be found here.
Browser-Based App with a Backend Component
Quoting 6.2.:
To avoid the risks inherent in handling OAuth access tokens from a purely browser-based application, implementations may wish to move the authorization code exchange and handling of access and refresh tokens into a backend component.
The backend component essentially becomes a new authorization server for the code running in the browser, issuing its own tokens (e.g. a session cookie). Security of the connection between code running in the browser and this backend component is assumed to utilize browser-level protection mechanisms.
In this scenario, the backend component may be a confidential client which is issued its own client secret.
This is a much more common scenario and some people call that a BFF architecture (back-end for front-end). In this scenario, there is a dedicated back-end for the SPA that provides the necessary API endpoints. These endpoints might have a local implementation or in turn contact other APIs to get the job done. These other APIs might be shared with other front-ends and might also require a user-context – IOW the BFF must delegate the user’s identity.
The good news is that, technically, this is a really easy extension of the previous scenario. Since we already use OpenID Connect to authenticate the user, we simply ask for an additional access token to be able to communicate with the shared APIs from the back-end.
The ASP.NET Core authentication session management will store the access token in an encrypted and signed cookie, and all token lifetime management can be automated by plugging in the component I described in my last blog post. This allows the BFF to use the access token to call back-end APIs on behalf of the logged-on user.
One thing I noticed is, that you often end up duplicating the back-end API endpoints in the BFF to make them available to the front-end, which is a bit tedious. If all you want is passing through the API calls from the BFF to the back-end while attaching that precious access token on the way, you might want to use a light-weight reverse proxy: enter ProxyKit.
A toolkit to create HTTP proxies hosted in ASP.NET Core as middleware. This allows focused code-first proxies that can be embedded in existing ASP.NET Core applications or deployed as a standalone server.
While ProxyKit is very capable and has plenty of powerful features (e.g. load balancing), I use it for a very simple case: if a request comes in via a certain route (e.g. /api), proxy that request to a back-end API while attaching the current access token. Job done.
```csharp
app.Map("/api", api =>
{
    api.RunProxy(async context =>
    {
        var forwardContext = context.ForwardTo("http://localhost:5001");

        var token = await context.GetTokenAsync("access_token");
        forwardContext.UpstreamRequest.Headers.Add("Authorization", "Bearer " + token);

        return await forwardContext.Execute();
    });
});
```
I think it is compelling that, by combining server-side OpenID Connect, SameSite cookies, automatic token management and ProxyKit, your SPA can focus on the actual functionality and is not cluttered with login logic, session and token management. And since no access tokens are stored in the browser itself, we have mitigated at least this specific XSS problem.
Again, the full sample can be found here.
Some closing thoughts
Of course, this is not the silver-bullet. XSS can still be used to attack your front- and back-end.
But it is a different threat-model, and this might be easier for you to handle.
You are also doubling the number of round trips, which you might not find very efficient. And keep in mind that if you are using the reverse-proxy mechanism, you are not really lowering the attack surface of your back-end APIs.
But regardless if you are using OAuth 2.0 in the browser directly or the BFF approach, XSS is still the main problem.
Update
This article has been quoted or questioned along the lines of “this approach is better than tokens”. That was not the point, and I apologize if I wasn’t clear enough. It is an alternative to the “pure” OAuth 2.0 approach – I am not saying it is better or worse. The two approaches have different threat models, and you might be more comfortable with one or the other.
Maybe you also need to evaluate your architecture based on where the APIs live that you want to call. Would you bother with OAuth if all APIs are same-domain? Where is the tipping point? If all APIs are cross-domain, then it certainly is questionable whether they should be proxied.
Anyway – time will tell. SameSite cookies are still very new, but they certainly give you an interesting new option.