Opaque handles everywhere (okay, that simplifies what goes over the wire). Union types in protocol payloads - the spec calls these "polymorphic JSON", but in practice it means you have to branch on the type of a given field. Worse, nothing prevents two or more subtly different dictionaries from appearing in the same field, based on arbitrary/implicit conditions.
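To make that concrete, here's roughly what consuming code ends up doing - a made-up TypeScript sketch, with field names invented for illustration rather than taken from the spec:

  // Hypothetical shapes the same "key" field might carry (names invented).
  interface KeyByValue { proof: string; jwk: object }
  interface KeyByReference { proof: string; key_handle: string }
  type KeyField = string | KeyByValue | KeyByReference;

  // No discriminator field exists, so the client has to sniff the structure
  // of whatever arrived and branch on it.
  function resolveKey(field: KeyField): string {
    if (typeof field === "string") return lookupHandle(field);        // bare handle
    if ("key_handle" in field) return lookupHandle(field.key_handle); // handle wrapped in an object
    if ("jwk" in field) return importJwk(field.jwk);                  // key material inline
    throw new Error("unrecognised shape for 'key' field");            // the "subtly different dictionary" case
  }

  // Stubs so the sketch type-checks; real implementations would do actual work.
  declare function lookupHandle(handle: string): string;
  declare function importJwk(jwk: object): string;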
Subtle and surprising payload differences are pretty much guaranteed to introduce weird problems in the real world. And I'm not ruling out security problems either, because a bug in authorisation logic can easily generate tokens that are valid for the wrong scopes.
EDIT: There's this [1], but it only makes me ask more questions. The only rationale I can see from that document is “it would seem silly and wasteful to force all developers to push all of their keying information in every request”. Which makes me want to throw out oauth.xyz and never look at it again, because that looks like the authors have some absurd priorities in their protocol design.
[1] - https://medium.com/@justinsecurity/xyz-handles-passing-by-re...
OAuth transactions are "big" because they allow the use of RSA keys, which are large. The payloads would be smaller if the spec were simply opinionated and mandated a specific signature algorithm, such as Ed25519, whose keys are much smaller.
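For a rough sense of scale, a quick Node.js sketch comparing serialized public key sizes (exact numbers depend on encoding, but the gap is what matters):

  import { generateKeyPairSync } from "crypto";

  const rsa = generateKeyPairSync("rsa", { modulusLength: 3072 });
  const ed = generateKeyPairSync("ed25519");

  const rsaDer = rsa.publicKey.export({ type: "spki", format: "der" });
  const edDer = ed.publicKey.export({ type: "spki", format: "der" });

  console.log(`RSA-3072 public key: ${rsaDer.length} bytes`);  // roughly 420 bytes
  console.log(`Ed25519 public key:  ${edDer.length} bytes`);   // 44 bytes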
Protocols like SAML, OpenID, and OAuth aren't really protocols at all. They're protocol parts thrown into a bag that everyone can pull whatever they like out of. They support way too many cryptographic primitives and have far too many pluggable features.
Just yesterday I had to deal with an outage because a large enterprise ISV's SAML implementation doesn't support signing key rollover! You can have exactly one key, which means that after a key renewal all authentication breaks. You have to do big-bang coordinated rollouts.
That is typical of the kind of stuff I see in this space.
Everyone gets every part of these protocols wrong. Google does SAML wrong! Microsoft fixed one glaring security flaw in Azure AD, but neglected to back-port it to AD FS, because legacy on-prem security doesn't matter, right?
If Google and Microsoft can't get these things right, why are we working on yet more protocols that are even more complex!?
The web is full of middleboxes with crazy limitations, and OAuth2 is very good at triggering every one of them. Those limitations are also mostly unique, not under the control of either end, and often transient, so they usually aren't even understood - the problem is just assumed to be unsolvable. That alone is a big limitation that stops people from using OAuth2.
That said, I have never seen a case where crypto data was the cause of the bloat. Its size is so small compared to everything else that I'm not sure why anybody would even look at it. And indeed, the rationale I found on the site is about cryptographic agility... which is interesting, because you will find plenty of people claiming that this is an anti-feature that harms security far more than it helps.
Still, I'm not sure using references in the crypto data itself is a good thing. You get more requests, more infrastructure dependencies, and more metadata tracking, all to fix the bloat of a part of the protocol that is minor in size, and to gain cryptographic agility, which is a questionable feature at best.
Also, once they have references, why are they adding polymorphism too? Polymorphism is a hack that tries (but fails) to solve the same problem.
IIUC the 'JSON polymorphism' exists _to_ support handles - so that a field `foo` may contain either an object directly as data, or a string as a handle to that data.
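In other words, something like this (a minimal made-up sketch, not the spec's actual wire format):

  // A field that is either the value itself or a string handle referencing it.
  type ByValueOrHandle<T> = T | string;

  interface DisplayInfo { name: string; uri: string }

  // Hypothetical request: each field can be sent inline once,
  // then referred to by handle on later requests.
  interface TxRequest {
    display: ByValueOrHandle<DisplayInfo>;
    keys: ByValueOrHandle<{ jwk: object }>;
  }

  function deref<T extends object>(field: ByValueOrHandle<T>, lookup: (h: string) => T): T {
    return typeof field === "string" ? lookup(field) : field;
  }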
Sorry, which bottleneck is that?
But I do use it.
2) is the exact opposite of centralization and 1) is basically equivalent to dynamic linking which, while "centralization" in theory, is generally considered a good security practice.
You have a point that centralizing auth is not a goal of OAuth. But it is what people use it for. As nice as it would be, nobody is creating an ecosystem of public auth services.
Here's the place to check it out going forward: https://datatracker.ietf.org/wg/gnap/about/
And here's the draft just promoted to the WG: https://datatracker.ietf.org/doc/draft-ietf-gnap-core-protoc...
My positive attitude about OAuth* may be showing.
As odd as it sounds, this one I can actually understand. I'm pretty sure the designers come loaded with painful experience of request header bloat. They may want v3 to support completely stateless requests, and would rather not transmit large public keys, or possibly even client certificate chains, on routine requests.
For those cases I can see the benefit of being able to say "look up my details from this $ObjectID". When everything related to session authorisation is behind a single lookup key, the data likely becomes more easily cacheable.
It's a perfectly valid tradeoff for environments where compute and bandwidth cost way more than engineering time. For the rest of us...
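Concretely, on the resolver side I'd expect it to look something like this (a hypothetical sketch, assuming an in-memory cache in front of whatever registry stores client details; all names invented):

  // Everything about the client sits behind one opaque handle,
  // so a single cache entry covers it.
  interface ClientDetails { publicKeyJwk: object; redirectUris: string[]; displayName: string }

  const cache = new Map<string, ClientDetails>();   // stand-in for Redis/memcached

  async function resolveClient(
    handle: string,
    registry: { fetch(h: string): Promise<ClientDetails> },
  ): Promise<ClientDetails> {
    const hit = cache.get(handle);
    if (hit) return hit;                            // no key material on the wire, one lookup
    const details = await registry.fetch(handle);   // backing store hit only on a miss
    cache.set(handle, details);
    return details;
  }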
Not to mention that handles introduce state rather than allowing statelessness: not only does the AS now have to keep global state across all endpoints that may serve a given request, the client must also keep a local cache of resource -> handle(s). Retry/restart logic has to be implemented, cache invalidation has to be implemented, state has to survive restarts on both sides, etc. This is definitely stateful.
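Just the client-side bookkeeping alone ends up looking something like this (a sketch with invented names, to show the state that has to exist):

  // Resource -> handle map that has to be persisted, invalidated, and rebuilt.
  class HandleCache {
    private handles = new Map<string, string>();   // resource id -> handle

    get(resource: string): string | undefined {
      return this.handles.get(resource);
    }

    store(resource: string, handle: string): void {
      this.handles.set(resource, handle);
      this.persist();                              // must survive client restarts
    }

    invalidate(resource: string): void {
      this.handles.delete(resource);               // e.g. after the AS rejects a stale handle
      this.persist();
    }

    private persist(): void {
      // write-through to disk/db omitted; needed if handles outlive the process
    }
  }

  // Plus retry logic: if a cached handle is rejected, re-send the full object
  // and cache whatever new handle the AS returns.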
Would you please send your comments to the working group?
I came to understand OAuth2 much better when I realized that it exists to make the lives of big companies easier, and to make the lives of small developers possible. If BigCo only offers an OAuth2 API, then developers will figure it out because they have no choice. And from the point of view of big companies, what matters is that they implement something that meets their needs, which they can pretend is a standard.
Ambiguities give big companies the freedom to do the different things that they want to do while everyone claims, "We're following the standard!"
{ "key": { "KeyHandle": "myhandle" } }
and { "key": { "FullKey": "myfullkey" } }
They could just provide a schema and nobody would have to hand-roll anything on the wire end.
But I think the authors were focused on producing the shortest possible serialized JSON, no matter the implementation cost or the inability to use existing schema/IDL tooling. Which in my opinion is terribad for what is effectively a critical security standard.
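For comparison, the explicitly tagged form above maps straight onto a discriminated union that off-the-shelf schema tooling can validate - a TypeScript sketch using the KeyHandle/FullKey names from the example above, which aren't the spec's own:

  // With an explicit tag the union is machine-checkable: no structure sniffing,
  // and a JSON Schema / IDL can express it as a plain "oneOf".
  type KeyField =
    | { KeyHandle: string }
    | { FullKey: string };

  function resolve(key: KeyField): string {
    return "KeyHandle" in key
      ? lookupHandle(key.KeyHandle)   // dereference the handle
      : key.FullKey;                  // use the inline key material
  }

  declare function lookupHandle(handle: string): string;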