oauth.net

OAuth 3


On a quick glance, this looks like an implementation nightmare just waiting to happen.

Opaque handles everywhere (okay, that simplifies stuff going over the wire). Union types in protocol payloads - the spec calls these "polymorphic JSON", but the reality is you will need to branch on the type of a given field. Worse, nothing prevents having two or more subtly different dictionaries in the same field, based on arbitrary/implicit conditions.

Subtle and surprising payload differences are pretty much guaranteed to introduce weird problems in the real world. And I'm not ruling out security problems either, because a bug in authorisation logic can easily generate tokens that are valid for wrong scopes.

Yeah, I really don't get the Handle concept - what is this attempting to solve? Does anyone know where I can read up on the design decisions that led to this? This seems to introduce tons of implementation pains (statefulness, cache invalidation, 'polymorphic JSON', ...) while the only apparent benefit is a shorter wire format - but that's such a weird thing to optimize for.

EDIT: There's this [1], but it only makes me ask more questions. The only rationale I can see from that document is “it would seem silly and wasteful to force all developers to push all of their keying information in every request”. Which makes me want to throw out oauth.xyz and never look at it again, because that looks like the authors have some absurd priorities in their protocol design.

[1] - https://medium.com/@justinsecurity/xyz-handles-passing-by-re...

This is a solution to a problem of their own making.

OAuth transactions are "big" because they allow the use of RSA keys, which are large. The payloads would be smaller if the spec were simply opinionated and mandated a specific signature algorithm, such as Ed25519, which uses much smaller keys.

Protocols like SAML, OpenID, and OAuth aren't. They're not protocols at all. They're protocol parts thrown into a bag that everyone can pull whatever they like out of. They support way too many cryptographic primitives, and have far too many pluggable features.

Just yesterday I had to deal with an outage because a large enterprise ISV's SAML implementation doesn't support signing key rollover! You can have exactly one key only, which means after a key renewal all authentication breaks. You have to do big-bang coordinated rollouts.

That is typical of the kind of stuff I see in this space.

Everyone gets every part of these protocols wrong. Google does SAML wrong! Microsoft fixed one glaring security flaw in Azure AD, but neglected to back-port it to AD FS, because legacy on-prem security doesn't matter, right?

If Google and Microsoft can't get these things right, why are we working on yet more protocols that are even more complex!?

I have personally faced plenty of problems caused by the OAuth2 large wire size.

The web is full of middle boxes with crazy limitations. And OAuth2 is very good at triggering each one of them. They are also mostly unique, not under the control of the ends, and often transient, so they most often aren't even understood; the problem is just assumed unsolvable. That alone is a big limitation that stops people from using OAuth2.

That said, I have never seen a case where crypto data was the cause of the bloat. Its size is so small when compared to everything else that I'm not sure why anybody would even look at it. And indeed, the rationale I found on the site is about cryptographic agility... which is interesting, because you will find plenty of people claiming that this is an anti-feature that will harm security much more than help it.

Doesn't that issue come from the fact that OAuth2 state mostly lives in GET URL data (redirects/callbacks) and request headers (bearer tokens), vs. POST body (which is something OAuth3 does seem to get right)?
Yes, and the move into references is a very welcome one. It will solve one of the large bottlenecks for OAuth use.

Still, I'm not sure using references in the crypto data itself is a good thing. You will get more requests, more infrastructure dependency, and more metadata tracking, all to fix the bloat of a minor (in size) part of the protocol and to get cryptographic agility, which is a questionable feature at best.

Also, once they have references, why are they adding polymorphism too? Polymorphism is a hack that tries (but fails) to solve the same problem.

I agree with your points :).

IIUC the 'JSON polymorphism' exists _to_ support handles - so that a field `foo` may either directly contain an object as data, or a string as a handle to that data.

What do you use instead of OAuth2 yourself?
The alternative to OAuth2 is usually not centralizing auth (both kinds).

But I do use it.

How is OAuth centralizing auth? It's generally used for one of two things: 1) Single Sign-On - something that generally increases the security of applications under one organization, where authentication has to be shared one way or another, and 2) "social login" - something that takes a website from being its own and only auth provider to supporting multiple external providers.

2) is the exact opposite of centralization and 1) is basically equivalent to dynamic linking which, while "centralization" in theory, is generally considered a good security practice.

Humm. #1 is centralizing all of your internal auth into a single service, and #2 is centralizing all of the internet auth into Google and Facebook.

You have a point that centralizing auth is not a goal of OAuth. But it is what people use it for. As nice as it would be, nobody is creating an ecosystem of public auth services.

This used to be called TxAuth. You can find a lot of the discussions online through search:

- https://ietf.topicbox-scratch.com/groups/txauth

- https://www.ietf.org/mailman/listinfo/txauth

> shorter wire format - but that's such a weird thing to optimize for.

As odd as it sounds, this one I can actually understand. I'm pretty sure the designers come loaded with painful experience on request header bloat. They may want v3 to support completely stateless requests, and would rather not transmit large public keys or possibly even client certificate chains on routine requests.

For those cases I can see the benefit of being able to say "look up my details from this $ObjectID". When everything related to session authorisation is behind a single lookup key, the data likely becomes more easily cacheable.

It's a perfectly valid tradeoff for environments where compute and bandwidth cost way more than engineering time. For the rest of us...

But with the current spec, IIUC, it's up to the AS to provide handles for future client reference - so the burden of allowing for smaller requests in the future falls on identity provider software, not the client. And when a spec says a side 'MAY' do something, experience says they almost never do, unless it makes things simpler for them. And having to store extra, global data is something no-one really wants to do.

Not to mention handles introduce state rather than allowing for statelessness: not only does the AS now have to keep global state across all endpoints that may serve a given request, but the client must also keep a local cache of resource -> handle mappings. Retry/restart logic has to be implemented, cache-clearing logic must be implemented, state has to be kept between restarts on both sides, etc. This is definitely stateful.

Do you mean the handle concept in general? I have a theory that people convert to handles to "sanitize" APIs that used to return a structure (and therefore an exploitable pointer).
This is sounding like a repeat of the ambiguity from the other OAuth specs, leading to bugs, non-interoperability, and security problems.

Would you please send your comments to the working group?

You say it like that ambiguity is a bad thing.

I came to understand OAuth2 much better when I realized that it exists to make the lives of big companies easier, and to make the lives of small developers possible. If BigCo only offers an OAuth2 API, then developers will figure it out because they have no choice. And from the point of view of big companies, what matters is that they implement something that meets their needs, which they can pretend is a standard.

Ambiguities give big companies the freedom to do the different things that they want to do while everyone claims, "We're following the standard!"

What's even sadder about this is that there are oven ready JSON serialization formats that support unions natively. Avro, for example, would serialize to something like

    { "key": { "KeyHandle": "myhandle" } }

    { "key": { "FullKey": "myfullkey" } }

They could just provide a schema and nobody would have to implement anything on the wire end.
Same with protobuf JSON serialization (oneof, enforcing exactly one field of many is set).

But I think the authors were focused on using the shortest possible serialized JSON, no matter the implementation-difficulty cost and the inability to use existing schemas/IDLs. Which in my opinion is terribad for what is effectively a critical security standard.

Lots of negativity around the polymorphic JSON types. I don't think this will be a problem at all in practice. It's fairly simple to limit the conditional that checks the type to exactly one place via a parsing step. This is basic OO - using factories to abstract conditionals. It will also mostly be handled by client libraries.
> Polymorphic JSON. The protocol elements have different data types that convey additional contextual meaning, allowing us to avoid mutually exclusive protocol elements and design a more succinct and readable protocol.

Yeah... let's not please.

The number of comments here specifically upset with this part of the current design is a bit discouraging, but not necessarily surprising.

Yes, many mainstream languages have near-zero support for Tagged/Discriminated Unions or Enums with Associated Data or Algebraic Data Types (pick your favorite name for the same concept). This is a limitation of those languages, which should not force a language-agnostic protocol to adopt the lowest common denominator of expressiveness.

Consider the problem they're avoiding of mutually exclusive keys in a struct/object. What do you do if you receive more than one? Is that behavior undefined? If it is defined, how sure are you that the implementation your package manager installed for you doesn't just pick one key arbitrarily in the name of "developer friendliness", leading to security bugs? This seems like a much more bug-ridden problem to solve than having to write verbose type-switching Golang/Java.

Implementing more verbose deserialization code in languages with no support for Tagged Unions seems like a small price to pay for making a protocol that leaves no room for undefined behavior.

To be clear, _many_ statically typed languages have perfect support for this concept (Rust/Swift/Scala/Haskell, to name a few).

> To be clear, _many_ statically typed languages have perfect support for this concept (Rust/Swift/Scala/Haskell, to name a few).

No they don't, at least in the way you're selling it. The "limitation" here is JSON which doesn't attach type information. You're going to have to implement some typing protocol on top of the JSON anyway which will face similar problems to the ones you raised (unless you do some trait based inference which could be ambiguous and dangerous).

If they were Enums/Unions over a serialization protocol like protobuf, maybe your case makes sense. Even then, I'm guessing a large % of the OAuth 3 requests will go through Java/Golang libraries, so on a practical level this is a bad idea too.

I agree that having multiple different types of "object values" share one JSON key with no explicit "type" tag is asking for trouble with extensibility and conflicts.

That said, I think the constructive suggestion would be: "add a type tag to all objects in a union" (something suggested elsewhere in this thread).

Their "handles" can still be "just a string" to save bandwidth in the common case, arrays can still represent "many things", and objects require a "type" field to disambiguate.

Most of the comments below don't mention the (real and important, but easily solvable) issue you've brought up, however. They primarily fall into one of two buckets:

- It's hard to work with data shaped like this in my language (ex: java/go)

- It's hard to deserialize data shaped like this into my language that has no tagged unions (ex: java/go)

My biggest counterpoint to all of these complaints is: The fact that your language of choice cannot represent the concept of "one of these things" doesn't change the fact that this accurately describes reality sometimes.

A protocol with mutually exclusive keys (or really anything) by convention is strictly more bug-prone than a protocol with an object that is correct by construction.

A protocol which is cumbersome to implement in many languages. Hmmm, what could possibly go wrong? Partial support, late support of extensions, bugs, ...

IMHO: a very bad choice. Complicated basic and higher-level protocol elements are the death of protocols (remember SOAP). I follow the train of thought of not restricting yourself too much, but if (e.g.) Java or C++ cannot implement it easily, it's not a good idea.

Protobuf supports "oneof" which is also cumbersome to implement in these same languages but all of them support it (with some extra LOC and no exhaustiveness checking watching your back).

Java/Go/C++ are perfectly capable of parsing a "type" key and conditionally parsing differently shaped data. If you make a programming mistake here, you'll get a parse error (bad, but not a security problem). The pushback seems to be that a Java/Go/C++ implementation adds LOCs and won't gain much by doing this extra step, so let's make the protocol itself match their (less precise) data representation.
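For what it's worth, the "type"-key approach really is only a few lines in Go. A rough sketch (the tag names here are made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Envelope carries an explicit tag, so no probing of the data's
// shape is needed to decide how to parse it.
type Envelope struct {
	Type string          `json:"type"`
	Data json.RawMessage `json:"data"`
}

func parse(raw []byte) (interface{}, error) {
	var e Envelope
	if err := json.Unmarshal(raw, &e); err != nil {
		return nil, err
	}
	switch e.Type {
	case "handle":
		var h string
		err := json.Unmarshal(e.Data, &h)
		return h, err
	case "full_key":
		var k map[string]interface{}
		err := json.Unmarshal(e.Data, &k)
		return k, err
	default:
		// Unknown tags fail closed instead of being guessed at.
		return nil, fmt.Errorf("unknown type %q", e.Type)
	}
}

func main() {
	v, err := parse([]byte(`{"type":"handle","data":"myhandle"}`))
	fmt.Println(v, err) // prints "myhandle <nil>"
}
```

A mistyped or missing tag produces an explicit error here, rather than an arbitrary interpretation of the payload.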

FWIW there is work towards improving Java in this regard: https://cr.openjdk.java.net/~briangoetz/amber/pattern-match....

But isn't that elementary OOP polymorphism? It all depends on whether the type is annotated or whether it needs to be determined from the data by probing. And type annotations are present in the protobuf parts I remember :).
> This is a limitation of those languages, which should not force a language-agnostic protocol to adopt the lowest common denominator of expressiveness.

It's an intentional decision made by those languages in order to focus on other things. If your intent is to be language-agnostic, then yeah, going with lowest common denominator concepts is exactly what you need to do. If you just want to write a Haskell auth implementation using your favorite pet language features, then write a Haskell auth implementation.

It's not the same as union types, but you can also often achieve polymorphic serialisation with any OO language, through the use of interfaces.
I'm in vigorous agreement here--polymorphic JSON is far more difficult to deserialize safely, and every instance I've seen of this in the wild has been the output of careless or deeply ignorant makers.
I'm not sure I get this, does the data type change depending on context? Is that what they mean?
It means sometimes you'll get:

    { foo: { bar: "baz" }}
and other times you'll get

    { foo: "something else" }
Good luck!
Polymorphic JSON is such a PITA for strongly typed languages.

    var data map[string]interface{}
    _ = json.Unmarshal(raw, &data) // error handling elided
    switch v := data["foo"].(type) {
    case string:
        // "foo" was a handle string
    case map[string]interface{}:
        // "foo" was a full object
        _ = v
    }
imagine that for every key...
In my humble opinion, Golang is pretty verbose in a bad way when it comes to JSON. Rust and the crate serde_json are also strongly typed and it's a lot better.
Yeah, having done this, it's extremely verbose if you have all your error handling in there, due to the verbosity of dealing with JSON and explicit return values instead of exceptions. By far my greatest annoyance is that []interface{} cannot be directly cast to []concreteType. Having to make a sized slice and type-assert each value is annoying. Requiring validation of the values means even more "if err != nil" fun.

Many Go advocates seem to consider the verbosity a feature, because it's explicit and forbids any clever-but-confusing tricks.
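A minimal sketch of the slice-conversion chore mentioned above (nothing OAuth-specific, just the general Go pattern for data that arrived as []interface{}):

```go
package main

import "fmt"

// toStrings shows the boilerplate: []interface{} cannot be cast to
// []string in one step; each element needs its own type assertion.
func toStrings(raw []interface{}) ([]string, error) {
	out := make([]string, 0, len(raw))
	for _, v := range raw {
		s, ok := v.(string)
		if !ok {
			return nil, fmt.Errorf("expected string, got %T", v)
		}
		out = append(out, s)
	}
	return out, nil
}

func main() {
	vals := []interface{}{"read", "write"}
	s, err := toStrings(vals)
	fmt.Println(s, err) // prints "[read write] <nil>"
}
```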

How would you express deserializing these 'polymorphic JSON' objects using serde/Rust?
Using a rust enum I guess.
You're right, this does seem to work [1]. I wasn't aware that serde would attempt to deserialize multiple enum variants until something matches.

[1] - https://serde.rs/enum-representations.html#untagged

Yeah that's a rather common scheme out there, so Serde does provide built-in support for this deserialisation. Probably better for deserialisation performance to use properly tagged enums, but if you don't have a choice Serde's got your back.
> XYZ’s protocol is not just based on JSON, but it’s based on a particular technique known as polymorphic JSON. In this, a single field could have a different data type in different circumstances. For example, the resources field can be an array when requesting a single access token, or it can be an object when requesting multiple named access tokens. The difference in type indicates a difference in processing from the server. Within the resources array, each element can be either an object, representing a rich resource request description, or a string, representing a resource handle (scope).

This is horrible.

This is definitely not a protocol dreamed up by, say, Java developers.

Obviously it's doable in Java, but I'm hard-pressed to imagine a Java developer would think of such a thing.

The only time I've ever encountered an API that used it extensively, it was done by a company that does all their implementation in Java.

My best guess at what happened, based on the shape of the API, is that they implemented it by taking their pre-existing domain model, which had a fairly deep subclassing hierarchy, liberally sprinkled some annotations from com.fasterxml.jackson.annotation, and dumped the result straight onto the wire.

You could absolutely do an object-oriented codebase where two different subclasses have fields with the same name and different types, and, depending on how you structure your code, it might not be too painful. And it's fairly easy to imagine someone serializing a structure like that to JSON without ever meditating on the fact that JSON won't retain all the type information.

Ironically, the end result was an API that is nigh-impossible to consume from Java. I ended up writing a façade in Python.

I've also seen little bits of this happen in the internal APIs at my current company. Also a Java shop, also a result of trying to directly connect an internal object model to the API. I've never seen it done in an API implemented in a dynamic or functional language.

I hadn't considered the one-way nature of dumping Java classes to JSON.

Mainly because I work with classes that are serialized both to and from JSON, rather than having Jackson annotations added much later.

I can see it now. Horrifying! I'm sorry for your troubles.

My take-away is this: Protocols should always be defined independently of any existing code.

The code-first approach only works well when you're doing something self-contained. Which, an API, almost by definition, is not.

I generally try to design APIs from a consumer perspective, and then usually I end up with something RESTful. Let the server do whatever it has to do, you know?
Lovely - I can't wait to see how horrible something like that is to implement in a language like Go.
You have to implement UnmarshalJSON. Between each attempt to deserialize into a possible struct, be careful to return on errors that are not JSON serialisation errors (for example caused by reading from the underlying Reader, etc.)

It's ugly and verbose but there is no need to use empty interface.

Yeah, pretty much exactly that.

The only time I've had to deal with it in the wild was a terrible experience. As a consumer, you couldn't make any decisions with confidence without first making a careful study of the documentation. For every. single. decision.

YAML has !Tag syntax for expressing polymorphism. Shame that nobody uses YAML because the standard is so complex, and shame that AppVeyor uses a single-element key-value mapping instead of a tag for this purpose...
Yeah... This is the same sort of thing you see with, say, ActivityPub, that makes it a massive pain, if not totally impossible to implement it in a statically-typed language.
I'm personally more excited about this year's OAuth 2.1 draft[1], since it aims to reduce the number of RFCs one needs to review and understand in order to implement a best practices OAuth client.

[1] https://oauth.net/2.1/

Yes, I've been keeping an eye on this. Haven't seen much action on this for a few months. I like that they are formally deprecating the implicit grant!

If folks are interested in the nitty gritty, I wrote a blog post a few months back: https://fusionauth.io/blog/2020/04/15/whats-new-in-oauth-2-1...

And this is a great podcast with one of the authors, Aaron Parecki, talking in detail about the changes: https://identityunlocked.auth0.com/public/49/Identity%2C-Unl...

There are 2 major problems that must be addressed:

1. Using OAuth to sign-up often means disclosing private data you can (and would normally prefer to) keep secret if you go the bare e-mail sign-up way. E.g. contacts list, exact date of birth, etc. - This is why I (as a user) stopped using OAuth for new accounts.

Kind of the same used to apply to e.g. Android apps. I mean the "give an app all the permissions it wants or gtfo" anti-pattern, which ought to be abolished. The user should be allowed to continue after silently denying/revoking access to any data (except what is absolutely essential for the core function), or after manually specifying whatever values they want.

2. It isn't always easy to decouple an OAuth-based account from the social network account, especially in case you lose access to the latter. - This is why I (as a user) migrated all OAuth-based accounts I had to the good old e-mail way.

What you are talking about is OpenID Connect, not OAuth. It does use OAuth 2.0 under the hood, but they are two separate protocols.

OAuth is an authorization protocol. It can be used (for example) to give Facebook access to your Flickr photos without having to give out your Flickr username and password to Facebook or share API tokens, and have a standardized way to revoke access when you realise Facebook scraped all your private photos.

Actually, OpenID Connect is an SSO product. Whilst it can be used to federate your auth to other providers, you're probably thinking of OpenID, which is "social login".
No, parent commenter is correct: OpenIDConnect is an extension protocol that adds a user-authentication (user metadata) layer on top of OAuth 2, which is a bare authorization protocol (access tokens are opaque and don't say anything about the user).

Besides the similar names, OpenIDConnect has virtually nothing to do with the older OpenID protocols. Old-style OpenID has been deprecated and removed by almost all web properties today (e.g. https://meta.stackexchange.com/questions/307647/support-for-...)

I'm not really sure I understand the complaint then?

So the problem the OP is worried about is a SaaS provider using OIDC to federate to corp SSO and leaking data such as that within the id_token?

Otherwise, what's the leak here?

iiuc, the complaint is still valid -- it's just that OIDC, not OAuth, is what causes the attributes to be in the flow.
Your first point is actually not an OAuth issue, but how the provider designed it. You can build an OAuth provider that doesn't need to disclose anything (not even email).
Is this even a provider issue? If an application asks for scopes=[profile] then the OAuth provider has two choices -- categorically deny it, or ask you to authorize it. They all ask you to authorize it, and you can say no, and then the app doesn't work because the developers decided that you can't use their app unless you give them your profile information.

The app could easily ask you to check a checkbox next to each scope, and then write separate code for each combination of checkboxes. They decided not to do that because it's probably not worth your business if you don't want to give them full access. (Honestly, I click a lot of things on HN that ask for way too many scopes, and then I close the window and forget what it was. But the calculation was done -- they don't need me as a user or customer. I can live with that.)

I guess what people want is an IDP that will give applications fake data when you deny a scope. But no application developer wants to deal with that complexity, so they'd never integrate a provider that does that. (They probably moved away from email+password because of all the fake emails that people provide.)

On the other hand, it's mandatory for iOS apps to use Apple's sign-in which auto-generates a fake email address for you. So I suppose some progress is being made. (I have an iPhone but I've never seen this supposedly mandatory OAuth provider. I only know about it from reading HN. So maybe it doesn't actually exist? I have no idea really.)

Auto-generating a single-use profile, or letting you choose a pseudonym, is absolutely a compromise that more identity providers should implement.
Yes it is really provider issue. Apple implemented it and so can any other provider.

I also don't think people want an IDP that provides fake data when you deny a scope. That's a bad implementation IMHO. Saying no means you don't authorize access for that scope, not that fake data should be sent. Applications should deal with it.

You are fighting conflicting constraints, though, and that's the underlying problem. Application developers won't use an IDP that protects user data. They want that data, that's why they wrote their app! Because nobody would use such an IDP (at least not without being forced to in order to be on a large platform), nobody will write such an IDP.

I'm actually working on an open-source IDP in my spare time, and to me this sounds like something to seriously consider doing right. I appreciate the idea and the discussion. I doubt anyone in the real world will care, though. (Sometimes you need to get the early adopters that do care about these things, though :)

As a user I don't really care about the way they build it. I care about the spec to deny them forcing me to disclose the data.

I once tried to sign up with Google and it asked me to allow (with no option to deny but continue) sharing my specific personal details. I cancelled and never used this technology ever since. I didn't have to specify the same details (which Google was going to share) when signing up with an e-mail address.

The spec should discourage sharing details beyond necessary, prevent any details from being shared silently and ensure user can always deny and continue.

But there's nothing in the spec that requires you to disclose that data to begin with.

And there's nothing they could write in the spec to deny that besides a perfunctory "please don't do that" which companies could ignore without consequence.

Sure they could. What you allow or deny would be enforced by the identity provider. The relying party simply would not receive the data and could not access it.

However, that’s really about OpenID, not about OAuth.

These are treated as permissions in the AAD OAuth model. Your issue seems to be with the Google and Facebook implementations, not the spec.
The spec could say something like "a client may ask for extended information but can't demand it unconditionally and must gracefully handle situations when access to particular fields is denied".
But that's already possible, right?

The problem is that you can't make _everything_ optional, or else the user can deny everything and the application then has to tell the user "You denied X, but we really need it to proceed. Try again...", which is a definitively worse experience than having the grant request say "here's what this app is asking for".

This is anticipated by scope requested by the client being able to be ignored by the authorization server. This appears in the AAD flow for the user as a list of toggles. The application has to handle the case where the scope is less than what is listed - this is all in section 3.2. Actually defining what data or permissions is bound to what scope is rightfully beyond the goals of the specification.
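As a sketch of the client-side handling that section describes, assuming the space-delimited scope strings of OAuth2 (RFC 6749): the app compares what it asked for against what was actually granted and degrades accordingly.

```go
package main

import (
	"fmt"
	"strings"
)

// grantedSubset reports which requested scopes were actually granted,
// since the authorization server may narrow the requested scope.
func grantedSubset(requested, granted string) (have, missing []string) {
	got := make(map[string]bool)
	for _, s := range strings.Fields(granted) {
		got[s] = true
	}
	for _, s := range strings.Fields(requested) {
		if got[s] {
			have = append(have, s)
		} else {
			missing = append(missing, s)
		}
	}
	return have, missing
}

func main() {
	have, missing := grantedSubset("profile email contacts", "profile email")
	fmt.Println(have, missing) // prints "[profile email] [contacts]"
}
```

The app can then disable only the features backed by the missing scopes instead of refusing to proceed.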
OAuth does what you want, but also does what you don't want.

This is like Android allowing apps to ask for permissions they don't "need" because, well, surveillance and user-data brokerage is the business model that most are in, and the reason many apps are "free" or "cheap" in the first place, crowding out more honest business models.

Maybe there could be an initiative for apps and sites without predatory practices, like "apps/webs needing no personal info from you"

These are two great discussions to raise in the working group communications. If you go through the link that they've shared, you can sign up directly and participate in the conversation about how they architect the v3 rebuild.
OAuth isn't solely about social networks. There are a number of corporations that use it for delegated authorization. A financial aggregator and a bank, for example. A large number of SaaS providers support G-Suite logins for enterprise customers as well.
OAuth2 / OIDC is, next to SAML, essentially the only serious identity provider protocol in the single sign-on space.

Ignoring LDAP and Active Directory for now.

Apparently, still not going in the direction of OpenID where the end-user specifies (in an open-ended way) his authorization provider instead of choosing from a handful of big well-known providers (Google/Facebook/Github) handpicked by the relying party.

Not surprised, but still disappointed.

OpenID and OAuth are different things though? Sure, OpenID Connect is built on top of OAuth 2.0, but OAuth 2.0 is a general authorization solution. You couldn’t just interact with arbitrary APIs so there’s really no point in creating a bring-your-own-API-server thing.
They're different things though, aren't they?

OAuth is about getting access to something, and usually part of that is proving to some authorization server that you are you (ie what OpenID is about), no?

Do you mean you'd like OAuth to tackle the "you are you" part as well?

That's probably like 30% of the uses of OAuth (e.g. granting Azure Pipelines access to your GitHub repos). 70% is just outsourcing identity and authentication (log in with Google / Facebook / etc.) In those cases the only data they access is your email, profile image, etc.

As a website developer I would definitely appreciate something like OpenID but actually usable/popular. Having to implement a ton of "log in with"s sucks, as does implementing email based login.

> Having to implement a ton of "log in with"s sucks, as does implementing email based login.

This is kind of auth0's--but also most security token service things--raison d'etre: your app trusts just one authority and supports just one protocol, shunting any unauthenticated users to it, letting it handle the transaction with trusted identity providers.

I was excited about OpenID around 2010. I wasn't aware it was still possible to use; most of the services I've looked at either support OIDC (built on top of OAuth) or SAML.

How could the specification support letting the end-user pick their authorization provider? Should the RC suggest the AS instead of the RS doing so?

For everyone that has comments and questions, the best way to get them discussed and considered is to join the IETF working group mailing list and starting to participate.

As a student I've previously sat on a couple of mailing lists for the academic benefit of learning from some really smart/dedicated people. Joining and participating is open and just requires you to sign up. The signup link is on the announcement page above ^

I wish they wouldn't use mailing lists. Keeping track of threads/topics must be a nightmare.
mailing lists are actually great for keeping track of threads and topics.

and are pretty much the only open technology that is sufficiently technology-agnostic and interoperable.

what should be the alternative, facebook comment threads?

The alternative is newsgroups and NNTP.

They are open, technology agnostic, interoperable, and most news readers are way better at threaded discussions involving more than two people than almost every email client.

You sound like someone who never used GitHub/GitLab/Jira/Any forum which offers categories/topics/threads and their respective issue tracking/organizing features.
GitLab issues?
You can easily branch mailing thread into multiple separate ones (happens regularly on gnu mailing lists for example). Compared to that, threading in all of the git forges sucks hard.
You can easily search for open issues, tag/label them, manage milestones - you know, project stuff. Plus you get a web interface that's a bit more user friendly than mailman's. You can see what's going on, how many open issues there are.

How hard is it to open N separate issues and link them to the original one? It's exactly as much effort as sending new emails.

TC39 uses GitHub: https://github.com/tc39/

There's also the IETF datatracker, and various sites cobbled together to show the mail threads (eg. https://mailarchive.ietf.org/arch/browse/acme/ ) and states of various RFCs. And basically to manage work.

Email is great, and it's enough for IETF workgroups, but it's just a communication channel, it's far from an efficient tool to organize (track, show, share, plan) work.

They are not.

Gitea, GitLab, GitHub, Bitbucket, Discourse literally everything else similar to those is more usable than the pieces of turd that mailing lists are. I instantly assign a negative score in my mind to all projects that still find them in any way usable.

> I instantly assign a negative score in my mind to all projects that still find them in any way usable.

Sounds like the filtering works!

Yeah, I can be grateful, it's like a big red warning color that the project members are gatekeepy and don't wish to keep up with the times.

Don't come crying when you can't find any contributors, it's your own undoing.

I don't think it's gatekeepy to say that not everything has to be attractive to literally everyone. We are not robots, we are individuals with personal preferences and comforts. Those that grew up with mailing lists naturally find it more comfortable for them. I'm sure those who grew up with e.g Github feel the same way about that.
> I don't think it's gatekeepy to say that not everything has to be attractive to literally everyone.

It really is gatekeepy to keep something that is only attractive to a very select group of people.

> Those that grew up with mailing lists naturally find it more comfortable for them.

Yeah, which is exactly why I called it a massive red flag. It's the unwillingness to suffer short-term discomfort over long-term benefit of new contributors and overall better user experience and accessibility.

Depends on your email client and whether people start a new thread for each topic, I think. Email has more than enough metadata to be displayed like reddit threads, or slack/zulip messages, etc. if you so choose.
Benefit of a mailing list is that you have a wonderful distributed history of a thing, don’t have to own & maintain a large website/app, and everyone has some way to have and use email for free
I really wanted to like OAuth but implementing it is a nightmare!

When can we just have client side certificates? That would be a great way to deal with most of the problems that emailing a "magic login link" (or just normal email based accounts) doesn't solve.

> I really wanted to like OAuth but implementing it is a nightmare!

Not only is the spec itself challenging, it leaves enough ambiguity and rough edges that most providers end up extending it some way that makes it hard to standardize. Most commonly: how to get refresh tokens (`offline_access` scope, `access_type=offline` parameter?), and how/when they expire (as soon as you get a new one? as soon as you've received 10 new ones? on a set schedule?)
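The provider drift described above can be sketched concretely. This is an illustration only: the endpoint URL and client values are made up, but the two conventions shown (the OIDC-style `offline_access` scope vs. a dedicated `access_type=offline` parameter) are the ones mentioned in the comment.

```python
# Two incompatible ways providers let you request refresh tokens.
# URLs and client IDs here are invented for illustration.
from urllib.parse import urlencode

def auth_url_scope_style(base, client_id, redirect_uri):
    # Convention 1: ask for offline access via a scope value.
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid offline_access",
    }
    return f"{base}?{urlencode(params)}"

def auth_url_param_style(base, client_id, redirect_uri):
    # Convention 2 (Google-style): a dedicated query parameter instead.
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",
        "access_type": "offline",
        "prompt": "consent",
    }
    return f"{base}?{urlencode(params)}"

print(auth_url_scope_style("https://idp.example/authorize", "abc", "https://app.example/cb"))
print(auth_url_param_style("https://idp.example/authorize", "abc", "https://app.example/cb"))
```

A generic OAuth client library has to special-case both, which is exactly why "standard" integrations end up provider-specific.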

And that's not to mention how OAuth gets extended to handle organization-wide access. Anyone that's dealt with GSuite/Workspace Service Accounts or Microsoft Graph Application permissions knows what a pain that is.

This is exactly why I built [Xkit](https://xkit.co), which abstracts away the complexity of dealing with all the different providers and gives you a single API to retrieve access tokens (or API keys) that are always valid. Not everyone should have to be an OAuth expert to build a 3rd party integration.

Are you talking about RFC 8705 ( https://www.rfc-editor.org/rfc/rfc8705.html )? I've researched this a bit and heard that deployment is problematic.

From a brief search, it looks like Let's Encrypt doesn't have great support for them ( https://community.letsencrypt.org/t/can-i-create-client-cert... ), so you are stuck setting up a private CA?

Have you set up client side certs? I'd love to hear your experience if so.

BTW, I'd defer implementing OAuth to a library or specialized piece of software (full disclosure: I work for a company providing this). There are a number of options, paid and open source out there.

> Have you set up client side certs? I'd love to hear your experience if so.

All of Estonia and a few other countries use them daily. For logging into banks, Craigslist-equivalents, online stores, service providers, etc.

Interesting! Why does the distinction of a country matter here? I mean - why would using client side certs be something a country as a whole uses, as opposed to a certain type of company or something? Does it have to do with some sort of national firewalls or anti-encryption laws?
Some countries implement verified authentication schemes for their inhabitants that can be linked to both government and private services.

I.e. you have one login to use when filing taxes, getting health data, social security, interacting with your local school etc.

It has to do with widespread deployment and a central trust authority - a guarantee that the specific citizen is the one holding that citizen's cert. Service providers don't have to deal with the massive pain that is identity verification; there's no cumbersome stuff like faxing someone a gas bill to prove their identity.
In my opinion, client certificates are great, you can let existing crypto infrastructure deal with the problem of "who is this user?".

The biggest problem is around revocation. You need to have some central revocation list and make sure that all of the users of your PKI are keeping that list up-to-date in production, which can be difficult if you do not plan for that from the start.
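To make the revocation point concrete, here is a minimal sketch of the verifier side. Everything here (serial numbers, the fetch callback, the staleness policy) is invented for illustration; a real deployment would download a signed CRL or use OCSP.

```python
# Sketch: every verifier in the PKI must consult an up-to-date
# revocation list in addition to validating the certificate chain.
import time

class RevocationList:
    def __init__(self, max_age_seconds=3600):
        self.revoked = set()
        self.fetched_at = 0.0
        self.max_age = max_age_seconds

    def refresh(self, fetch):
        # `fetch` stands in for downloading the CRL from the CA.
        self.revoked = set(fetch())
        self.fetched_at = time.time()

    def is_stale(self):
        return time.time() - self.fetched_at > self.max_age

    def check(self, serial):
        # Fail closed: a stale list means we can't make a sound decision.
        if self.is_stale():
            raise RuntimeError("CRL is stale; refusing to decide")
        return serial not in self.revoked

crl = RevocationList()
crl.refresh(lambda: ["1002", "1007"])   # pretend CA download
print(crl.check("1001"))  # True: not revoked
print(crl.check("1007"))  # False: revoked
```

The hard operational part is the `refresh` path: making sure every verifier actually runs it on schedule, and deciding what to do when the CA is unreachable.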

Not sure if you're referring to a particular spec or something, but we used client certificates as a 2nd factor to control access to an extranet web app, almost 15 years ago, long before OAuth, and when 2FA was only just beginning to come into existence.

From a security standpoint, it's pretty great. But the reality of generating keys and signing and distributing certificates was horrible, and our users were confused and hated it.

How would you solve key generation even now - assuming the client generates the key, is it locked to that browser on that machine? How do you generate a CSR (certificate signing request)? How do you send the signed certificate to the user? How does the user install the certificate? Again, does that mean the user can only access your app from the machine they installed the certificate to?

PKI is hard, mainly because of the distribution problem.

I"m not sure exactly what client-side certificates means here, but I have long wondered why we can't just use public key/private key authentication for most logins. Is it the same?
Before Chrome, all popular web browsers had a user interface for installing client side HTTPS certificates for user authentication, and a very small number of websites supported it. After Chrome became popular, those sites were forced to switch to a different form of second factor authentication, and it's fallen almost completely out of use.

Part of the reason was that the user interfaces for installing certificates were terrible, and websites needed to have guides on how to use it in each browser.

Thanks. I’m still not clear what the authentication method is, but I don't see why we can’t have a one click browser button “give this site my public key” and another “authenticate to this site with my private key”.
Who gives you the private key? Is it generated on the device? How do you move the keys to your different devices? I can end up working at any of 20 computers on a given day, not counting my personal devices.
Not sure what the best solution is, but some thoughts. First, you definitely want a different keypair per device.

One approach is to just supplement passwords. You could use a password (2FA, etc) to log in, then the site gives you the option of adding that device's public key and from then on you can log in on that device automatically. The site would maintain a list of public keys associated with your account, just like github does for repositories.

Of course, if you don't trust those 20 work computers, you wouldn't want it set up so that anyone using them can log in to all your accounts. One thing the browser could do is password-protect your private key, so you have to enter the master password when you start the browser, and as long as you remember to exit out of the browser, the next person to use it won't be able to use your logins.
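The per-device key registry described above can be sketched like so. Caveat: Python's standard library has no asymmetric signing primitives, so HMAC over a shared secret stands in for a real signature here; in the actual scheme, the server would store only a public key (like GitHub does for SSH keys) and verify, say, an Ed25519 signature over the challenge, so the private half never leaves the device. All names and values are invented.

```python
# Sketch of password-bootstrapped device enrollment + challenge-response
# login. HMAC is a stand-in for a real asymmetric signature.
import hashlib
import hmac
import secrets

accounts = {"alice": {"password_hash": hashlib.sha256(b"hunter2").hexdigest(),
                      "device_keys": {}}}   # device_id -> key material

def enroll_device(user, password, device_id):
    # After a password (plus 2FA, etc.) login, register this device's key.
    if hashlib.sha256(password.encode()).hexdigest() != accounts[user]["password_hash"]:
        raise PermissionError("bad password")
    key = secrets.token_bytes(32)
    accounts[user]["device_keys"][device_id] = key
    return key  # with real signatures, only the PUBLIC half reaches the server

def login_with_device(user, device_id, key):
    challenge = secrets.token_bytes(16)  # server-issued nonce, never reused
    response = hmac.new(key, challenge, hashlib.sha256).digest()      # device side
    expected = hmac.new(accounts[user]["device_keys"][device_id],
                        challenge, hashlib.sha256).digest()           # server side
    return hmac.compare_digest(response, expected)

k = enroll_device("alice", "hunter2", "laptop")
print(login_with_device("alice", "laptop", k))  # True
```

Losing one device then means revoking one entry in `device_keys`, not resetting the whole account.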

> Before Chrome, all popular web browsers had a user interface for installing client side HTTPS certificates for user authentication

Chrome has an interface for it.

Last I checked it could only install from the filesystem, and not directly generate a key and install a certificate through the web. Do you have an example site where this works with Chrome?
Has the goalpost been moved? Upthread you compared Chrome to "all popular web browsers [which] had a user interface for installing client side HTTPS certificates for user authentication". I just opened up Firefox, and found a very similar menu to Chrome's: the only option was to "import" a cert from the filesystem. I agree that we should expect more from our tools, but has any popular browser ever allowed the user to generate a new cert? If one were to do so, how would the generated cert be connected to PKI -- who would sign the cert and how would they do that?
Yes, browsers other than Chrome can generate keys, submit the public key to a site you're logged into and install the certificate you get in response (usually after a second factor verification). I am not aware of any site that still does this, so I can't show it to you. Skandiabanken in Norway used to do it before Chrome.

You won't be able to see this in Firefox in any way other than visiting such a site.

Now I'm curious. This seems like a procedure that would need to be precisely defined. Is there a standard protocol for this? Does it have an RFC or similar I could read? If nothing else, it would be nice to have a short bumper-sticker "Chrome destroyed protocol X!" complaint.
I did some digging, and I believe this was implemented with the <keygen> element and the generateCRMFRequest and importUserCertificate JavaScript functions.


Thanks for the information. I don't remember ever learning anything about <keygen>. It looks as though most popular browsers (not IE; shocking!) supported it in the past, but most have now removed that support. [0] Perhaps there were some security or usability issues with this functionality? (Off the top of my head, if user certs are a single factor how do we ensure that desktops with more than one user don't install them?) ISTM the PKI world is moving to more short-lived, or even ephemeral, certificates. A complicated user-driven certificate generation process in the browser doesn't really fit that trend.

[0] https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ke...

You can also use external enclaves/smart cards for key storage.

The specific feature you're now describing doesn't really exist in any browser, mostly because it kind of negates what people use PKI for.

This is exactly how TLS client certificates work - except that instead of the server storing the public keys of clients, the clients present a cryptographic proof generated by the server/CA that they represent some identity (ie., a certificate).

They normally store the User Principal Name from the cert, and then use the public/private key as part of the connection. Specifically, the connection is negotiated after the client sends the public client certificate, and uses it as part of the key exchange.

It doesn't necessarily need to store the public key, but it does need to store which certificate goes with which account. And the certificate is validated by checking that it's been issued by a CA the server trusts.

The server doesn't need to store the certificate, or even a mapping from certificate to identity. Just retrieving information encoded in the DN or SANs of a certificate presented by a client is enough to tie the connection originator/client to an identity. I mean, it's a design decision, whether you want to have a layer of indirection there - but keeping it without one allows TLS client certificates to be fully stateless, and be used across multiple backends that do not share any session/mapping store between them.
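The stateless extraction described above can be sketched against the dict shape that Python's `ssl.SSLSocket.getpeercert()` returns. The certificate contents here are faked for illustration; the point is that once the TLS layer has validated the chain against a trusted CA, identity is just a field lookup, with no server-side user table required.

```python
# Derive an identity from a (already chain-validated) client cert,
# using the getpeercert()-style dict shape from Python's ssl module.
def identity_from_peercert(cert):
    # Prefer a subjectAltName entry, fall back to the subject CN.
    for typ, value in cert.get("subjectAltName", ()):
        if typ in ("email", "DNS"):
            return value
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

# Fake cert dict, mimicking ssl.SSLSocket.getpeercert() output.
fake_cert = {
    "subject": ((("commonName", "alice.device1.example.org"),),),
    "subjectAltName": (("email", "alice@example.org"),),
}
print(identity_from_peercert(fake_cert))  # alice@example.org
```

Because nothing here consults a database, any number of backends can make the same decision independently, which is the statelessness the comment is getting at.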

In addition, if I'm being picky, TLS 1.3 changes how client certificates are used, and they are now not part of the initial handshake.

You're familiar I'm sure with your browser authenticating that a TLS certificate is within its valid date range and assigned to the hostname of the server? You're probably also aware that your OS and/or browser have a list of certificate authorities one of which must have signed the certificate (or via a chain of CAs from a trusted root, with each CA cert signed by one closer to the root). Client certs work the same way except it's the server verifying all of this for the client (browser, mail user agent, whatever).

At $work we have several systems in which the server only accepts requests, or only accepts certain kinds of requests, from clients with client certificates with specific restrictions. Depending on the application and its authN/authZ needs, any of the solutions I'm about to mention might be combined with some combination of a username/password, with a time-based token, a JWT, IP range restrictions, an API key, or whatever else in addition to the client cert requirement - or sometimes the cert is sufficient by itself. Some just trust anything that was issued by the right CA and is in its proper date range. Sometimes we also verify that the certificate matches an assigned hostname of the client. Sometimes we trust certs by the right CA to connect, but parse the hostname out of the cert and check whether that client's hostname or the subdomain it's in has authorization to do certain things. Semantic hostnames might look long and confusing at first, but they can be used very easily for things like that. Semantic host names and naming schemes could be its own article.

This isn't a general use case for the general public because of deployment headaches. Which CAs do we trust? Is it the same as those issuing server certs? Will services be set up especially to issue client certs? Who's supporting the users to get the certs installed, many of whom enter the wrong email when signing up for services? We can do this in a corporate network pretty easily. We have automation systems for servers. We have another, different automation system for administration of Laptop, desktop, and mobile client devices. We just put what cert we want everywhere we want it.

A big problem I see with client certs and the general public is multi-device users. If you're logging into Google from your home desktop, your work desktop, a couple of laptops, a tablet, and a phone that's one email address but half a dozen different certificates. Some applications, especially cross-platform ones, insist on their own certificate store even if the OS provides a shared one. So for mail, a couple of browsers, and two other native apps, congratulations that's maybe two dozen. One can export and import client certs, but there's no simple way to get less technical end users to do that. So do we make it easier to configure multiple client devices and all their applications with the same certificate and key? Are end users going to remember to update them all when one is lost in a breach or it expires? Or do we expect all the service providers to trust multiple certificates signed by multiple different CAs for each user, then have the user upload the public (but not the private!) part of each cert/key pair to all of those services to say they should be trusted? Or does every service require its own CA to sign your cert for its own service, so you need an Apple cert, a Google cert, an Amazon cert... ad nauseum?

Tools like Bitbucket or Gitlab let you upload your public SSH key in the web UI to provide auth for the git repos. You can also have (hopefully with separate keys) automated applications that interact with git auth against a repository or all the repos in a project. That's the sort of interface one might expect a web application to offer for TLS certificates. *

* A certificate is basically the public key portion of a public/private key pair that's been signed by some CA. Preferably that CA is a broadly trusted one, except in very particular circumstances.

Thanks, this explanation and discussion is very helpful. It does confirm how I suspected things to work.

I have trouble understanding the need for the "signing" part of client-side certificates. Currently if I create an account at a website with a username/password, there's no need to get my account signed by a trusted third party. So why not let me create the account with a username/publickey instead? Why does a third party CA need to be involved?

And actually (as I mentioned in another post just now), one thought is to have keypairs supplement passwords rather than replace them. Basically when I move to a new device, I can still log in to the website with a password, and then the site will give me the option to add the device's public key so I can seamlessly log in automatically next time.

Ideally they would run some kind of CA and users would generate keys local to the device with some kind of alternate authentication when setting up the device initially.

Generating keys is cheap and the fact that your key could be tracked across services is a problem you'd want to avoid upfront. This is already an issue with things like SSH where you can finger-print devices when they present their public keys.

When signing up the user could be given the option to enable alternate authentication via FIDO2, Password, or Passwordless (email). Otherwise authenticating another device works by approving a new device from an existing one.

> Ideally they would run some kind of CA and users would generate keys local to the device with some kind of alternate authentication when setting up the device initially.

This presents a bit of a chicken and egg problem for how to secure the initial signup. Most services now use "control of email address" which has its own issues.

> Generating keys is cheap and the fact that your key could be tracked across services is a problem you'd want to avoid upfront. This is already an issue with things like SSH where you can finger-print devices when they present their public keys.

There are concerns about tracking, but that can be done without a private key. There are pros and cons for both singular and multiple keys one of the pros for a single keypair being your keypair does identify you as you. That's bad for tracking but it's good for more trustworthy authentication. Ideally, for some uses you could as a person get a signed personal cert and it'd be as good as government ID.

> When signing up the user could be given the option to enable alternate authentication via FIDO2, Password, or Passwordless (email). Otherwise authenticating another device works by approving a new device from an existing one.

A good first step for reused or unique-per-service or even unique-per-service-and-device keypairs would be to allow a user authenticated by password or whatever to upload a public cert (possibly via web form) and enable access to the account (or portions of the account) going forward only to sessions initiated with that client cert and key.

S/MIME handles some issues of key propagation pretty simply, in that a sender, A, signing a message is sufficient for the recipient, B, to then send encrypted email back to A. The initial S/MIME installation of the user's own S/MIME certificate is still more involved. However, it might be as simple with web apps and other apps that use a web or web-like remote API to have an option to auto-trust certs going forward that are used to log in to the account by other means (like password and Yubikey).

Just spent a month during lockdown going through all the RFCs and specs for oauth 2.0, bearer token usage, openid connect, various extensions and a couple of software implementations...

... Great.

After glancing at how OAuth 3 is going to work, I think your newfound knowledge will be good for quite a while. OAuth 3 looks like a mess right now.
While we're there, could we please change the name to something that is actually pronounceable and reasonably understandable on a phone call? (especially with non-native English speakers)

"OAuth" is such a terrible name. It sounds like a silly problem, until you've been through a number of calls where you had to explain to someone that this is what can be used for integration. A fair percentage of such calls end with no understanding of what is being talked about.

How are you pronouncing it? I'm fairly certain the correct pronunciation is two distinct syllables (almost word-level separation): "Oh" "oth". It should never have one syllable and sound like "oath", or come out as "oh ah ooth". While I'm sure there are some languages where "oth" is an odd phoneme, it's pretty hard to confuse "Oh" "oth" with much of anything else in English.

Sure people might not know about it, but there are tons of tech things people don't know about. That's a separate issue.

It's more that any word where one has to slow down and insert extra time between syllables so that they are distinguishable and intelligible is a "poorly designed word". Otherwise, it starts blurring due to the lack of a consonant.
Such as your username, for example? Shouldn't you put a consonant between the "y" and the "o" to make it more intelligible?

English is full of words that shift between vowels without a consonant. OAuth might be ugly, but it's hardly bending the rules of the language.

My username is a dead meme that is generally used to ironically mock oneself as stupid. Also, y is only sometimes a vowel for a reason; it's not the same open-mouth sort of sound as others. I think there's a big difference than that and marketing a serious security standard.
Fair enough. :) My real point, though, is that vowel transitions are inescapable in English, and so I don't think we ought to get overly worked up about them. And in fairness, this is just one more marketing language abuse on top of a giant heap of marketing language abuses -- OAuth is standing on the shoulders of giants here. How should a non-native speaker pronounce "Flickr" or "iOS" without any guidance?

(And we geeks have our own sins to atone for -- how to pronounce "/etc/", "/usr/bin", or "TTY" for that matter?)

On that last one, I'll share a quick story. I once worked with a genuine Unix greybeard on a remote project. He was tasked with debugging my terrible phone-home code, which was stored on a computer that I had couriered to him. The first time we ever spoke on the phone, he kept talking about "titties"... "this titty" and "the other titty". I'm sure that it was natural to him, but it took me a long moment before I realized he was talking about TTYs!

I'm slightly confused, is it going to be called xyz? Or OAuth XYZ, or Oauth 3?

In either case, I am excited about it, I do hope it will be easier to use as well.

As Justin Richer writes [1]:

And to be clear, I don’t actually care if the new work is called OAuth 3.0 or TxAuth or some other name, but I do think that it’s a fitting change set for a major revision of a powerful and successful protocol. We need something new, while we patch what we have, and now’s the time to build it. Come join the conversation in TxAuth, play with XYZ, and let’s start building the future.


[1] https://medium.com/@justinsecurity/the-case-for-oauth-3-0-5c...

Same here.

If anyone from the workgroup is reading this: please clarify in the first paragraph that XYZ is like a 'working title' for OAuth 3.

XYZ was the title of some early drafts. The current working title is GNAP. It might stay with that name, it might be decided that it'll be called "OAuth 3", or it might be abandoned.
His "The Case for OAuth 3.0" article [1] is behind the Medium paywall. That's utter bullshit when arguing the case for a core internet protocol. Medium has ruined text like Pinterest has ruined images.

[1] https://medium.com/@justinsecurity/the-case-for-oauth-3-0-5c...

Oh god another decade of this bullshit
My thoughts exactly. It comes across as an organization creating technical debt to justify its own existence.
That, and some widely used services are going to be on OAuth 1.0, 1.0a, 2.0, some non-standard version, and 3.0 - and these over-engineered-for-a-seemingly-good-reason-but-never-good-enough authentication methods won't have a relevant library that makes them easy to integrate with.
I really want to cheer with hope, but I'm screaming inside. My crappy app auth code needs to stay buried...forever. It's almost October 31, so definitely filing this under scary.
OK, that's ridiculous: "The Case for OAuth 3.0", the linked article - which I'd like to read to learn what problem OAuth 3.0 would solve - is behind the Medium paywall.

Although this fact alone might even tell me enough of OAuth 3.0.

This document just graduated to the full WG[0][1]. So this isn't a full fledged, ready to implement, draft.

I've been doing some research on this for an upcoming presentation, and it seems this was a union of the design ideas of two draft documents, TxAuth and OAuth.xyz, which means there are a few issues that need to be resolved. I'm sure they'd welcome respectful feedback.

From the WG's charter[2], they are looking for feedback and comments and expect last call for the core protocol in July 2021.

It's still very much a work in progress. I counted the TBDs and "Editor's notes" and found an average of one of these "TODO" markers per page of the draft.

I'm excited about the more modern developer ergonomics (using JSON is a step up from using form params), the ability for an RC to request user info at the same time (folding in some of OIDC), and fact they've explicitly built interaction extensions into the model. OAuth2 often assumes a browser with redirect capabilities and there are some inelegant solutions that arise from that[3]. Still a lot of things to iron out, for sure, though.

That said, I think OAuth2 will still be common 3 years from now, and if OAuth2 satisfies your needs, you aren't forced to move on to this new, explicitly not backward compatible[4], auth protocol.

[0]: https://www.ietf.org/archive/id/draft-ietf-gnap-core-protoco...

[1]: https://mailarchive.ietf.org/arch/msg/txauth/UkvrBXkMk9YMl7m...

[2]: https://datatracker.ietf.org/wg/gnap/about/

[3]: https://fusionauth.io/blog/2020/08/19/securing-react-native-... shows that you have to have a redirect with a custom scheme for a mobile app. Seems weird to me.

[4]: "Although the artifacts for this work are not intended or expected to be backwards-compatible with OAuth 2.0 or OpenID Connect, the group will attempt to simplify migrating from OAuth 2.0 and OpenID Connect to the new protocol where possible." - https://datatracker.ietf.org/wg/gnap/about/

> using JSON is a step up from using form params

Why? It seems to me that I'm either writing Json.Serialize(loginParams) or HttpForms.Serialize(loginParams). Both are human readable and weakly typed. From a developer perspective, these seem almost exactly equivalent, just different.

Ah, in my opinion, it's better to be able to build an object then serialize it, rather than have to jam object semantics into form parameters (and then serialize them).
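A small demonstration of the difference, with invented parameter values. Flat key/value pairs round-trip fine through a form body, but nested structures like the `client`/`key` object in a grant request have to be flattened by some ad-hoc convention before `urlencode` will carry them.

```python
# JSON vs. form encoding for a nested request object.
import json
from urllib.parse import urlencode

login_params = {"client": {"display": {"name": "My Client"},
                           "key": {"proof": "jwsd"}}}

# JSON: the nesting survives as-is.
print(json.dumps(login_params))

# Form params have no standard answer for nested dicts; you must invent
# a flattening convention (dotted keys, bracket syntax, ...), and every
# implementation tends to invent a different one.
flat = {"client.display.name": "My Client", "client.key.proof": "jwsd"}
print(urlencode(flat))
```

With JSON, sender and receiver agree on the structure for free; with forms, the flattening convention becomes one more thing each provider does differently.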

Here's a grant request from the draft:

       "resources": [
               "type": "photo-api",
               "actions": [
               "locations": [
               "datatypes": [
       "client": {
         "display": {
           "name": "My Client Display Name",
           "uri": "https://example.net/client"
         "key": {
           "proof": "jwsd",
           "jwk": {
                       "kty": "RSA",
                       "e": "AQAB",
                       "kid": "xyz-1",
                       "alg": "RS256",
                       "n": "kOB5rR4Jv0GMeL...."
       "interact": {
           "redirect": true,
           "callback": {
               "method": "redirect",
               "uri": "https://client.example.net/return/123455",
               "nonce": "LKLTI25DK82FX4T4QFZC"
       "capabilities": ["ext1", "ext2"],
       "subject": {
           "sub_ids": ["iss-sub", "email"],
           "assertions": ["id_token"]
(Not all of the object keys are required, FYI). The ability to have resources be a rich object (as opposed to a string) and to support multiple resources in one grant request seems to me to be a good thing(tm).
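One caveat, tying back to the "polymorphic JSON" complaint elsewhere in the thread: if I'm reading the draft right, entries in `resources` may be either a bare string reference or a full object, so every consumer has to branch on type. A minimal sketch of what that imposes on implementers (the `dolphin-metadata` string reference is taken from the draft's own examples):

```python
# Consumers of a GNAP-style "resources" array must branch on the type
# of each entry, since both strings and objects are legal.
def describe_resource(entry):
    if isinstance(entry, str):
        return f"by-reference: {entry}"
    if isinstance(entry, dict):
        return f"rich request: type={entry.get('type')} actions={entry.get('actions')}"
    raise TypeError(f"unexpected resource entry: {entry!r}")

resources = [
    {"type": "photo-api", "actions": ["read", "write"]},
    "dolphin-metadata",   # string reference, also legal per the draft
]
for r in resources:
    print(describe_resource(r))
```

Whether that flexibility is worth the extra branching in every client and server is one of the judgment calls the WG will presumably have to defend.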
I wish they renamed the thing to OAuthorize. The current name is confusing. It's not for AUTHentication, it's for AUTHorization.
Isn't it the other way around? Authentication is about confirming one's identity, and that's what OAuth is used for. Authorization is about giving proper permissions, which happens after you get authenticated and has nothing to do with OAuth. Am I missing something?
It's really stupid because it is an authorization protocol that people use for authentication because if you have access to certain resources, it implies you're a particular user.
OAuth 2.0 is meant for Authorization. Multiple parties used if for Authentication, which is why a standardized Authentication layer was built on top of OAuth which is called OpenID Connect.
Can it be used in desktop apps or mobile apps?

I remember something that previous versions were for webapps only. Never used it though.

Edit: What's wrong with you people? You're downvoting questions now? I remember that OAuth forced the user to include client secret in app's binary. When extracted, everyone could impersonate the app. If you don't understand the problem then don't downvote.

These are false/misleading statements, not questions:

> I remember something that previous versions were for webapps only.

> I remember that OAuth forced the user to include client secret in app's binary.

This is not actually a problem with RFC 7636.

You always could use OAuth in apps just fine.

OAuth 2 was a design nightmare.

But by now it kinda consolidated into a usable best practices how to do it. But gathering them from the core RFC and all the extensions is a pain.

So what would be nice would be an updated RFC including all the best practices and deprecating everything that turned out badly (or had security vulnerabilities).

OAuth 2.1 somewhat goes into that direction.

But IMHO OAuth 3 looks like starting the whole OAuth 2 madness from scratch, not learning from all the problems OAuth 2 had when it was new...

> I remember that OAuth forced the user to include client secret in app's binary. When extracted, everyone could impersonate the app. If you don't understand the problem then don't downvote

This isn't correct. Native apps aren't capable of holding a secret. There are two patterns here. Some providers omit the secret for native apps. Other providers define the concept of a "public secret," a value that is "not a secret," but is put in the client_secret field - rotating this value disables old clients. Either model is fine and secure.

The problems you refer to were mostly just developer error. Developers registered their native apps as having confidential secrets, even though this was not the case, and then shipped those secrets in the app source code.
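For reference, the RFC 7636 (PKCE) mechanism mentioned upthread is how native apps avoid needing a static secret at all: the app proves at token-exchange time that it is the same party that started the flow, using a one-time verifier. A sketch of the verifier/challenge generation:

```python
# PKCE (RFC 7636) code_verifier / code_challenge generation, "S256" method.
import base64
import hashlib
import secrets

def make_pkce_pair():
    # code_verifier: 43-128 chars of unreserved characters (RFC 7636, sec. 4.1).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))) (sec. 4.2).
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The challenge goes in the authorization request; the verifier is sent
# only with the token request, so an intercepted code is useless alone.
print(len(verifier), challenge != verifier)
```

Because the verifier is generated fresh per flow and never shipped in the binary, extracting anything from the app gains an attacker nothing.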

I've implemented it in mobile to authorize our app to communicate with our cloud, it does still use a webview to enter credentials, but there's a specific framework for doing authorisation with a webview on iOS