
Does anyone here know what privacy/tracking issues are with this standard?


FIDO2 privacy is actually pretty good and well thought out. There's a theoretical risk of a website sending authentication challenges for two different accounts and having both assertions signed by the same credential, basically correlating those accounts together, but this is unlikely to weaken collective privacy at scale.
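To make the per-account unlinkability concrete, here is a toy sketch in pure Python. HMAC stands in for the real per-credential ECDSA key pair (an assumption made for brevity), and all names are illustrative, not from the spec:

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Toy model of a FIDO2 authenticator. Each registration mints a
    fresh credential with its own secret, unrelated to any other."""
    def __init__(self):
        self._creds = {}  # credential_id -> per-credential secret

    def make_credential(self, rp_id, user_handle):
        cred_id = secrets.token_hex(16)  # fresh randomness per registration
        self._creds[cred_id] = secrets.token_bytes(32)
        return cred_id

    def get_assertion(self, cred_id, challenge):
        # HMAC stands in for an ECDSA signature over the challenge.
        return hmac.new(self._creds[cred_id], challenge, hashlib.sha256).hexdigest()

auth = Authenticator()
cred_a = auth.make_credential("example.com", "alice")
cred_b = auth.make_credential("example.com", "alice-alt")

# Separate accounts get unrelated credentials, so the RP cannot link
# them from the credential material alone...
assert cred_a != cred_b

# ...but if a malicious RP tricked the *same* credential into answering
# challenges issued for both accounts, the matching key would correlate
# the accounts -- the theoretical risk described above.
challenge = b"server-nonce"
assert auth.get_assertion(cred_a, challenge) == auth.get_assertion(cred_a, challenge)
```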


Kind of. Yubikeys intentionally have a very small number of devices signed with one CA key and then they produce a new CA key, so those devices do have a basically unique identifier.


Can you share any more information about that? Is this identifier shared as part of the FIDO2/U2F spec?


I think they're referring to attestation (https://fidoalliance.org/fido-technotes-the-truth-about-atte....), which requires that attestation certificates be shared with a minimum of 100,000 other devices in order to ensure they're not unique IDs.

Maybe the parent misread the spec as saying a _maximum_ of 100,000? Or something?
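For illustration, batch attestation means every device in a production batch presents the same attestation certificate, so an RP can only narrow a device down to its batch. A toy sketch (the certificate is modeled as an opaque token, which is an assumption for brevity):

```python
import secrets

MIN_BATCH_SIZE = 100_000  # FIDO's required minimum batch size

# One attestation certificate per batch (modeled as an opaque token).
batch_cert = secrets.token_hex(16)
devices = [{"serial": n, "attestation_cert": batch_cert} for n in range(5)]

# Every device presents the identical certificate, so an RP learns only
# "one of >= 100,000 devices of this model", never which individual device.
assert len({d["attestation_cert"] for d in devices}) == 1
```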


The point being, the FIDO Alliance reserves the right to blacklist any device that an attacker manages to extract the secret keys from, which has the consequence that 99,999 other people have their devices bricked.

Also, the Alliance could decide to blacklist a manufacturer just because they haven't implemented some new policy (like requiring a DNA scan of the user) so you better make sure that you buy a device from one of the "too big to fail" providers.


> The point being, the FIDO Alliance reserves the right to blacklist any device that an attacker manages to extract the secret keys from, which has the consequence that 99,999 other people have their devices bricked.

1. By what mechanism can they blacklist a device? A given relying party can choose to use or not use attestation and, if they choose to use it, which certificates to trust. But that's between you and the RP. Authentication doesn't "talk to" the FIDO Alliance--which is just a standards body and does not (AFAIK) even publish anything like a CRL for "bad" attestation keys, so I don't understand what you are talking about here.

2. The intention of the attestation, as I understand it, is to enable RPs to use attestation to limit authenticators to e.g. those that pass FIPS certification (or similar enterprisey requirements), not to ban a whole batch because one key is known to be compromised. That's crazy; can you point out where anyone other than you has ever proposed this?

3. DNA scan? What are you talking about?

4. This assertion you are making, while bizarre and wrong, is very different from the assertion the grandparent made ("Yubikeys intentionally have a very small number of devices signed with one CA key...so those devices do have a basically unique identifier"), which, while also wrong, is, I think, a genuine mistake and not a bad-faith argument.


> A given relying party can choose to use or not use attestation and, if they choose to use it, which certificates to trust.

True, and a website could decide to issue its own certificates rather than get one from a CA trusted by browsers, but in practice (and potentially one day by law) most sites will defer to the FIDO Alliance to determine which devices are "sufficiently secure".

> the FIDO Alliance--which is just a standards body and does not (AFAIK) even publish anything like a CRL for "bad" attestation keys

"The FIDO Alliance Metadata Service (MDS) is a centralized repository of the Metadata Statement that is used by the relying parties to validate authenticator attestation and prove the genuineness of the device model."[0]
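To make the CRL-like nature of MDS concrete: an RP that opts in can filter registrations against the metadata status reports, but the check is purely local to that RP. A simplified sketch (the real MDS is distributed as a signed JWT BLOB; the status names follow the MDS schema, but the entries here are invented):

```python
# Hypothetical, pre-parsed MDS entries (the real service distributes a
# signed JWT BLOB that RPs download and verify themselves).
mds_entries = [
    {"aaguid": "0001-demo", "statusReports": [{"status": "FIDO_CERTIFIED"}]},
    {"aaguid": "0002-demo", "statusReports": [{"status": "ATTESTATION_KEY_COMPROMISE"}]},
]

UNDESIRABLE = {"ATTESTATION_KEY_COMPROMISE", "USER_VERIFICATION_BYPASS", "REVOKED"}

def rp_trusts(aaguid: str) -> bool:
    """Local policy: each RP decides whether to consult MDS at all."""
    for entry in mds_entries:
        if entry["aaguid"] == aaguid:
            return not any(r["status"] in UNDESIRABLE for r in entry["statusReports"])
    return False  # unknown model: also a per-RP policy choice

assert rp_trusts("0001-demo")
assert not rp_trusts("0002-demo")
```

Nothing forces an RP to run this check; the question is how many will defer to it once it is the path of least resistance.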

> That's crazy; can you point out where anyone other than you has ever proposed this?

"If the private ECDAA attestation key sk of an authenticator has been leaked, it can be revoked by adding its value to a RogueList."[1]

> DNA scan? What are you talking about?

I picked a deliberately extreme example to make the point that there are requirements for these devices that users might not be happy with (but might not have any choice about, once the capability becomes ubiquitous). That specific example may never come to pass, but I don't think we should assume that allowing RPs to put arbitrary conditions on the hardware we use is a power that won't be abused.

For added context: "FIDO will soon be launching a biometric certification program that ensures biometrics correctly verify users. Both certifications show up as metadata about the authenticator, providing more information to enable services to establish stronger trust in the authenticators.)"[2]

> This assertion you are making, while bizarre and wrong ... a bad-faith argument.

Maybe you should have assumed good faith.

[0] https://fidoalliance.org/metadata/

[1] https://fidoalliance.org/specs/fido-uaf-v1.2-rd-20171128/fid...

[2] https://fidoalliance.org/fido-technotes-the-truth-about-atte...


> True, and a website could decide to issue its own certificates rather than get one from a CA trusted by browsers…

That’s quite different. In your example, if a website does so unilaterally, client user agents break. In the FIDO case, nobody else knows or cares which authenticators an RP trusts.

More broadly, I don’t get this conspiracy theory. You’re worried…the FIDO alliance will abuse their very limited power to…what end?

> "If the private ECDAA attestation key sk of an authenticator has been leaked, it can be revoked by adding its value to a RogueList."[1]

The attestation key, which is shared among all devices? That’s rather different from what you said.

> That specific example may never come to pass, but I don't think we should assume that allowing RPs to put arbitrary conditions on the hardware we use is a power that won't be abused.

RPs already have such power. Today they use it to do things like require password complexity policies. Again, RPs aren’t the FIDO alliance; they’re the actual website you’re logging into.

Your repeated argument here is that websites should not be allowed to impose restrictions on how their users authenticate, which is hard to fathom.

In a previous version of this argument, I remember you essentially arguing that banks and enterprises should not be able to restrict what types of authenticators their employees and customers use.

I get it. You hate attestation. But “my employees must use a fips-certified key” (or “my customers must use a hardware key”) is reasonable and ultimately non-negotiable if you want people to use your protocol.


> But “my employees must use a fips-certified key” (or “my customers must use a hardware key”) is reasonable and ultimately non-negotiable if you want people to use your protocol.

I think this is the crux of where our disagreement lies. I grudgingly accept that FIDO makes it easier for companies to check that their employees are storing their keys on company-approved devices, but I don't think that arbitrary websites should be given the power to make demands about the hardware that visitors must use to create accounts. That seems like a worse position for user freedom than we have today with passwords.

You might say that websites already have this power, in some convoluted way. They could say "Enter your credit card details and postal address here and we'll send you a custom device you can use to log in to our website", but in practice no company does that. (Banks and governments are maybe special cases, and less concerning given that: their authenticators are managed out of band; they are highly regulated; they usually have actual branches that you can go to in person to sort things out; and people generally choose to interact with banks/governments that are based in their own country).

Attestation changes the market dynamics here. Suddenly it becomes acceptable for sites to bully users into buying certain types of devices, and for governments to start demanding that these devices be used as online IDs (at least for age verification, to start with). Even if companies don't abuse this power to keep people in their ecosystem (e.g. Apple sites giving you special features if you log in with an Apple device), the first casualties are going to be open source hardware and software implementations, which will be deemed insecure, and further normalise the idea that users can't go online without running proprietary code.


Yet Google, one of the key participants in the FIDO alliance, has published an open source firmware!

I agree the potential exists, in a hypothetical sense. But the dynamics are very different than you describe (with your analogy to the CA ecosystem, which, ironically, gives big platform owners far more power—yet has no evidence of such abuse!).

Right now, there is just not that much use of WebAuthn and FIDO. You’re the guy saying, “if we find a way to lower global temperatures, we should fear an ice age.” It’s premature to say the least.


I'm glad Google has published an open source firmware, and I hope that people will be able to independently verify that the hardware they use is genuinely running that firmware. Then I hope that hardware with such guarantees is not discriminated against by RPs.

The important difference with the CA ecosystem is that (in the worst case) the big platform owners can put pressure on small websites to obtain a certificate from one of a large number of competing issuers. Significantly, these issuers are not the same as the big OS providers themselves, and there are issuers who issue certificates for free. That is completely the reverse of 3 big platforms forcing end users to buy hardware, and those platforms being hardware vendors themselves.

> You’re the guy saying, “if we find a way to lower global temperatures, we should fear an ice age.”

No, I'm the frog saying "Hey, isn't this water getting a bit warm? Don't you think we should jump out before it's too late?"


But the big three can't do that, with FIDO. All they can do is influence the FIDO Alliance to add other SK manufacturers to the pseudo-CRL, which:

- is transparent

- is mediated by the FIDO Alliance; the platform makers cannot do it unilaterally, as they can with CAs in browsers

- is mediated by the RPs; even if the FIDO Alliance did do this for some reason, RPs could just ignore it with no ill effects, unlike with CA trust in browsers

- wouldn't have any effect today for the vast majority of RPs, since the vast majority do not even use attestation today

- honestly, isn't something they have any incentive to do; hardware security keys are not a meaningful source of revenue for someone like Apple, Microsoft, or Google

I'm guessing you've never worked in a big tech company before if you think they have an incentive to do that. :)


What is collective privacy?

Also, why not allow uncertified devices to generate keys? Why do I have to implement secure boot or TPM?


I haven't found any so far. Each account gets a new public/private key pair, so accounts can't be traced back to each other. Usernames are optional and might even become a thing of the past, making username reuse less of an issue for linking accounts.

It all depends on the sync method provided. If synchronisation isn't end-to-end protected, you're handing Apple/Google/Microsoft the keys to the kingdom, which is pretty bad.
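To illustrate what "end-to-end protected" would mean here: the synced passkey secret is wrapped under key material only the user holds, so the sync provider stores only ciphertext. A toy sketch (PBKDF2 over a PIN stands in for the hardware-backed vault keys real platforms use, and the XOR "cipher" is for illustration only, not real encryption):

```python
import hashlib
import secrets

def wrap(passkey_secret: bytes, pin: str) -> tuple[bytes, bytes]:
    """Derive a key-encryption key from the user's PIN and wrap the secret."""
    salt = secrets.token_bytes(16)
    kek = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    # XOR as a toy cipher for illustration only -- NOT real encryption.
    wrapped = bytes(a ^ b for a, b in zip(passkey_secret, kek))
    return salt, wrapped

def unwrap(salt: bytes, wrapped: bytes, pin: str) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(wrapped, kek))

secret = secrets.token_bytes(32)
salt, wrapped = wrap(secret, "1234")

# The sync service stores only (salt, wrapped); without the user-held
# PIN it cannot recover the passkey.
assert unwrap(salt, wrapped, "1234") == secret
assert unwrap(salt, wrapped, "0000") != secret
```

If instead the provider holds (or can derive) the wrapping key server-side, the "end-to-end" property evaporates, which is the scenario being warned about.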


It fully resolves all existing privacy and tracking issues. You will have no privacy and will be tracked all the time. Problem solved.



