If you read any of these and could never see yourself taking such a position, just skip down to the next section.
"Using a code-based authenticator (SMS, a TOTP-based phone app like Google Authenticator, etc) is just as secure as U2F/SoftU2F."
- The user is phished and submits a valid code to the attacker's phishing page.
- Something happens that lets an attacker receive SMS messages on behalf of the user
  - SIM swapping
  - SMS re-routing through SS7
  - An attacker abuses a carrier's SMS forwarding functionality
- A compromise of...
  - ...the company that runs the service's SMS API (ex: Twilio)
  - ...any telecom infra that can access the text of SMS messages
  - ...the cell phone that the SMS messages are sent to
  - ...the service itself (ex: can an attacker silently add their own phone number as a new second factor device?)
The list for TOTP is shorter, but all code-based second factors (where the user is asked to type a code into the same webpage as the password) can fail in scenarios where:
- The user is phished and submits a valid code to the attacker's phishing page
- The secret TOTP seed is compromised... (see the sketch just after this list)
  - ...from the backend server that's doing the validation
  - ...from the end user's cell phone's local storage
- A compromise of the service itself (ex: can an attacker silently add their own TOTP app as a new second factor device?)
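To make concrete why a leaked seed is game over: a TOTP code is derived entirely from the shared seed and the current time, so anyone holding the seed can mint valid codes indefinitely, from anywhere. Here is a minimal sketch of the RFC 6238 computation in Swift (using CryptoKit; the seed bytes are made up for illustration):

```swift
import Foundation
import CryptoKit

// Hypothetical 20-byte shared seed. Real deployments usually exchange it
// base32-encoded in an otpauth:// URI, but the raw bytes are all that matter.
let seed = Data((0..<20).map { UInt8($0) })

// RFC 6238 TOTP: HMAC-SHA1 over a 30-second counter, dynamically truncated to 6 digits.
func totp(seed: Data, at time: Date = Date(), step: TimeInterval = 30, digits: Int = 6) -> String {
    // Counter = number of completed time steps since the Unix epoch, as 8 big-endian bytes.
    let counter = UInt64(time.timeIntervalSince1970 / step).bigEndian
    let counterBytes = withUnsafeBytes(of: counter) { Data($0) }

    // HOTP core (RFC 4226): HMAC keyed with the shared seed.
    let mac = Data(HMAC<Insecure.SHA1>.authenticationCode(for: counterBytes, using: SymmetricKey(data: seed)))

    // Dynamic truncation: 31 bits starting at the offset given by the low nibble of the last byte.
    let offset = Int(mac.last! & 0x0f)
    let binary = (UInt32(mac[offset] & 0x7f) << 24)
        | (UInt32(mac[offset + 1]) << 16)
        | (UInt32(mac[offset + 2]) << 8)
        | UInt32(mac[offset + 3])

    let code = binary % UInt32(pow(10.0, Double(digits)))
    return String(format: "%0\(digits)d", Int(code))
}

// Anyone who copies `seed` (from the validating server or from the phone's local
// storage) prints exactly the same code the legitimate user sees.
print(totp(seed: seed))
```

Nothing in that computation is bound to a device, a browser, or the site the user thinks they are logging into, which is precisely the binding U2F adds.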
Note that there are frameworks that can handle the "hard work" to make #1 from both lists possible at scale. This has already been observed to happen in the real world.
We will dive into exactly what scenarios can lead to U2F being bypassed later on, but the list will be strictly a subset of both of the lists above.
"A push-based two-factor authentication application (ex: Duo Push, Apple's Trusted Devices, etc) is just as secure as U2F/SoftU2F."
"Push-based 2FA" is another type of multi-factor authentication and is summarized well here:
This style of 2FA improves on authenticator apps in two ways: Acknowledging the prompt is slightly more convenient than typing in a code, and it is somewhat more resistant to phishing. With SMS and authenticator apps, a phishing site can simply ask for your code in addition to your password, and pass that code along to the legitimate site when logging in as you.
Because push-based 2FA generally displays an estimated location based on the IP address from which a login was originated, and most phishing attacks don’t happen to be operated from the same IP address ranges as their victims, you may be able to spot a phishing attack in progress by noticing that the estimated location differs from your actual location.
However, this requires that you pay close attention to a subtle security indicator. And since location is only estimated, it’s tempting to ignore any anomalies. So the additional phishing protection provided by push-based 2FA is limited.
Note that the most crucial issue with code-based 2FA is not addressed or resolved by moving to "push-based 2FA". Users are still required to manually parse and evaluate the authentication information they are shown, and it is the user's own judgment that approves or rejects the 2FA attempt.
Take another read through the article mentioned earlier where attackers were bypassing code-based 2FA at scale. Would it be that hard to run the framework on AWS/GCP/etc and make the man-in-the-middle server mimic the real user's User-Agent string and (approximate) geographic location?
If most users AND datacenters tend to be located near large cities, how exact do you really need to be to get a user to write off the subtle difference as just another bug or software approximation?
"Storing the keys of your multi-factor authentication device on the machine where they're used makes the system less secure. SoftU2F is strictly worse than a U2F authenticator that relies on an external device based on separate hardware (ex: Yubikey)."
The Security Considerations section of the GitHub project puts it well:
There is an argument to be made that it is more secure to store keys in hardware since malware running on your computer can access the contents of your Keychain but cannot export the contents of a hardware authenticator. On the other hand, malware can also access your browser's cookies and has full access to all authenticated website sessions, regardless of where U2F keys are stored.
If the goal of 2FA is to prevent an attacker from being able to successfully masquerade as a user to the server, the game is already over if malware or a browser exploit is able to read data off your filesystem.
That said...
In the case of malware installed on your computer, one meaningful difference between hardware and software key storage for U2F is the duration of the compromise. With hardware key storage, you are only compromised while the malware is running on your computer. With software key storage, you could continue to be compromised, even after the malware has been removed.
I would argue this distinction is well beyond most organizations' threat models, but the counter-point to this is that you can also have SoftU2F store the U2F keys in the Secure Enclave Processor (SEP) present in most modern MacBooks. This makes the storage generally equivalent to a "separate hardware device" as it will no longer be possible for an attacker to read and export the raw key material.
When storing keys in the SEP, SoftU2F creates the key data and stores it in an object created with a call to SecAccessControlCreateWithFlags. The protection class that gets passed in is kSecAttrAccessibleWhenUnlocked, and the flags are BOTH .privateKeyUsage and .touchIDAny, which correspond to:
@constant kSecAttrAccessibleWhenUnlocked Item data can only be accessed while the device is unlocked. This is recommended for items that only need be accessible while the application is in the foreground. Items with this attribute will migrate to a new device when using encrypted backups.
from SecItem.h, and:
kSecAccessControlPrivateKeyUsage - Create access control for private key operations (i.e. sign operation)
kSecAccessControlTouchIDAny - Constraint: Touch ID (any finger). Touch ID must be available and at least one finger must be enrolled. Item is still accessible by Touch ID even if fingers are added or removed.
from SecAccessControl.h.
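To make that description concrete, here is roughly what creating a key with that protection class and those flags looks like against the Security framework. This is a sketch mirroring the attributes listed above, not SoftU2F's actual code; the application tag is made up, and .touchIDAny is the older spelling of what newer SDKs call .biometryAny.

```swift
import Foundation
import Security

// Access control matching the description above: usable only while the machine is
// unlocked, restricted to private-key operations, and gated on Touch ID.
var acError: Unmanaged<CFError>?
guard let access = SecAccessControlCreateWithFlags(
    kCFAllocatorDefault,
    kSecAttrAccessibleWhenUnlocked,
    [.privateKeyUsage, .touchIDAny],
    &acError
) else {
    fatalError("Could not build access control: \(String(describing: acError))")
}

// The SEP only holds 256-bit EC keys; kSecAttrTokenIDSecureEnclave is what keeps
// the private key from ever being exportable to the host OS.
let attributes: [String: Any] = [
    kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
    kSecAttrKeySizeInBits as String: 256,
    kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
    kSecPrivateKeyAttrs as String: [
        kSecAttrIsPermanent as String: true,
        kSecAttrApplicationTag as String: Data("example.u2f.key".utf8),  // hypothetical tag
        kSecAttrAccessControl as String: access,
    ],
]

var keyError: Unmanaged<CFError>?
guard let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &keyError) else {
    fatalError("Key generation failed: \(String(describing: keyError))")
}

// Signing happens inside the SEP; the raw private key material never reaches this process.
print(privateKey)
```

The important property is kSecAttrTokenIDSecureEnclave: signatures are produced inside the SEP and the private key bytes are never available to export, which is what puts SEP-backed SoftU2F keys on par with an external hardware token as far as key extraction is concerned.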
Are you saying that there are no conditions where U2F/SoftU2F can be broken?
No. While U2F has been observed to work extremely well in very high-risk environments, here are some specific scenarios where U2F or SoftU2F could be bypassed:
1. Users can be phished and the U2F "second factor" signature can be man-in-the-middled with a valid (forged) TLS certificate for the site in question.
While the TLS Channel ID extension is the proposed solution to the full TLS MITM scenario, it is optional and not widely supported (ex: Google's reference U2F implementation does not implement it and has a "TODO: Deal with ChannelID" comment here.) An example project with an open GitHub issue to add TLS Channel ID Binding (oxAuth) is here.
That said, if the TLS Channel ID extension were to be used properly alongside U2F, an attacker with MITM capabilities AND a valid TLS certificate for the website would not be able to perform a downgrade attack.
The reasoning laid out in Section 6.1 of this paper explains why:
U2F can complement and leverage TLS Channel ID to prevent all forms of TLS MITM.
According to the specifications, when a client supports TLS Channel ID and performs a handshake with a server that does not support TLS Channel ID, the U2F messages contain a U2F-signed specific entry: cid_pubkey with a string value unused. If TLS Channel ID is signaled by both parties, the string value contains instead the TLS Channel ID as seen by the TLS client. The legitimate server that supports TLS Channel ID is expected to check the cid_pubkey value.
If this value is incorrectly signed, if it is different from the TLS Channel ID that is expected for this client, or if the connection does not use TLS Channel ID and the cid_pubkey value is unused, a TLS MITM is detected.
In practice, though, it's critical to note that it's unlikely ANY sites are really doing this extended TLS Channel ID validation in a way that's not vulnerable to downgrade attacks.
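Put differently, the check the paper describes amounts to something like the sketch below. This is illustrative only (the names and the bookkeeping are made up), and it assumes the relying party remembers whether a given client has previously demonstrated Channel ID support, which is exactly the "extended validation" that services don't appear to do.

```swift
// Illustrative check of the cid_pubkey value signed into the U2F message against
// the Channel ID the server observed on its own TLS connection.
func channelIDLooksConsistent(signedCidPubkey: String,
                              channelIDSeenByServer: String?,
                              clientKnownToSupportChannelID: Bool) -> Bool {
    guard let serverChannelID = channelIDSeenByServer else {
        // This connection carries no Channel ID. That is only acceptable for a client
        // that has never demonstrated Channel ID support; for a known-capable client,
        // a signed value of "unused" looks exactly like the downgrade described below.
        return signedCidPubkey == "unused" && !clientKnownToSupportChannelID
    }
    // Both ends signalled Channel ID: the value the browser signed must match the
    // Channel ID the server sees, otherwise there is a MITM between them.
    return signedCidPubkey == serverChannelID
}
```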
Later on in the same paper:
The second observation from our experiment was that we were able to successfully authenticate to our Google test account using U2F and Chrome or Chromium, while going through [a TLS proxy which downgraded the TLS connection to remove the TLS Channel ID extension].
This happened in spite of the browser support of TLS Channel ID and in spite of the U2F-signed support signal. We contacted Google and the FIDO Alliance to warn them about the success of this downgrade attack. They answered that Chrome support of TLS Channel ID is currently experimental and still buggy at times. Thus, even if Google websites could detect such downgrade attacks, they decided neither to enforce the use of TLS Channel ID nor to use it to protect against such TLS MITM.
They also mentioned that TLS Channel ID would prevent legitimate use of corporate TLS proxies and they were not ready to keep U2F users from accessing Google services when such proxies were in use.
In order for a service's U2F implementation to truly block a MITM attacker who has access to a valid TLS certificate, it must also never be possible to authenticate to this service over an internet connection that goes through a corporate TLS MITM proxy.
Note that this is still no worse than SMS/TOTP.
2. Some types of browser bugs could be exploited and combined with a phishing attack to forge a U2F signature. For example, if there's a bug in the way the browser determines the origin of the web page, the browser could (in theory) be tricked into sending the wrong origin to the U2F device, and a fake login page could receive a valid signature.
Note that (a) this is no worse than SMS/TOTP and (b) this is an unavoidable consequence of the fact that the security of U2F relies on the security of the web browser itself.
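For contrast with the phishing scenarios in the earlier lists, it's worth spelling out why an ordinary phishing page (one without a browser origin bug to lean on) gets nothing useful out of U2F: the browser, not the page, writes the origin it believes it is talking to into the signed client data, and the relying party checks it. A rough sketch of that server-side check follows (the field names follow the U2F client data format; the function and its parameters are illustrative):

```swift
import Foundation

// Fields the browser places into the U2F client data; the page cannot forge `origin`
// without a browser bug like the one described in scenario 2.
struct ClientData: Decodable {
    let typ: String        // e.g. "navigator.id.getAssertion" for a sign request
    let challenge: String  // the challenge the relying party issued for this login
    let origin: String     // the origin the browser believes it is talking to
}

// Illustrative relying-party check. The full verification also checks the
// authenticator's signature over SHA256(appId) || flags || counter || SHA256(clientDataJSON)
// against the public key recorded at registration.
func clientDataLooksValid(_ clientDataJSON: Data,
                          expectedOrigin: String,
                          expectedChallenge: String) -> Bool {
    guard let clientData = try? JSONDecoder().decode(ClientData.self, from: clientDataJSON) else {
        return false
    }
    return clientData.origin == expectedOrigin
        && clientData.challenge == expectedChallenge
}
```

An assertion captured on a phishing origin carries that origin inside the signed client data and is rejected by this check, whereas a phished TOTP code sails through.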
3. If there's an implementation bug in the U2F device, it's possible a U2F signature could be forged.
Note that this is still no worse than a buggy SMS/TOTP implementation where, for example, TOTP seed values could be improperly generated.
4. If the service's backend or datastore is compromised, U2F could be bypassed by, for example, silently adding a new U2F token to the user's account.
No worse than SMS/TOTP as an attacker could do the same there too.
5. A SoftU2F exploit could be leveraged to gain code execution and compromise the user's machine.
This is absolutely worth considering!
Some factors that make SoftU2F more trustworthy to me are that Trail of Bits performed an audit on the original driver code (here) and that SoftU2F is backed by GitHub's bug bounty (see the README).