Apple shook up the login world last week with a new single sign-on (SSO) tool designed to gather and share as little data as possible. It was a deliberate shot at Facebook and Google, which currently run the two biggest SSO services. But while Google wasn't thrilled with how the privacy pitch was framed, the company's sign-in team is surprisingly upbeat about having a new button to compete with. Login buttons are relatively simple, but they're far more resistant to common attacks like phishing, which makes them much stronger than the average password, provided you trust the network offering them.
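The phishing resistance comes from the fact that these systems bind the login response to the site's origin, so a look-alike site can't replay it. Real security keys and SSO flows use asymmetric cryptography (FIDO2/WebAuthn, OAuth); the sketch below is only an illustration of the origin-binding idea, using a standard-library HMAC as a stand-in and made-up origins:

```python
import hashlib
import hmac
import os
import secrets

# Toy challenge-response sketch of why origin binding defeats phishing.
# Real security keys use per-site asymmetric key pairs; HMAC here just
# stands in for "only the holder of the key can produce this response".

device_secret = secrets.token_bytes(32)  # sealed inside the hardware key

def sign_challenge(secret: bytes, origin: str, challenge: bytes) -> bytes:
    # The origin is mixed into the response, so a phishing site at a
    # different origin receives a response the real site will reject.
    return hmac.new(secret, origin.encode() + challenge, hashlib.sha256).digest()

challenge = os.urandom(32)  # issued fresh by the server for each login
resp = sign_challenge(device_secret, "https://accounts.example.com", challenge)

# Server-side check (the server holds the registered secret in this toy model):
assert hmac.compare_digest(
    resp, sign_challenge(device_secret, "https://accounts.example.com", challenge))

# A response produced for a look-alike phishing origin does not validate:
phish = sign_challenge(device_secret, "https://accounts.examp1e.com", challenge)
assert not hmac.compare_digest(resp, phish)
```

A password, by contrast, is the same string wherever you type it, which is exactly why typing it into the wrong site is fatal.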
As Google expands its own two-factor system on Android, I spoke with product manager Mark Risher about why Apple's new login button may not be as daunting as it seems.
It's hard to put a finger on the benefit of all these different login tools, but it feels like things are getting better? In my own experience, I'm asked for a password far less often than I was five years ago.
Right, and it is much better. With passwords, the conventional wisdom is capital letters and symbols and all that, which most of the planet believes is the best thing they can do to improve their security. But it actually has no bearing on phishing, no bearing on password breaches, no bearing on password reuse. We think it's much more important to reduce the total number of passwords out there. Once you federate accounts, you may still have a few passwords, but a new service you're just trying out doesn't need a 750-person team focused on security. It doesn't need to build its own password database and take on all the debt and risk that comes with it.
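In a federated sign-in (OpenID Connect, which "Sign in with Google" is built on), the service never sees a password; it receives a signed ID token carrying identity claims. The snippet below is a sketch of what such a token looks like inside. The issuer, user ID, and client ID are all invented, the signature segment is left empty, and a real implementation must verify the signature against the provider's published keys rather than merely decoding the payload:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as used in JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical ID-token claims like those an identity provider returns
# after a federated sign-in. Real tokens are signed (RS256/ES256).
claims = {
    "iss": "https://accounts.example.com",   # who issued the token
    "sub": "110169484474386276334",          # stable per-user identifier
    "aud": "my-client-id",                   # the app the token is for
    "iat": 1717000000,                       # issued-at (Unix time)
    "exp": 1717003600,                       # expiry (Unix time)
    "email": "user@example.com",
}

header = {"alg": "none", "typ": "JWT"}  # "none" only for this illustration
token = ".".join([
    b64url(json.dumps(header).encode()),
    b64url(json.dumps(claims).encode()),
    "",  # empty signature segment; real tokens carry one here
])

def decode_claims(jwt: str) -> dict:
    """Decode (NOT verify) the payload segment of a JWT."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

print(decode_claims(token)["sub"])
```

The point Risher is making falls out of this structure: the relying service stores only the `sub` identifier, not a password, so there's nothing for it to leak or for users to reuse.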
You also manage Google's SSO tools, which got some competition from Apple last week at WWDC. Part of the pitch seemed to be that Apple's SSO system collects less data and is more respectful of privacy. Do you think that's a fair criticism?
I put some of the blame on us for not really articulating what happens when you press the "Sign in with Google" button. A lot of people don't understand, and some competitors have spun it in the wrong direction. Maybe you think clicking the button tells all your friends that you've just signed in to something a little embarrassing. So having someone else out there revive the space and explain what it means and what's happening is genuinely good.
But there was a lot of insinuation wrapped around the launch suggesting that only one of these buttons is clean and the rest are corrupt, and of course I don't love that. We just log the time of authentication. It's not used for any kind of retargeting. It's not used for any kind of advertising. It's not shared anywhere. And it's partly there for user control, so people can go back and see what happened. We have a page, part of our Security Checkup, that says, "Here are all your connected apps, and you can go and break that connection." I haven't seen how Apple's product will be built, but it sounds like they will also be logging that moment of sign-in, and then relaying every email that company ever sends, which sounds much more invasive. But we'll see how the details shake out.
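"Break that connection" in OAuth terms is token revocation (RFC 7009). As a sketch, a client can revoke its own grant by POSTing the token to the provider's revocation endpoint; the URL below is the one Google's OAuth documentation lists, the token is a placeholder, and the request is only constructed, not sent:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Google's documented OAuth 2.0 revocation endpoint (per RFC 7009).
REVOKE_URL = "https://oauth2.googleapis.com/revoke"

def build_revocation_request(token: str) -> Request:
    # RFC 7009 takes the token as a form-encoded POST parameter.
    body = urlencode({"token": token}).encode()
    return Request(
        REVOKE_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = build_revocation_request("placeholder-access-token")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would invalidate the grant, which is what the Security Checkup page does on the user's behalf.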
I honestly believe this technology will be better for the internet and will make people much, much safer. Even when they click our competitors' buttons to sign in to websites, it's still better than typing in a bespoke username and password, or, more often, a reused username and password.
The basic premise of this kind of login is that you sign in once to Google (or Apple or Facebook) and then extend that login to everything else. But does that model still make sense? Why not have different security levels for different services instead of putting all our eggs in one basket?
Part of your premise is that there are high-security and low-security services. But the problem is that things don't stay in the low-security bucket. They evolve over time. When I first signed up for Facebook in 2006, I didn't have anything meaningful on there. Today it matters much more. And how many people go back and upgrade their protections? It's pretty rare. The second problem is that we see lots of lateral attacks, where someone doesn't go directly after your bank; they go after your friend or your assistant, then use that account to send a convincing message from them, asking for a wire transfer, or asking for the answer to your secret question, which they can then take and enter on the target site. The more of these accounts you leave loosely protected, the more vulnerable you are to that.
People often push back against the federated model by saying we're putting all our eggs in one basket. It rolls off the tongue, but I think it's the wrong metaphor. A better metaphor might be a bank. There are two ways to store your hundred dollars: you can spread it around the house, putting a dollar in each drawer and some under your mattress and all that. Or you can put it in a bank, which is one basket, but it's a basket protected by 12-inch-thick steel doors. That seems like the better option!
You also ran into some security questions around the Titan security key last year. Some security experts were concerned that any key manufactured in China was potentially vulnerable. How much do you worry about supply chain interference?
It's definitely part of the threat model. It's something we designed for all the way down to the protocol level. I think some of the response to the Titan key was unnecessarily alarmist, for a couple of reasons. One is that these concerns had always been part of our thinking, so we assume we can't trust people, no matter what country they're in. That's why the chip is sealed. The chip has a certificate attesting to it. The chip is not field-upgradeable. We made all of those choices deliberately, because by design we can't push code out there to change it. There were a lot of reasons why I didn't think this was the real threat people should be worried about.
In recent years there has been a big shift in how people think about tech privacy: not just trusting companies less, but also becoming aware of all the different ways things can go wrong when data is opened up, shared, and combined in different ways. How have you responded to that?
We've really gone through a paradigm shift. We used to say, it's your data, we just let you make a decision, and then it's on you. Now we're much more prescriptive, because our users are asking us to be much more prescriptive. You can see that manifested in the Security Checkup, which now gives you a personalized set of recommendations based on your own settings. It used to just say, you have 16 different devices, go see if anything looks suspicious. And users said, "No, why don't you tell me what's suspicious?" So now we say, "You have 16 devices. These four we haven't seen in 90 days. Are you sure you didn't lend one to a friend and forget to sign out, or, you know, sell it on eBay?" It's a delicate balance: how do you do just the right amount, while also giving people the kind of editorial layer of protection they expect?
There is a concern with Apple's login that even if it's a positive product, they're being too heavy-handed in forcing it on developers. You could say the same about many of the Google projects you're talking about. Are you worried about locking users in too tightly?
I do worry about it. That's the problem of cynicism. Cynicism is when people don't trust your motives. You say, "Here's a product that will keep you safer," and people say, "Hey, what are you going to do with it?" I think it's an ecosystem problem. We had a competitor who collected phone numbers as a security challenge, but then also used them to build up a graph for ad retargeting. That's bad for the whole ecosystem, because it makes people trust all of us less.
We try to set a very high bar. And we keep looking for places where we can refocus and revise our best practices and keep raising that bar. But to some extent, it's an ecosystem problem. The worst behavior in the market is what everyone sees. And that's why some of the insinuation from Apple was a little annoying, from our point of view. Because we really are trying to hold ourselves to a high standard.