If you listened to Waymo CEO John Krafcik's comments at the Frankfurt Auto Show, you may have detected a subtle dig at Tesla and the other big names in the automated driving space. Given how often the Alphabet company highlights its unparalleled depth of experience, and why its goal remains mastering the challenges of Level 4 autonomy, it can be easy to feel that you've heard comments like Krafcik's before. But with the benefit of historical context, which I have derived from the research that went into my book LUDICROUS: The Unvarnished Story of Tesla Motors, we can glean some important lessons from Krafcik's speech.
At that time, no one knew how close Tesla had come to becoming an Alphabet company itself. It wasn't until Ashlee Vance's biography of Elon Musk came out in 2015 that the story became public.
Part of Musk's promotional blitz, starting in the second quarter of 2013, involved talking about automated driving for the first time. Musk began by suggesting Tesla might use Google's technology to drive its cars, but by the second half of the year he was touting Tesla's own system, independent of the search giant's effort, inspiring headlines about his company leapfrogging Google. To that end, Musk said Tesla's system would offer automated driving for "90% of the miles driven" within three years, while dismissing full autonomy as "a bridge too far."
With the benefit of hindsight, it's clear that Musk was – at a minimum – either inspired or spooked into this direction by his look behind the curtain at Google's surprisingly advanced autonomous technology. But based on the latest revelations from Krafcik, Musk seems to have been more than just inspired: before that point, Google had extensively tested a highway-only "driver in the loop" system called "AutoPilot." According to Google/Waymo advisor Larry Burns's book Autonomy, Google developed "AutoPilot" [Burns doesn't use this name] through 2011, tested it in 2012, and decided at the end of that year not to take the product to market.
In short, Musk seems to have had a look at (or perhaps even a demonstration of) AutoPilot and decided that if Google wasn't going to take it to market, he would, right down to the internal name for the product. While not everyone would make the same decision with regard to a friendly company's product, especially after that company had made a generous offer to rescue his own, it is not difficult to understand why Musk did what he did. In trend-obsessed Silicon Valley, automated driving threatened to turn Tesla's electric vehicle technology into old news, and here was a fully scoped and demonstrated product that could put Tesla back into a game it would otherwise be surrendering. The problem, of course, was that Google had abandoned AutoPilot for good reason. The video of test drivers using AutoPilot, which Krafcik showed publicly for the first time in Frankfurt, shows drivers becoming deeply inattentive, putting on makeup, fiddling with phones and even falling asleep. The leaders of Google's self-driving program rightly realized that partial automation created a thorny human-machine problem that was arguably harder to fully solve than standalone Level 4 technology. Without an enormous amount of work on driver monitoring, operational design domain boundaries and other HMI challenges, AutoPilot was an irresponsible product to unleash on the public … and one that didn't even deliver the key benefits of autonomy.
It's hard to imagine that Musk learned about AutoPilot in the first quarter of 2013 without also learning Google's reasons for abandoning the product, but if he did learn about these risks he has played dumb about them ever since. Instead, he downplayed the challenges of Google's new direction, telling the media about the "incredible" challenges presented by "the last few percent" of miles driven and claiming that Google's lidar technology was "too expensive." Ever since, Musk has regularly used Level 4 autonomy as a whipping post, focusing the public's attention on its challenges and away from the key issues with Tesla's Autopilot strategy.
In the years since 2013, Waymo has slowly but surely made steady, iterative progress on its Level 4 technology without breaking into the consumer mass market. Tesla, on the other hand, has collected billions in market valuation and established itself as a household consumer brand on the strength of an Autopilot system that has now been implicated in numerous crashes and deaths. The very scenario Google's leadership feared, a fatal crash involving an inattentive AutoPilot user, has now happened several times … and yet rather than destroying confidence in the broader technology, it hasn't even dented Tesla's perceived position as a leader in automated driving.
On the one hand, this seems like a validation of Musk's notoriously ruthless and risk-tolerant approach to entrepreneurship (at PayPal he once gave away credit cards to basically anyone who wanted one). On the other hand, Musk's decision to ignore or dismiss Google's concerns, despite its outstanding research and knowledge of the subject, casts the subsequent Autopilot deaths, which occurred under precisely the circumstances Google worried about, in a troubling light. After all, Tesla's own engineers shared these concerns and pushed Musk to adopt driver monitoring, which Musk dismissed citing either the cost or the impossibility of making the technology work.
At some point, it becomes impossible to deny that Musk could have predicted the deaths of Gao Yaning, Josh Brown, Walter Huang, Jeremy Banner and possibly others (not to mention the countless non-fatal Autopilot crashes). One is forced to conclude that he risked these crashes because the benefits outweighed them, and no doubt the hype, headlines and share-price appreciation that subsequently flowed to Tesla and Musk were worth billions. The public is rightly appalled by the idea of automakers making recall decisions by weighing a cost of a few cents per share against the certainty of some number of human deaths, a trope popularized by Fight Club and proven by scandals like the Ford Pinto, GM's ignition switches and Takata's airbag defects, and yet Musk's cold-blooded calculus has not yet become a public scandal.
This is yet another example, along with Anthony Levandowski, of a certain amoral, self-enriching attitude that is perplexingly well tolerated in Silicon Valley. Waymo is constantly mocked or dismissed for its inability to deploy its Level 4 robotaxis in a viable business, while criticism of Tesla's decision to deploy Autopilot without the safeguards Google's testing showed it needed, a decision that has itself resulted in several deaths, is written off as the domain of anti-Tesla "haters" and kooks. Surely we can see by now, as the NTSB stacks up case after case of "foreseeable misuse" of Autopilot, that rewarding Musk's willingness to sacrifice human life for his own aggrandizement and enrichment creates a set of incentives that lead directly to dystopia.

Of course, there are reasons why Musk's amoral gambit has not been seen for what it is. Despite years of academic research backing up Google's findings, Autopilot's (and AutoPilot's) human-in-the-loop nature makes it possible to blame the human, even though all that research shows these systems will always lull drivers into inattention (especially when one or two somewhat discredited studies from major institutions purport to show the opposite). The US auto safety regulator, NHTSA, is not equipped to establish anything like "foreseeable misuse" (which differs greatly from the kinds of defects it is used to hunting), which requires the NTSB to build up a wealth of evidence before anyone acts, and Tesla's opaque handling of vehicle data makes it harder still for Tesla's owners, their loved ones, the media and regulators to determine that the problems identified by Google and countless academic researchers are really killing people.
Since so many participants in the public "debate" about the safety problems with Tesla's Autopilot have a financial interest in the company's stock, or simply enjoy using the system (or just like other aspects of the Tesla brand), there will always be someone defending Tesla. But the more important discussion here goes beyond Tesla itself: if one major automaker decided a system was not safe and another deployed it anyway, would anyone call the latter company a brave innovator even if people died because of its decision? What if they were aircraft manufacturers?
Whatever one thinks specifically about Elon Musk or Waymo or any individual, company or sector, what they do and how it is received creates incentives that the rest of us must live with. Letting Musk's Autopilot decision slide sets a deeply troubling precedent that will in turn inform someone else's decision to put your life at risk for the greater glory of their reputation. Ignoring the facts established by academic researchers, Waymo and the NTSB likewise contributes to the erosion of fact- and science-based discourse.
Even if you think the Tesla drivers who have died made a conscious choice (and Tesla has never really disclosed the research showing that "foreseeable misuse" of "Level 2+" systems is all but inevitable), those cars and drivers endanger many other people on the road who made no such choice. Elon Musk, for his part, made a conscious choice to deploy a system he knew had life-and-death problems, and did not deactivate it or withdraw it from the market even after people began to die.
Beyond that, Waymo's slow (sometimes seemingly glacial!) march toward truly driverless technology should be celebrated. It may not live up to the toxic expectations of Silicon Valley's hype culture, but it lives up to the most basic norms of human society. If that means we have to wait a little longer to feel that we are living in an epic future, so be it. At least when that future comes, it will have a chance to be more utopian than dystopian.