Those were the words of advice offered by philosopher, historian, and best-selling author Yuval Noah Harari last week during an on-stage conversation at Stanford University. The prolific writer (and agitator) has long been a critic of artificial intelligence applications that track, aggregate, and learn from our every move, gleaning insights about us that we are sometimes oblivious to ourselves.
"Get to know yourself better, "Harari said again in more modern English," because now, you've got competition. "Stanford's Human-Centered AI Institute, which aims to develop technologies to benefit humanity, co-sponsored the event. The conversation also featured computer science professor Fei-Fei Li, a pioneer in AI research and co-director of the multidisciplinary institute. Both speakers focused on what the future holds for AI, and how it can be used to support rather than subvert human interests. Not surprisingly, Harari and Li did not see eye to eye on the best path forward or on the scope and severity of the harm AI can unleash.
One of Li's suggestions, for example, was to develop AI systems that can explain their processes and decisions. But Harari argued that these technologies have become too complex to explain, and that this level of complexity can undermine our autonomy and authority.
While the conversation was mostly fruitful and productive, there were a few friendly jabs.
"I'm very envious of philosophers, because they can propose questions and crises, but they don't have to answer them," said Li. (Harari chuckled at that one.)
Perhaps Harari's laughter came from knowing that he was about to offer some solutions, though even his simplest takeaway, "know thyself," is easier said than done. The challenge of knowing ourselves better than AI systems know us is best illustrated with an anecdote shared by the philosopher himself. Harari told the audience that he didn't realize he was gay until he was 21.
"What does it mean to live in a world where you can learn something so important about yourself from an algorithm? ”he asked the audience. “And what if that algorithm doesn't share [this information] with you but with others — advertisers or an authoritarian regime?”
The risks of AI knowing too much about us must be addressed, both by outside critics like Harari and, increasingly, by engineers, educators, and other insiders like Li. But what happens when AI systems know too little about us, or about entire demographics?
Also last week, I attended Women Transforming Technology, an event that took place at the Palo Alto campus of technology company VMware. There, Joy Buolamwini, a researcher at the Massachusetts Institute of Technology's Media Lab, discussed problems of bias in AI applications. Much of Buolamwini's work has centered on the inability of facial recognition systems to accurately identify the faces of women and, to a much greater extent, people of color. As you can probably guess, these systems tend to have the hardest time recognizing the faces of women of color.
"These are the under-sampled majority of the world — women and people of color," Buolamwini customs her audience
The bias in many facial recognition applications starts with the data sets used to train these AI systems. According to Buolamwini, the majority of the images fed into these self-learning systems are of male, white faces. The benchmarks used to assess these systems are also optimized for male, white faces. This has real and potentially dangerous implications: Just imagine a self-driving vehicle that can't detect someone with dark skin as accurately as it can "see" someone with light skin.
It is these types of risks that led Buolamwini to start the Algorithmic Justice League, an organization aimed at highlighting and alleviating bias in AI systems. The "collective," as the M.I.T. researcher calls it, brings together coders, activists, regulators, and others to raise awareness of these important technological and societal issues.
Buolamwini's work has already led to improvements. During her talk, she pointed to recent gains in the accuracy of facial recognition systems from IBM, Facebook, and other companies on non-white and non-male subjects. But here's the rub: While Buolamwini is clearly pushing for more improvements in these systems, she is also worried about the applications of facial recognition technologies that work accurately on all people.

"You can have accurate facial recognition and put it on some drones, but it might not be the world you want to live in," Buolamwini told me during a sit-down interview after her talk. Her point was this: If a system is biased and it's being deployed for law enforcement purposes, you can't justify using that system. Now, let's say you've fixed that bias. Then the question becomes, in Buolamwini's words, "Do we want to live in a mass surveillance state?"
That is one question I'm pretty sure Buolamwini, Harari, and Li would all answer the same way: No.