
Artificial intelligence on the wrong track



Of course, a gay man! Just look at the narrow chin, the long nose and the high forehead. And lesbians, naturally, can be spotted by their wide chins and small foreheads. An AI system came to this conclusion last year after analyzing thousands of photos of openly homosexual people in search of supposedly typical facial features.

The news caused a sensation. Had physiognomy been vindicated – the pseudoscience that claims to be able to read human character from the face? In the last century it provided a foundation for racism and eugenics. So what had happened?

The Scientist Draws False Conclusions

Psychologist Michal Kosinski of Stanford University, otherwise a renowned researcher, published the study in the fall of 2017. Together with colleagues, he had used 35,000 photos from a dating platform, where users themselves disclose their sexual orientation, to train an image recognition algorithm. The new methods of machine learning are particularly good at finding patterns in data.

And the system did find patterns. According to Kosinski, the criteria for sexual orientation were "feminine facial expressions" in male faces (gay), little facial hair (gay) and darker skin (heterosexual). The system correctly identified 81 percent of gay men, Kosinski reported, and 74 percent of lesbian women. The researcher concluded that certain genes influence both a person's sexual identity and their appearance. How else could an artificial intelligence (AI) read sexual orientation from a face? But that was a gross error.

Fashion Makes the Difference

If you wanted an example of the dangers of machine learning, and above all of the misunderstandings between human and machine, you could hardly do better than this study. The new pattern recognition methods determine for themselves the criteria by which they classify people or things. Sometimes these are mere chance correlations – and people then make the mistake, as Kosinski did, of inferring causes from them. Researchers therefore complain time and again that one cannot "look inside the mind" of such systems – one never knows which features the AI considers relevant.

Among the sharpest skeptics of this rapidly returning physiognomy is Alexander Todorov, head of the Social Perception Lab at Princeton University. Unlike Kosinski, he works not with algorithms but with people, investigating how stereotypes and prejudices arise. "Even humans are better than chance at recognizing gays or lesbians by their appearance," he says. But that is not because hormones give gay men lighter skin or lesbians masculine features; it has to do with social stereotypes.

This is suggested by a study that Todorov co-authored with Margaret Mitchell and Blaise Agüera y Arcas, two machine learning experts at Google Research. They surveyed 8,000 freelancers on the internet about their sexual orientation and their fashion preferences. Among other things, it emerged that heterosexual women wear makeup far more often than lesbians. Homosexuals more often wear glasses, while heterosexual men tend to avoid glasses and wear contact lenses. These are all features that allow an AI to distinguish gay from straight. "But this has nothing to do with hormones, as Kosinski suggested," explains Todorov, but with fashions and conventions in different social groups.

End of Modern Physiognomy

Another weak point is selfies: how people photograph themselves is also subject to fashion. Straight men photograph themselves from below (it makes them look taller), straight women from above (presumably for beautiful big eyes), while gays and lesbians more often shoot themselves straight on. And it is precisely this perspective that shifts the apparent facial proportions in exactly the direction that Kosinski and his AI took to be typically gay or lesbian: homosexual men appear to have narrower jaws, longer noses and larger foreheads, while lesbian faces appear to have bigger jaws, shorter noses and smaller foreheads.

This spells the end of modern physiognomy: Kosinski's system recognizes homosexuals by secondary, non-biological features. The conclusion that genetics writes our sexual orientation into our faces is wrong.

The statistics of studies like Kosinski's also need to be put into perspective. Recognition rates of around 80 percent, for example, are no longer so spectacular once you know the experimental setup: one photo was randomly drawn from the gay group and one from the heterosexual group, and the software only had to say which was which – settling one picture automatically settled the other. Simply guessing would therefore already achieve a recognition rate of 50 percent. Against that baseline, 80 percent is not that impressive.
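To see how this setup fixes the baseline, here is a minimal sketch of the paired-comparison evaluation in Python; the names and the coin-flip "classifier" are purely illustrative, not Kosinski's actual code.

    import random

    def paired_accuracy(classifier, gay_photos, straight_photos, trials=10_000):
        # Draw one photo from each group; the software only has to say
        # which is which, so settling one picture settles the other.
        correct = 0
        for _ in range(trials):
            a = random.choice(gay_photos)
            b = random.choice(straight_photos)
            if classifier(a) > classifier(b):  # higher score = "more likely gay"
                correct += 1
        return correct / trials

    # A "classifier" that ignores the photo entirely and flips a coin:
    coin_flip = lambda photo: random.random()
    print(paired_accuracy(coin_flip, list(range(100)), list(range(100))))
    # prints roughly 0.5 - the true baseline against which 80 percent
    # must be judged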

Algorithms Decide by Unexpected Criteria

The same goes for a study by Chinese researchers who claimed to be able to read criminality from faces. They had fed their software almost 2,000 images, half of them of convicted criminals. A neural network then identified the criminals with 89.5 percent accuracy. But the images of the convicted came from a different database than those of the innocent. "The convicted were all wearing T-shirts," says Todorov. "If you feed that to an AI, it will of course recognize the criminals – by the T-shirt." Not by the face. "Such a system doesn't make you smarter, it makes you dumber."

While the Chinese researchers and Kosinski sell their results as a success, other researchers want to better understand the mechanisms of the algorithms and find out by which criteria neural networks sort images. For autonomous cars, for example, it is problematic not to know which features of the road scene the AI at the wheel considers important. And sometimes it comes down to unexpected details.

One algorithm trained to recognize pictures of horses, for instance, was very good at sorting them. But researchers led by Wojciech Samek of the Fraunhofer Heinrich Hertz Institute in Berlin showed that it relied not on any characteristics of the horses, but solely on a copyright notice at the edge of the images – a common feature the original researchers had not noticed.

Traffic Signs Are No Problem

The research team has therefore developed a method for making a neural network's decision comprehensible. They run an image backwards through the network and can see which group of neurons made which decision at which stage, and what weight it carried in the final result. For example, they could show that in recognizing trains the software relied on the rails and the platform edge – the train itself the network did not consider particularly important. It would therefore probably also claim to see a train in a picture showing nothing but rails and a platform.
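The backwards analysis described here resembles what is known as layer-wise relevance propagation. The following toy sketch in Python applies one common variant (the epsilon rule) to a tiny fully connected ReLU network with made-up weights; it is illustrative only and is not the Fraunhofer team's actual code.

    import numpy as np

    def lrp_linear(a, W, b, R_out, eps=1e-6):
        # Redistribute the relevance R_out of a layer's outputs onto its
        # inputs, in proportion to each input's contribution a_j * W_jk.
        z = a @ W + b                           # forward pre-activations
        s = R_out / (z + np.where(z >= 0, eps, -eps))  # stabilized ratios
        return a * (W @ s)                      # relevance per input neuron

    # Tiny two-layer ReLU network (assumption: in a real analysis the
    # trained weights would be loaded here instead of random ones).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
    W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

    x = rng.random(4)                  # "pixels" of a toy input image
    h = np.maximum(0, x @ W1 + b1)     # hidden layer
    y = h @ W2 + b2                    # class scores

    # Put all relevance on the predicted class, then walk backwards.
    R = np.zeros(2); R[np.argmax(y)] = y.max()
    R_h = lrp_linear(h, W2, b2, R)     # relevance of hidden neurons
    R_x = lrp_linear(x, W1, b1, R_h)   # relevance of each input "pixel"
    print(R_x)  # shows which inputs the decision actually rested on

Applied to real images, such relevance maps are what revealed that the "horse" classifier was looking at the copyright notice and the "train" classifier at the rails.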

Other researchers are working on less error-prone image recognition methods. Marc Tschentscher of the University of Bochum, for example, develops software for autonomous vehicles, which obviously must recognize traffic signs beyond any doubt. When training his algorithms, he himself specifies certain features on which the network should chiefly rely – the color red, for instance, in the case of a stop sign. This produces very reliable results, especially since the jungle of signs contains only a manageable number of motifs. Even raindrops on the windscreen, a changing position of the sun or a windscreen wiper in the picture do not confuse these systems. "Traffic sign recognition is considered a solved problem in image recognition," he says. That much, at least.
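The idea of specifying a feature by hand, rather than letting the network pick its own, can be sketched as follows. The thresholds and names are assumptions for illustration, not Tschentscher's actual software.

    import numpy as np

    def red_fraction(image_rgb: np.ndarray) -> float:
        # Share of pixels that are strongly red (r high, g and b low) -
        # a hand-crafted feature a stop-sign detector could rely on.
        r = image_rgb[..., 0].astype(float)
        g = image_rgb[..., 1].astype(float)
        b = image_rgb[..., 2].astype(float)
        red_mask = (r > 120) & (r > 1.5 * g) & (r > 1.5 * b)
        return float(red_mask.mean())

    # Toy frame with a red patch standing in for a sign:
    frame = np.zeros((64, 64, 3), dtype=np.uint8)
    frame[16:48, 16:48] = (200, 20, 20)
    print(red_fraction(frame))  # -> 0.25: a quarter of the frame is red

Because the classifier receives features like this instead of having to invent its own criteria, it has far less opportunity to latch onto irrelevant correlations such as watermarks or T-shirts.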

(SonntagsZeitung)

Created: 29/04/2018, 11:07

