Artificial intelligence is misreading human emotion

Kate Crawford writes:

At a remote outpost in the mountainous highlands of Papua New Guinea, a young American psychologist named Paul Ekman arrived with a collection of flash cards and a new theory. It was 1967, and Ekman had heard that the Fore people of Okapa were so isolated from the wider world that they would be his ideal test subjects.

Like Western researchers before him, Ekman had come to Papua New Guinea to extract data from the indigenous community. He was gathering evidence to bolster a controversial hypothesis: that all humans exhibit a small number of universal emotions, or affects, that are innate and the same all over the world. For more than half a century, this claim has remained contentious, disputed among psychologists, anthropologists, and technologists. Nonetheless, it became a seed for a growing market that will be worth an estimated $56 billion by 2024. This is the story of how affect recognition came to be part of the artificial-intelligence industry, and the problems that presents.

When Ekman arrived in the tropics of Okapa, he ran experiments to assess how the Fore recognized emotions. Because the Fore had minimal contact with Westerners and mass media, Ekman theorized that their recognition and display of core expressions would prove that such expressions were universal. His method was simple. He would show them flash cards of facial expressions and see if they described the emotion as he did. In Ekman’s own words, “All I was doing was showing funny pictures.” But Ekman had no training in Fore history, language, culture, or politics. His attempts to conduct his flash-card experiments using translators foundered; he and his subjects were exhausted by the process, which he described as like pulling teeth. Ekman left Papua New Guinea, frustrated by his first attempt at cross-cultural research on emotional expression. But this would be just the beginning.

Today affect-recognition tools can be found in national-security systems and at airports, in education and hiring start-ups, in software that purports to detect psychiatric illness and policing programs that claim to predict violence. The claim that a person’s interior state can be accurately assessed by analyzing that person’s face is premised on shaky evidence. A 2019 systematic review of the scientific literature on inferring emotions from facial movements, led by the psychologist and neuroscientist Lisa Feldman Barrett, found there is no reliable evidence that you can accurately predict someone’s emotional state in this manner. “It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts,” the study concludes. So why has the idea that there is a small set of universal emotions, readily interpreted from a person’s face, become so accepted in the AI field?

To understand that requires tracing the complex history and incentives behind how these ideas developed, long before AI emotion-detection tools were built into the infrastructure of everyday life. [Continue reading…]
