If AI becomes conscious, how will we know?
In 2022, Google engineer Blake Lemoine made headlines—and got himself fired—when he claimed that LaMDA, the chatbot he’d been testing, was sentient. Artificial intelligence (AI) systems, especially so-called large language models such as LaMDA and ChatGPT, can certainly seem conscious. But they’re trained on vast amounts of text to imitate human responses. So how can we really know?
Now, a group of 19 computer scientists, neuroscientists, and philosophers has come up with an approach: not a single definitive test, but a lengthy checklist of attributes that, together, could suggest but not prove an AI is conscious. In a 120-page discussion paper posted as a preprint this week, the researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT.
None of these systems is likely to be conscious, they conclude. But the work offers a framework for evaluating increasingly humanlike AIs, says co-author Robert Long of the San Francisco–based nonprofit Center for AI Safety. “We’re introducing a systematic methodology previously lacking.”
Adeel Razi, a computational neuroscientist at Monash University and a fellow at the Canadian Institute for Advanced Research (CIFAR) who was not involved in the new paper, says that is a valuable step. “We’re all starting the discussion rather than coming up with answers.”
Until recently, machine consciousness was the stuff of science fiction movies such as Ex Machina. “When Blake Lemoine was fired from Google after being convinced by LaMDA, that marked a change,” Long says. “If AIs can give the impression of consciousness, that makes it an urgent priority for scientists and philosophers to weigh in.” Long and philosopher Patrick Butlin of the University of Oxford’s Future of Humanity Institute organized two workshops on how to test for sentience in AI.