AI’s ostensible emergent abilities are a mirage

Stanford University Human-Centered AI:

For a few years now, tech leaders have been touting AI’s supposed emergent abilities: the claim that beyond a certain threshold of scale, large language models (LLMs) begin doing unpredictable things. If we can harness that capacity, the story goes, AI might be able to solve some of humanity’s biggest problems. But unpredictability is also scary: Could making a model bigger unleash a completely unpredictable and potentially malevolent actor into the world?

That concern is widely shared in the tech industry. Indeed, a recently publicized open letter signed by more than 1,000 tech leaders calls for a six-month pause on giant AI experiments as a way to step back from “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

But according to a new paper, we can perhaps put that particular concern about AI to bed, says lead author Rylan Schaeffer, a second-year graduate student in computer science at Stanford University. “With bigger models, you get better performance,” he says, “but we don’t have evidence to suggest that the whole is greater than the sum of its parts.”

Indeed, as he and his colleagues Brando Miranda, a Stanford PhD student, and Sanmi Koyejo, an assistant professor of computer science, show, the perception of AI’s emergent abilities is based on the metrics that have been used. “The mirage of emergent abilities only exists because of the programmers’ choice of metric,” Schaeffer says. “Once you investigate by changing the metrics, the mirage disappears.” [Continue reading…]
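To make the metric argument concrete, here is a rough, hypothetical simulation (not the authors’ data or code): suppose a model’s per-token accuracy improves smoothly with scale. A nonlinear metric such as exact match over a long sequence can still sit near zero and then shoot upward, which looks like a sudden emergent ability, while a linear per-token metric shows the same underlying improvement as steady progress. The parameter counts and accuracy curve below are made up purely for illustration.

```python
import numpy as np

# Hypothetical, smooth improvement in per-token accuracy with model scale.
scales = np.logspace(7, 12, 20)              # assumed parameter counts (illustrative)
per_token_acc = 1 - 0.9 * scales ** -0.1     # assumed smooth power-law trend

seq_len = 30  # task scored as correct only if all 30 tokens are right

# Nonlinear metric: exact match over the whole sequence.
# Multiplying per-token accuracies keeps the score near zero for small models,
# then it rises sharply -- an apparent "emergent" jump.
exact_match = per_token_acc ** seq_len

# Linear metric: expected fraction of correct tokens.
# The same models now show gradual, predictable gains.
for n, em, ta in zip(scales, exact_match, per_token_acc):
    print(f"{n:12.3g} params | exact match {em:6.4f} | per-token acc {ta:5.3f}")
```

Under these assumed numbers, the sequence-level score stays close to zero until the largest scales and then climbs quickly, while the per-token score improves smoothly throughout, which is the kind of metric-induced contrast the paper describes.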
