Is conscious AI possible?

Andrew Maynard writes:

It seems that barely a week goes by these days without some speculation that the current crop of large language models, novel AI systems, or even the internet itself is showing signs of consciousness.

These claims are usually met with a good dose of skepticism. But under the surface there’s often a surprisingly strong conflation of advanced artificial intelligence with conscious machines — to the extent that much of the current wave of AI acceleration being pushed by companies like OpenAI and Anthropic rests on a belief that future human flourishing depends on superintelligent machines that are likely to exhibit some form of conscious behavior.

Yet in a new paper, leading neuroscientist Anil Seth argues that consciousness may be a unique product of biology, and thus unreachable through current approaches to AI.

The arguments Seth makes are critically important to how advances in AI are framed and pursued, and to the responsible and ethical development of AI more broadly. They also call into question whether current efforts to achieve artificial general intelligence (AGI) and superintelligence are, in fact, feasible — or even advisable.

Seth’s paper is as refreshing as it is necessary. In stark contrast to some of the hyperbolic and hubristic proclamations coming from proponents of advanced AI, Seth writes with clarity, authority, and — importantly — humility. He readily admits that his ideas may be wrong, and invites people to test and build on his thinking.

Yet his reasoning is compelling, and is grounded in a deep understanding of research and thinking on consciousness.

At the core of the paper is the possibility that consciousness is not simply a function of computational power, but rather an emergent or embedded property of systems that are biological, or that at least share the essential nature of biological systems.

If this is right — and Seth notes that it is highly probable, though not certain — it means that consciousness cannot arise simply from software or algorithms running on banks of GPUs or highly networked computers, or even on personal computers and mobile devices.

If true, this would throw a rather large wrench into the ambitions of developers striving to create advanced AI simply by scaling compute capabilities on digital hardware. [Continue reading…]