How AGI has become the most consequential conspiracy theory of our time

Will Douglas Heaven writes:

Are you feeling it?

I hear it’s close: two years, five years—maybe next year! And I hear it’s going to change everything: it will cure disease, save the planet, and usher in an age of abundance. It will solve our biggest problems in ways we cannot yet imagine. It will redefine what it means to be human.

Wait—what if that’s all too good to be true? Because I also hear it will bring on the apocalypse and kill us all …

Either way, and whatever your timeline, something big is about to happen.

We could be talking about the Second Coming. Or the day when Heaven’s Gaters imagined they’d be picked up by a UFO and transformed into enlightened aliens. Or the moment when Donald Trump finally decides to deliver the storm that Q promised. But no. We’re of course talking about artificial general intelligence, or AGI—that hypothetical near-future technology that (I hear) will be able to do pretty much whatever a human brain can do.

For many, AGI is more than just a technology. In tech hubs like Silicon Valley, it’s talked about in mystical terms. Ilya Sutskever, cofounder and former chief scientist at OpenAI, is said to have led chants of “Feel the AGI!” at team meetings. And he feels it more than most: In 2024, he left OpenAI, whose stated mission is to ensure that AGI benefits all of humanity, to cofound Safe Superintelligence, a startup dedicated to figuring out how to avoid a so-called rogue AGI (or control it when it comes). Superintelligence is the hot new flavor—AGI but better!—introduced as talk of AGI becomes commonplace.

Sutskever also exemplifies the mixed-up motivations at play among many self-anointed AGI evangelists. He has spent his career building the foundations for a future technology that he now finds terrifying. “It’s going to be monumental, earth-shattering—there will be a before and an after,” he told me a few months before he quit OpenAI. When I asked him why he had redirected his efforts into reining that technology in, he said: “I’m doing it for my own self-interest. It’s obviously important that any superintelligence anyone builds does not go rogue. Obviously.”

He’s far from alone in his grandiose, even apocalyptic, thinking. [Continue reading…]