Will AI avoid the ‘enshittification’ trap?

Steven Levy writes:

As companies like OpenAI get more powerful, and as they try to pay back their investors, will AI be prone to the erosion of value that seems endemic to the tech apps we use today?

Writer and tech critic Cory Doctorow calls that erosion “enshittification.” His premise is that platforms like Google, Amazon, Facebook, and TikTok start out aiming to please users, but once the companies vanquish competitors, they intentionally become less useful to reap bigger profits. After WIRED republished Doctorow’s pioneering 2022 essay about the phenomenon, the term entered the vernacular, mainly because people recognized that it was totally on the mark. Enshittification was chosen as the American Dialect Society’s 2023 Word of the Year. The concept has been cited so often that it transcends its profanity, appearing in venues that normally would hold their noses at such a word. Doctorow just published an eponymous book on the subject; the cover image is the emoji for … guess what.

If chatbots and AI agents become enshittified, it could be worse than Google Search becoming less useful, Amazon results getting plagued with ads, and even Facebook showing less social content in favor of anger-generating clickbait.

AI is on a trajectory to be a constant companion, giving one-shot answers to many of our requests. People already rely on it to help interpret current events and get advice on all sorts of buying choices—and even life choices. Because of the massive costs of creating a full-blown AI model, it’s fair to assume that only a few companies will dominate the field. All of them plan to spend hundreds of billions of dollars over the next few years to improve their models and get them into the hands of as many people as possible. Right now, I’d say AI is in what Doctorow calls the “good to the users” stage. But the pressure to make back the massive capital investments will be tremendous—especially for companies whose user base is locked in. Those conditions, as Doctorow writes, allow companies to abuse their users and business customers “to claw back all the value for themselves.”

When one imagines the enshittification of AI, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies have paid for placement. That’s not happening now, but AI firms are actively exploring the ad space. In a recent interview, OpenAI CEO Sam Altman said, “I believe there probably is some cool ad product we can do that is a net win to the user and a sort of positive to our relationship with the user.” Meanwhile, OpenAI just announced a deal with Walmart so the retailer’s customers can shop inside the ChatGPT app. Can’t imagine a conflict there! The AI search platform Perplexity has a program where sponsored results appear in clearly labeled follow-ups. But, it promises, “these ads will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.”

Will those boundaries hold? Perplexity spokesperson Jesse Dwyer tells me, “For us, the number one guarantee is that we won’t let it.” And at OpenAI’s recent developer day, Altman said that the company is “hyper aware of the need to be very careful” about serving its users rather than serving itself. The Doctorow doctrine doesn’t give much credence to statements like that: “Once a company can enshittify its products, it will face the perennial temptation to enshittify its products,” he writes in his book. [Continue reading…]
