Non-consensual AI is taking over
If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle. The story, according to Johansson’s lawyers, goes like this: About nine months ago, OpenAI CEO Sam Altman approached the actor with a request to license her voice for a new conversation feature in ChatGPT; Johansson declined. She alleges that just two days before the company’s keynote event last week—in which that feature, a version of which launched last September, was highlighted as part of a new system called GPT-4o—Altman reached out to Johansson’s team, urging her to reconsider. Johansson and Altman allegedly never spoke, and Johansson never granted OpenAI permission to use her voice. Nevertheless, the company debuted GPT-4o two days later—drawing attention to the “Sky” voice, which many believed was alarmingly similar to Johansson’s.
Johansson told NPR that she was “shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine.” In response, Altman issued a statement denying that the company had cloned her voice and saying that it had already cast a different voice actor before reaching out to Johansson. (I’d encourage you to listen for yourself.) Curiously, Altman said that OpenAI would take down Sky’s voice from its platform “out of respect” for Johansson. This is a messy situation for OpenAI, complicated by Altman’s own social-media posts. On the day that OpenAI announced GPT-4o, Altman posted a cheeky, one-word statement on X: “Her”—a reference to the 2013 film of the same name, in which Johansson is the voice of an AI assistant that a man falls in love with. The post is reasonably damning, implying that Altman was aware, even proud, of the similarities between Sky’s voice and Johansson’s.
On its own, this seems to be yet another example of a tech company blowing past ethical concerns and operating with impunity. But the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data. At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.