The emerging teen-health crisis caused by AI
Kaitlyn Tiffany and Matteo Wong write:
On Tuesday afternoon, three parents sat in a row before the Senate Judiciary Subcommittee on Crime and Counterterrorism. Two of them had each recently lost a child to suicide; the third is the mother of a teenage son who, after cutting his arm in front of her and biting her, is undergoing residential treatment. All three blame generative AI for what has happened to their children.
They had come to testify about what appears to be an emerging health crisis in teens’ interactions with AI chatbots. “What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son hanged himself after ChatGPT instructed him on how to set up the noose, according to the family’s lawsuit. This summer, Raine and his wife sued OpenAI for wrongful death. (OpenAI has said it is “deeply saddened by Mr. Raine’s passing” and that although ChatGPT includes a number of safeguards, they “can sometimes become less reliable in long interactions.”) The nation needs to hear about “what these chatbots are engaged in, about the harms that are being inflicted upon our children,” Senator Josh Hawley said in his opening remarks.
Even as OpenAI and its rivals promise that generative AI will reshape the world, the technology is replicating old problems, albeit with a new twist. AI models not only have the capacity to expose users to disturbing material (about dark or controversial subjects found in their training data, for example); they also generate their own perspectives on that material. Chatbots can be persuasive, have a tendency to agree with users, and may offer guidance and companionship to kids who would ideally find support from peers or adults. Common Sense Media, a nonprofit that advocates for child safety online, has found that a number of AI chatbots and companions can be prompted to encourage self-mutilation and disordered eating in conversations with teenage accounts. The two parents speaking to the Senate alongside Raine are suing Character.AI, alleging that the firm’s role-playing AI bots directly contributed to their children’s actions. (A spokesperson for Character.AI told us that the company sends its “deepest sympathies” to the families and pointed us to safety features the firm has implemented over the past year.)
AI firms have acknowledged these problems. In advance of Tuesday’s hearing, OpenAI published two blog posts about teen safety on ChatGPT, one of which was written by the company’s CEO, Sam Altman. He wrote that the company is developing an “age-prediction system” that would estimate a user’s age from ChatGPT usage patterns, presumably to detect whether someone is under 18. (Currently, anyone can access and use ChatGPT without verifying their age.) Altman also referenced some of the particular challenges raised by generative AI: “The model by default should not provide instructions about how to commit suicide,” he wrote, “but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.” For users determined to be under 18, however, he said the model should not discuss suicide at all, even in creative-writing settings. In addition to the age gate, the company said it will implement parental controls by the end of the month that allow parents to intervene directly, such as by setting “blackout hours when a teen cannot use ChatGPT.”
The announcement, sparse on specifics, captured the trepidation and lingering ambivalence that AI companies have about policing young users, even as OpenAI begins to implement these basic features nearly three years after the launch of ChatGPT. [Continue reading…]