OpenAI says it is rolling out new safety measures for ChatGPT users under 18
OpenAI announced Tuesday that it is directing teens to an age-appropriate version of its ChatGPT technology as it seeks to bolster safeguards amid a period of heightened scrutiny over the chatbot’s safety.
Users of the chatbot identified as under the age of 18 will automatically be directed to a version of ChatGPT governed by “age-appropriate” content rules, OpenAI said in a statement. This under-18 version includes protections such as blocking sexual content and, “in rare cases of acute distress,” potentially involving law enforcement to ensure a user’s safety, according to the company.
“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” the company said in the announcement.
OpenAI also said it is introducing parental controls, such as enabling parents to link their account to their teen’s account, manage chat history, set blackout hours and more. The safeguards will be available by the end of September.
The announcement comes just days after the Federal Trade Commission (FTC) launched a probe into the potential negative effects of AI chatbot companions on children and teens. OpenAI said that it’s prioritizing “making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved.”
Before the FTC probe, OpenAI had indicated it would introduce extra safety protections for vulnerable users and teens after the parents of 16-year-old Adam Raine of California, who died by suicide in April, sued the company late last month. Raine’s family alleges that ChatGPT led their teen to take his own life.
It’s unclear how OpenAI plans to identify users’ ages. However, the company stated that if ChatGPT is unsure of someone’s age or has incomplete information, it will default to the under-18 version.
OpenAI did not immediately respond to CBS MoneyWatch’s request for comment.
Other tech companies have taken similar steps to shield teen users from inappropriate content. YouTube, for example, announced a new age-estimation technology that will use signals such as the types of videos users watch and how long they’ve had their account to estimate whether they are under the age of 18.
According to an April Pew Research Center report, parents are generally more worried about the mental health of teenagers than are teens themselves. Among those parents who are at least somewhat concerned about teen mental health, 44% said social media had the biggest negative impact on adolescents.