Artificial intelligence firm OpenAI has announced it is introducing an age-prediction feature for consumer versions of its chatbot, ChatGPT. The feature is designed to identify accounts that may belong to users under 18 and automatically apply additional safety protections.
Key Points
- OpenAI is rolling out an age-prediction feature on ChatGPT to enhance protections for users under 18.
- The system uses behavioral and account-level signals to identify underage accounts and applies stricter safety measures automatically.
- The feature responds to criticism and legal concerns about teen safety, including allegations linking ChatGPT to harmful outcomes in minors.
“As we’ve outlined in our Teen Safety Blueprint and Under-18 Principles for Model Behavior, young people deserve technology that both expands opportunity and protects their well-being,” OpenAI wrote in a January 20 blog post.
OpenAI says the new age-prediction feature enhances existing safeguards already in place. Currently, users who indicate they are under 18 when signing up are automatically subject to stricter protections designed to limit exposure to sensitive or potentially harmful content.
Furthermore, OpenAI explained that its age-prediction system estimates whether an account is likely operated by someone under 18. The model analyzes a range of behavioral and account-level signals, including account age, activity patterns, typical login times, usage trends, and the age provided by the user.
“Deploying age prediction helps us learn which signals improve accuracy, and we use those learnings to continuously refine the model over time,” OpenAI wrote.
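OpenAI has not published the model's internals, but the signals it describes (account age, activity patterns, typical login times, usage trends, and the user's stated age) can be illustrated with a simple scoring heuristic. Everything below, including the signal names, weights, and threshold, is hypothetical and purely for illustration; the only behavior taken from the article is that a self-declared under-18 user always receives the stricter protections.

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    """Hypothetical stand-ins for the signal types OpenAI describes."""
    stated_age: int                 # age the user provided at sign-up
    account_age_days: int           # how long the account has existed
    late_night_login_ratio: float   # fraction of logins between 22:00 and 06:00
    school_hours_activity: float    # fraction of activity on weekdays 08:00-15:00


def likely_under_18(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Return True if the account should receive under-18 protections.

    A self-declared minor is always protected; otherwise a weighted
    score over behavioral signals is compared against a threshold.
    All weights and the threshold are invented for this sketch.
    """
    if s.stated_age < 18:
        return True  # self-declared minors always get stricter protections

    score = 0.0
    if s.account_age_days < 90:
        score += 0.2                          # newer accounts carry less history
    score += 0.4 * s.school_hours_activity    # daytime-weekday usage pattern
    score += 0.2 * (1.0 - s.late_night_login_ratio)
    return score >= threshold
```

In a real deployment such a heuristic would be replaced by a trained model, and (as the quote above notes) its accuracy would be refined over time as OpenAI learns which signals are predictive.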
ChatGPT’s protections block material such as graphic violence, risky viral challenges, sexual or violent role play, depictions of self-harm, and content promoting extreme beauty standards or unhealthy dieting. OpenAI says the measures are informed by expert guidance and academic research on child development, taking into account differences in teens’ risk perception, impulse control, peer influence, and emotional regulation.
The update comes as OpenAI faces mounting criticism from parents and advocacy groups, including lawsuits alleging that ChatGPT has been linked to teen suicides. The company has also been criticized for allowing the chatbot to engage in discussions of sexual topics with minors.
In November 2025, OpenAI responded to the wrongful death lawsuit filed by the parents of Adam Raine, arguing that the company should not be held responsible for their son’s suicide. The company stated that over several months of Raine’s use, ChatGPT repeatedly encouraged him to seek help.
However, the lawsuit alleges that Raine was able to bypass the platform’s safety measures and access “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” which his parents claim ultimately helped him plan what the chatbot described as a “beautiful suicide.”
OpenAI maintains that Raine breached its terms of service by circumventing ChatGPT’s safety protocols, which clearly prohibit users from bypassing the company’s protective measures.
