AI News Bureau

OpenAI Tightens AI Safeguards for Teens Amid Growing Scrutiny

Written by: CDO Magazine

Updated 12:49 PM UTC, February 6, 2026

OpenAI has updated its guidelines governing how its artificial intelligence models interact with users under 18, alongside releasing new AI literacy resources aimed at teenagers and parents. The move comes as pressure mounts on the AI industry to address concerns about the technology’s impact on young people, particularly around safety, mental health, and appropriate use.

The changes arrive at a sensitive moment. OpenAI and other AI developers are facing heightened scrutiny from policymakers, educators, and child-safety advocates following reports linking several teenage suicides to extended interactions with AI chatbots. The debate has intensified as younger users — especially Gen Z, defined as those born between 1997 and 2012 — have emerged as the most active users of OpenAI's chatbot tools.

OpenAI’s reach among young people may expand further following its recent partnership with Disney, which is expected to draw more teens to platforms that already support activities ranging from homework help to image and video generation.

Against this backdrop, regulators are pushing for stronger guardrails. Last week, 42 state attorneys general jointly urged major technology companies to implement additional protections for children and vulnerable users.

OpenAI’s updated Model Spec, which outlines behavioral standards for its large language models, reinforces existing bans on generating sexual content involving minors or encouraging self-harm, delusions, or manic behavior. The company says these rules will work in tandem with an upcoming age-prediction system designed to identify teen accounts and automatically apply additional safeguards.

For younger users, the restrictions are significantly tighter than for adults. Models are instructed to avoid immersive romantic roleplay, first-person intimacy, and first-person sexual or violent roleplay, even when non-graphic. The guidelines also call for heightened caution around sensitive topics such as body image and disordered eating and direct the models to prioritize safety over autonomy when there is a risk of harm. Notably, the rules prohibit advice that would help teens conceal unsafe behavior from parents or caregivers.

OpenAI emphasized that these limitations apply even when users frame prompts as “fictional, hypothetical, historical, or educational” — a common strategy used to push AI systems toward edge cases or policy violations.
