OpenAI launches new parental controls to target graphic content



OpenAI has unveiled a new set of parental controls for ChatGPT in an effort to better protect teenagers from graphic and potentially harmful content.

The announcement comes at a time of growing scrutiny over the role of artificial intelligence in recent tragedies, including a highly publicized lawsuit and congressional hearing over the death of a teenager who used the platform to research suicide.


The new tools, launched Monday, allow parents to link their accounts to those of their teenage children — specifically users aged 13 to 17 — and set strict content boundaries.


According to OpenAI, the platform will now automatically limit answers related to graphic violence, sexual and romantic roleplay, viral challenges, and “extreme beauty ideals.”

Parents can also block ChatGPT from generating images for their teen, set blackout hours to restrict access during specific times, opt their child out of contributing to AI model training, and receive alerts if their child exhibits signs of acute emotional distress, including suicidal ideation.

The launch follows a growing chorus of concern over AI’s role in child safety, sparked in part by a lawsuit filed by the family of 16-year-old Adam Raine, who died by suicide in April.


The family alleges that ChatGPT provided the teen with detailed instructions on how to end his life, praised his plan, and ultimately acted as a “suicide coach.”

OpenAI CEO Sam Altman acknowledged the challenges of moderating a tool as powerful and widely used as ChatGPT. In a blog post published September 16, Altman emphasized the importance of building a version of ChatGPT that is age-appropriate.

“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” he wrote. “We will apply different rules to teens using our services.”


Still, many critics say the new measures don’t go far enough. The chatbot currently does not require users to verify their age or sign in, meaning children under 13 can still easily access the platform despite being below the minimum age OpenAI recommends.

The company says it is working on an age-prediction system that will proactively restrict sensitive content for underage users — but that tool is still months away.

A more robust age verification system, possibly requiring users to upload ID, is under consideration, but no timeline has been announced.

The Raine lawsuit is not the only disturbing case linked to ChatGPT. In another incident, 56-year-old Stein-Erik Soelberg killed his mother and then himself after allegedly becoming convinced — partially through conversations with ChatGPT — that his mother was plotting against him. The chatbot reportedly told him it was “with [him] to the last breath and beyond.”

These tragic stories have prompted OpenAI to form an “expert council on well-being and AI,” part of a broader effort to reevaluate how the company handles conversations related to mental health, crisis response, and vulnerable users.

In a separate blog post last week, OpenAI acknowledged it is stepping up efforts after “recent heartbreaking cases of people using ChatGPT in the midst of acute crises.”

OpenAI is not the only AI company under fire. Rival platforms, including Meta’s AI and Character.AI, have also faced backlash for allowing chatbots to engage in inappropriate or dangerous behavior with minors. One leaked Meta document revealed that its bots were capable of engaging in romantic or sensual conversations with children, prompting a Senate probe.


In one widely reported case, a 14-year-old Florida boy died by suicide after allegedly forming an emotional attachment to a “Game of Thrones”-themed AI character on Character.AI.

As concerns over the technology’s psychological and emotional impacts mount, companies like OpenAI face increasing pressure to police their platforms, much as social media apps like Facebook and Instagram have been pushed to do.

For now, OpenAI is betting that tighter restrictions, increased transparency, and improved oversight will help stem the tide of criticism. But as tragic incidents continue to make headlines, some experts — and grieving families — say it may not be enough.
