
ChatGPT parental controls: balancing safety and privacy
OpenAI is implementing an enhanced protection system for vulnerable users after a tragedy involving a teenager. ChatGPT will now automatically switch to more advanced models during conversations about depression and anxiety.
GPT-5-Thinking activates when crisis topics are detected. The model spends more time reasoning but makes fewer errors on sensitive questions. This way the technology learns to distinguish ordinary sadness from dangerous states.
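The switching behavior described above can be pictured as a simple router: a classifier flags crisis topics and escalates the conversation to a slower reasoning model. The sketch below is purely illustrative — the keyword heuristic, model names, and `route_message` function are assumptions for clarity, not OpenAI's actual implementation.

```python
# Hypothetical sketch of crisis-topic routing. The keyword heuristic,
# model names, and function names are illustrative assumptions,
# not OpenAI's real system.

CRISIS_KEYWORDS = {"hopeless", "self-harm", "suicide", "can't go on"}

DEFAULT_MODEL = "gpt-5-main"        # fast model for everyday chat
REASONING_MODEL = "gpt-5-thinking"  # slower model for sensitive topics

def looks_like_crisis(message: str) -> bool:
    """Crude keyword check standing in for a real trained classifier."""
    text = message.lower()
    return any(kw in text for kw in CRISIS_KEYWORDS)

def route_message(message: str) -> str:
    """Pick which model should answer this message."""
    return REASONING_MODEL if looks_like_crisis(message) else DEFAULT_MODEL
```

In a real deployment the keyword check would be replaced by a trained classifier, since keyword matching alone cannot distinguish ordinary sadness from dangerous states.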
OpenAI is now consulting 250 doctors. Psychiatrists and pediatricians from different countries are training the AI to respond appropriately.
The thing is, teenagers have become AI-natives. An entire generation is growing up with ChatGPT as part of everyday life. For them, AI is as natural as smartphones are for millennials.
OpenAI is also rolling out parental controls within a month. Accounts of teens aged 13 and older can be linked to a parent's account via email. Parents also get control over chat history and memory, plus notifications about crisis states that don't break the teenager's trust.
The system balances safety against privacy. Parents learn about critical situations, but ChatGPT doesn't become a spy. This preserves space for frank communication between the teenager and the bot.
Hotlines are also embedded directly into the interface. When suicidal thoughts are detected, quick access to emergency help appears.
This is a direct response to accusations from parents after the incident with a teenager. They claimed ChatGPT reinforced their son's harmful ideas. As a result, OpenAI is now fundamentally rethinking its approach to safety.