
California reins in AI companions: New safety law
California has become the first state to officially regulate AI companion chatbots. Governor Gavin Newsom signed a historic law that requires operators of such bots to implement safety protocols.
Now all these so-called digital friends – from giants like OpenAI to specialized startups like Character AI and Replika – will be legally liable if their creations fail to meet the standards.
I'll say right away why we should care what happens in one US state. The thing is, California is home to 35 of the world's 50 leading AI companies. OpenAI, Anthropic, Musk's xAI, and Google are all headquartered there and will have to follow these laws – which will affect all of us.
And you know what pushed the authorities to this step? A series of tragedies you don't even want to talk about. Bill SB 243 was introduced in January by senators Steve Padilla and Josh Becker, but it gained real momentum after the case of teenager Adam Raine, who died following a long series of suicidal conversations with OpenAI's ChatGPT. Then leaked internal documents surfaced that allegedly showed other chatbots were allowed to have romantic and sensual conversations with children. And very recently a family from Colorado filed a lawsuit against the startup Character AI after what happened to their 13-year-old daughter.
California's governor didn't mince words in his statement. As in: we've seen truly horrifying and tragic examples of young people harmed by unregulated technologies, and we won't stand aside while companies continue operating without necessary restrictions and accountability. The safety of our children is not for sale – those are his words.
The law takes effect on January 1 next year and requires companies to implement a whole set of features. Age verification and warnings about social media and companion chatbots are just the beginning. Companies must also establish certain protocols, which will be submitted to the state health department along with statistics on how often the service directed users to crisis prevention centers.
But that's not all. Under the bill's wording, platforms must clearly disclose that all interactions are artificially generated, and chatbots must not present themselves as healthcare professionals. Companies are also obligated to offer minors reminders to take breaks and to prevent them from viewing sexually explicit images generated by the bot.
And now the most interesting part. SB 243 is already the second significant piece of AI regulation out of California in recent weeks. On September 29, Newsom signed SB 53, which establishes new transparency requirements for large AI companies.
That bill requires major labs like OpenAI, Anthropic, and Google DeepMind to be transparent about their safety protocols, and it provides whistleblower protections for employees of these companies. And an even stricter law is reportedly in the works: one that would prohibit the use of AI chatbots as a replacement for licensed psychological care.