
California reins in AI companions: New safety law

California became the first state to officially regulate AI companion chatbots. Governor Gavin Newsom signed a landmark law that requires operators of such bots to implement safety protocols.

Now all these so-called digital friends – from giants like OpenAI to specialized startups like Character AI and Replika – will bear legal responsibility if their creations fail to meet the standards.

I’ll say right away why we should care what happens in one US state. The thing is, California is home to 35 of the 50 leading AI companies in the world. OpenAI, Anthropic, Musk’s xAI and Google are registered right there and will have to follow these laws – which will affect all of us.

And you know what pushed the authorities to this step? A series of tragedies you don’t even want to talk about. Bill SB 243 was introduced in January by senators Steve Padilla and Josh Becker, but it gained real momentum after the case of teenager Adam Raine, who died after a long series of suicidal conversations with OpenAI’s ChatGPT. Then leaked internal documents surfaced that allegedly showed other chatbots were allowed to have romantic and sensual conversations with children. And very recently a family from Colorado filed a lawsuit against the startup Character AI after what happened to their 13-year-old daughter.

California’s governor didn’t mince words in his statement: we have seen truly horrifying and tragic examples of young people harmed by unregulated technologies, and we will not stand aside while companies continue operating without necessary restrictions and accountability. “The safety of our children is not for sale” – those are his words.

The law takes effect on January 1 next year and requires companies to implement a whole set of features. Age verification and warnings about social media and companion chatbots are just the beginning. Companies must also establish crisis-response protocols, which will be submitted to the state health department along with statistics on how often the service showed users notifications about crisis prevention centers.

But that’s not all. Under the bill’s wording, platforms must clearly indicate that all interactions are artificially generated, and chatbots must not present themselves as healthcare professionals. Companies are also obligated to offer minors reminders to take breaks and to prevent them from viewing sexually explicit images generated by the bot.

And now the most interesting part. SB 243 is already the second significant piece of AI regulation from California in recent weeks. On September 29, Newsom signed SB 53, which establishes new transparency requirements for large AI companies.

That bill requires large labs like OpenAI, Anthropic and Google DeepMind to be transparent about their safety protocols, and it provides whistleblower protections for employees of these companies. An even stricter law is planned ahead – one that would prohibit the use of AI chatbots as a replacement for licensed psychological help.

Author: AIvengo
For 5 years I have been working with machine learning and artificial intelligence, and this field never ceases to amaze, inspire and interest me.

Imagine. A plane crashed, everyone died except one person. The worst aviation disaster in 10 years. And here 2 engineers from India say they figured out how to prevent this. Giant airbags controlled by artificial intelligence that will wrap a falling plane in a protective cocoon. Sounds like science fiction? And they're already nominated for the James Dyson Award.