
Safety sacrificed for profit: Silicon Valley cuts AI research

Silicon Valley companies are increasingly sacrificing artificial intelligence research for profit, with product development displacing work on system safety. According to White, whose company CalypsoAI audits popular models from Meta, Google, and OpenAI, modern systems are less likely to reject malicious requests and can reveal to bad actors how to build explosive devices or obtain confidential information.

“The models are getting better, but they are also more likely to be good at bad things,” says White. “They’re becoming easier to trick into doing something harmful.”

The changes are particularly noticeable at companies like Meta and Alphabet, which have lowered the priority of their research laboratories. At Facebook’s parent company, the fundamental artificial intelligence research division has taken a back seat, giving way to Meta GenAI.

At Alphabet, the Google Brain research group is now part of DeepMind, the division responsible for developing artificial intelligence products. This reorganization reflects a shift in focus from fundamental research to applied development.

CNBC journalists spoke with more than a dozen professionals in the field of artificial intelligence, who unanimously describe a shift from research to revenue-generating products. Employees face increasingly tight development deadlines: they cannot afford to fall behind in the race to bring new models to market, and thorough testing becomes a luxury that companies feel they can cut back on.

I think that in the next couple of years we will see an increase in security incidents, which will either lead to regulatory intervention by governments or end in outright censorship.

Author: AIvengo
For 5 years I have been working with machine learning and artificial intelligence, and this field never ceases to amaze, inspire, and interest me.

Latest News

The first LAARMA system protects animals on Australian roads

In Australia, collisions between animals and vehicles are a serious problem for the continent's ecosystem. Now scientists have found a technological solution: LAARMA, the world's first roadside artificial intelligence system that protects wild animals from dangerous encounters with traffic.

Nvidia introduced Cosmos model family for robotics

Nvidia has introduced the Cosmos family of AI models, which could fundamentally change the approach to creating robots and physical AI agents.

ChatGPT calls users "star seeds" from planet Lyra

It turns out ChatGPT can draw users into the world of scientifically unfounded and mystical theories.

AI music triggers stronger emotions than human music

Have you ever wondered why one melody gives you goosebumps while another leaves you indifferent? Scientists have discovered something interesting: music created by artificial intelligence triggers more intense emotional reactions in people than compositions written by humans.

GPT-5 was hacked in 24 hours

Two independent research companies, NeuralTrust and SPLX, discovered critical vulnerabilities in the security system of the new model just 24 hours after GPT-5's release. For comparison, Grok-4 was hacked in two days, making the GPT-5 case even more alarming.