Safety sacrificed for profit: Silicon Valley cuts AI research


Silicon Valley companies are increasingly sacrificing artificial intelligence research for profit, with product development displacing work on system safety. According to White, whose company CalypsoAI audits popular models from Meta, Google, and OpenAI, modern systems are less likely to reject malicious requests and can reveal how to build explosive devices or expose confidential information to malicious actors.

“The models are getting better, but they are also more likely to be good at bad things,” says White. “They’re becoming easier to deceive into doing something harmful.”

The changes are particularly noticeable at companies like Meta and Alphabet, which have lowered the priority of their research laboratories. At Facebook’s parent company, the fundamental artificial intelligence research division has taken a back seat to Meta GenAI.

At Alphabet, the Google Brain research group is now part of DeepMind, the division responsible for artificial intelligence product development. The reorganization reflects a shift in focus from fundamental research to applied development.

CNBC journalists spoke with more than a dozen professionals in the field of artificial intelligence, who unanimously describe a shift from research toward revenue-generating products. Employees face increasingly tight development deadlines: they cannot afford to fall behind in the race to bring new models to market, and thorough testing has become a luxury that companies feel they can cut back on.

I think that in the next couple of years we will see an increase in security incidents, which will lead to regulatory intervention by governments. Or it will all end in outright censorship.

More from the latest
UBTech will send Walker S2 robots to serve on China's border for $37 million
Chinese company UBTech won a $37 million contract and will send Walker S2 humanoid robots to serve on China's border with Vietnam. The South China Morning Post reports that the robots will interact with tourists and staff, perform logistics operations, inspect cargo, and patrol the area. Characteristically, they can swap their own batteries.
Anthropic accidentally revealed an internal document about Claude's "soul"
Anthropic accidentally revealed the "soul" of its artificial intelligence to a user. This is not a metaphor but a quite specific internal document.
Jensen Huang ordered Nvidia employees to use AI everywhere
Jensen Huang has announced a total mobilization under the banner of artificial intelligence inside Nvidia. This is no longer a recommendation; it is a requirement.
AI chatbots generate content that exacerbates eating disorders
A joint study by Stanford University and the Center for Democracy and Technology paints a disturbing picture: AI chatbots pose a serious risk to people with eating disorders. The researchers warn that the models hand out harmful diet advice, suggest ways to hide the disorder, and generate "inspiring" weight-loss content that makes the problem worse.
OpenAGI released the Lux model that overtakes Google and OpenAI
Startup OpenAGI has released Lux, a model for computer control, and claims it is a breakthrough. According to benchmarks, the model is a full generation ahead of its analogues from Google, OpenAI, and Anthropic. It is also faster, at about 1 second per step versus 3 seconds for competitors, and roughly 10 times cheaper in cost per token processed.