OpenAI fires safety experts and cuts safety testing to days


Alarming changes are underway at OpenAI. The company is laying off engineers responsible for protecting against leaks, data theft, and other critical threats.

Notably, OpenAI is firing experienced specialists and hiring new employees in their place. The official explanation is vague: "the company has grown and now faces threats of a different level."

And that is just the tip of the iceberg. In parallel, the company is accelerating product releases at an unprecedented pace, at the expense of its own safety testing procedures. Where model evaluation once took months of careful analysis, the timeline has now been compressed to a few days.

The most alarming sign is the changed approach to final model versions. Final checkpoints may skip verification entirely, with only intermediate versions being tested. On top of that, almost all tests are automated, which in practice means no human oversight of the potentially dangerous aspects of artificial intelligence.

It reminds me of an old joke. An employee tells the boss: "We have a hole in our security." The boss replies: "Thank God, at least there's something in our security."
