
OpenAI fires safety experts and reduces tests to days

Alarming changes are underway at OpenAI. The company is laying off engineers responsible for protecting against leaks, data theft, and other critical security threats.

What's notable is that OpenAI is letting experienced specialists go and hiring new employees in their place. The official explanation sounds vague, and I quote: "the company has grown and now faces threats of a different level."

And that is only the tip of the iceberg. In parallel, the company is accelerating product releases at an unprecedented pace, at the expense of its own safety testing procedures. Where model verification previously took months of careful analysis, the timeframe has now been compressed to a few days.

The most alarming signal is the change in how final model versions are handled. Final checkpoints may not be verified at all, with only intermediate versions being tested. On top of that, almost all tests are automated, which in practice means there is no human oversight of the potentially dangerous aspects of artificial intelligence.

It reminds me of an old joke. An employee tells the boss: "We have a hole in our security." And the boss replies: "Thank God, at least there's something in our security."

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.
