
OpenAI fires safety experts and cuts testing from months to days
Alarming changes at OpenAI: the company is laying off engineers responsible for protecting against leaks, data theft, and other critical threats.
What's notable is that OpenAI is letting experienced specialists go and hiring new employees in their place. The official explanation is vague; to quote, "the company has grown and now faces threats of a different level."
And that's only the tip of the iceberg. In parallel, the company is accelerating product releases at an unprecedented pace by disregarding its own safety testing procedures. Where model evaluation previously took months of careful analysis, the timeline has now been compressed to a matter of days.
The most alarming sign is the changed approach to final model versions. Final checkpoints may not be evaluated at all; only intermediate versions get tested. On top of that, nearly all the testing is automated, which in practice means no human oversight of the potentially dangerous aspects of the AI.
It reminds me of an old joke. An employee tells the boss, "We have a hole in our security." And the boss replies, "Thank God, at least there's something in our security."