
ChatGPT Agent received high risk status for biological weapons

If you look at OpenAI’s ChatGPT Agent system card, you’ll find something surprising. This AI system became the first in the world to officially receive “high risk” status for biological weapons development. Just think: technology that millions of people use daily has been recognized as potentially dangerous.

What does this status mean? According to OpenAI’s official documentation, Agent can meaningfully help even a non-specialist work through all the steps needed to create known biological or chemical threats. Until now, no other company has made such a statement about its models.

This recognition led to unprecedented safety measures. Tools for detecting potentially dangerous requests now operate not only at the response-generation stage, but also before a request is even passed to the model.

Particularly interesting is that users are now required to report any unusual system behavior they discover. If you encounter a strange model reaction, even accidentally, you must report it; otherwise your account may be blocked, and you yourself may come under investigation.

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.
Latest News
$200 USB cable transforms into autonomous AI hacker

Researchers from Palisade Research have created a new cybersecurity threat: a modified USB cable that acts as a conduit for an autonomous AI agent into computer systems. The $200 device contains a programmable microchip that loads a digital agent directly onto the target machine.

xAI lays off 500 annotators for Grok's expert specialization

A strategic pivot is emerging at xAI: the company is radically changing how it trains its Grok language model. Elon Musk’s team fired 500 generalist annotators in one day and is instead increasing the number of specialized AI tutors tenfold.

Gemini content review time reduced from 30 to 15 minutes

The Guardian has published alarming signals from inside Google. Content evaluators for the Gemini model shared details about declining review standards. Employees of the contractor GlobalLogic, responsible for assessing the quality and safety of AI responses before release, are sounding the alarm.

Golden chassis and contextual understanding in Tesla's new generation

Tesla introduced a new Optimus humanoid robot with integrated Grok from xAI. Salesforce CEO Marc Benioff personally tested the prototype, asking it to bring him a soda. The robot demonstrated meaningful contextual understanding and dialogue capability, although several clarifying commands were needed.

Microsoft diversifies partnerships: Claude Sonnet 4 in Office

Microsoft made a strategic decision to diversify its AI partnerships: the company signed an agreement with Anthropic, creator of the Claude models, to bring their technology into Office applications.