Anthropic CEO: Chinese AI failed safety test
Anthropic CEO Dario Amodei expressed serious concerns about DeepSeek, the Chinese company that recently surprised Silicon Valley with its R1 model. His concerns go beyond the usual claims about user data being transferred to China.
In an interview on Jordan Schneider’s ChinaTalk podcast, Amodei said that the DeepSeek model generated sensitive information about biological weapons during safety testing conducted by Anthropic. “These were the worst results among all models we’ve ever tested,” Amodei said. “It completely lacked any blocks against generating such information.”
According to Anthropic’s CEO, the company regularly runs such evaluations on various AI models to identify potential national security risks. The team checks whether models can generate information about biological weapons that is difficult to find on Google or in textbooks. Anthropic positions itself as a developer of foundation AI models with a special focus on safety.
Amodei noted that current DeepSeek models don’t pose a “literal danger” in terms of providing rare and dangerous information, but the situation might change in the near future. Although he praised the DeepSeek team as “talented engineers,” Amodei urged the company to “take AI safety seriously.”
In the ChinaTalk interview, Amodei didn’t specify which DeepSeek model Anthropic tested and didn’t provide further technical details about the tests. Neither Anthropic nor DeepSeek responded to TechCrunch’s request for comment.
Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.