
Anthropic CEO reveals why falling AI development costs are a trap
The head of Anthropic has made an unexpected argument: the falling cost of artificial intelligence development could pose a serious threat to the technological leadership of democratic countries.
The statement was prompted by the recent success of the Chinese company DeepSeek, which created an AI model that approaches leading American systems on some benchmarks at a significantly lower cost.
“While many claims about the threat to American leadership in AI are greatly exaggerated, the current situation makes export controls on chips even more critical than they were a week ago,” emphasized Anthropic’s CEO.
According to Anthropic’s CEO, AI development follows a clear pattern: scaling up training leads to systematic improvement across a wide range of cognitive tasks. As an illustrative example, a model trained for $1 million might solve 20% of important coding tasks, a $10 million model 40%, and a $100 million model 60%.
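To make that illustration concrete, here is a minimal sketch of the relationship those numbers imply, assuming a simple log-linear curve (roughly 20 extra percentage points per tenfold increase in training cost). The function and constants are purely illustrative, derived only from the example figures above, not from any published scaling law.

```python
import math

def solve_rate(training_cost_usd: float) -> float:
    """Toy log-linear curve matching the article's example:
    ~+20 percentage points per 10x increase in training cost,
    anchored at 20% for a $1M model. Illustrative only."""
    rate = 0.20 + 0.20 * math.log10(training_cost_usd / 1_000_000)
    return min(max(rate, 0.0), 1.0)  # clamp to the 0-100% range

for cost in (1e6, 1e7, 1e8):
    print(f"${cost:>12,.0f} -> ~{solve_rate(cost):.0%} of tasks solved")
# $   1,000,000 -> ~20% of tasks solved
# $  10,000,000 -> ~40% of tasks solved
# $ 100,000,000 -> ~60% of tasks solved
```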
AI development efficiency also keeps improving thanks to innovations in model architecture and better hardware utilization; Anthropic estimates the overall efficiency gain at roughly 4x per year. Claude 3.5 Sonnet illustrates this: released about 15 months after GPT-4, it surpasses it on almost all benchmarks while its API costs roughly ten times less.
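The compounding effect of that estimate can be sketched as follows; the 4x-per-year factor is the figure quoted above, and the projection itself is only a toy calculation, not a claim about any specific model.

```python
def cost_for_fixed_capability(initial_cost_usd: float, years: int,
                              efficiency_gain_per_year: float = 4.0) -> float:
    """Toy projection of the ~4x/year efficiency estimate cited above:
    the cost of reaching a fixed capability level shrinks by that factor
    each year. The 4x figure is the article's estimate, not measured here."""
    return initial_cost_usd / (efficiency_gain_per_year ** years)

# Example: a capability level that costs $100M to reach today
for year in range(4):
    print(f"year {year}: ~${cost_for_fixed_capability(1e8, year):,.0f}")
# year 0: ~$100,000,000
# year 1: ~$25,000,000
# year 2: ~$6,250,000
# year 3: ~$1,562,500
```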
“Decreasing costs don’t mean we’ll use fewer chips for AI training. On the contrary, increased efficiency lets companies build more advanced models within their existing budgets,” explained Anthropic’s head.
He emphasized that export controls on chip supplies are not a way to avoid competition between the US and China, but a necessary measure to maintain the technological advantage of democratic countries. “Ultimately, American AI companies must create more advanced models than Chinese ones if we want to maintain leadership. But we shouldn’t hand the Chinese Communist Party technological advantages when we can avoid it,” concluded Anthropic’s leader.