Chinese model is orders of magnitude cheaper to train than Western analogs
The Chinese model Kimi K2-Thinking, with a trillion parameters, cost $4.5 million in its final training stage. According to CNBC, that is orders of magnitude cheaper than Western analogs. The publication cites an anonymous source and acknowledges that the figure could not be independently confirmed; Moonshot AI itself has not officially disclosed the cost.
The model is built on a Mixture of Experts architecture: a trillion parameters in total, but only a small fraction is active during any given forward pass. It was designed for complex reasoning tasks and tool use. Such systems usually require astronomical compute budgets, which makes a few million dollars look almost ridiculous.
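The core idea behind the "only a small fraction is active" claim can be sketched in a few lines. This is a minimal, purely illustrative top-k routing scheme, not Kimi K2's actual implementation: the expert count, top-k value, and dimensions here are toy assumptions (real MoE models use hundreds of experts and much larger layers).

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 16   # hypothetical expert count; real models use hundreds
TOP_K = 2        # experts activated per token (assumed for illustration)
D = 8            # toy hidden dimension

# Each "expert" is a tiny feed-forward weight matrix; together they hold
# most of the layer's parameters, but only TOP_K of them run per token.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ router_w                 # one router score per expert
    top = np.argsort(logits)[-TOP_K:]     # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the chosen k only
    mixed = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return mixed, top

token = rng.standard_normal(D)
out, used = moe_forward(token)
print(f"experts used: {sorted(used.tolist())} of {N_EXPERTS}")
print(f"active fraction: {TOP_K / N_EXPERTS:.0%}")
```

In this toy setup only 2 of 16 expert matrices are multiplied per token, so compute per token scales with the active parameters, not the total parameter count, which is exactly why a trillion-parameter MoE model can be far cheaper to train and run than a dense model of the same size.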
History repeats itself. At the end of 2024, China's DeepSeek trained its base model V3 for $5.5 million, and the reasoning layer R1 for $294 thousand. Kimi K2-Thinking is likewise built on the base K2 model, so the proportions are similar. However, these estimates exclude experiments, testing, office rent, and salaries.
On most benchmarks, K2-Thinking holds its own against leading Western models such as GPT-5 Pro and Grok 4. Now, about the competitors' price tags. The final training stage of GPT-4 cost $63 million, according to a SemiAnalysis estimate; the AI Index report puts it even higher, at $78 million. And the full training of Grok 4 was estimated by Epoch AI analysts at an astronomical $490 million.
Either this is some magic of optimization, or someone is withholding something. Or someone is greatly overpaying for compute.