
Kimi-K2, a 1-trillion-parameter model, surpasses GPT-4.1 in programming
Chinese AI company Moonshot AI has introduced a new player in the AI arena: Kimi-K2, an open-weight large language model ready to challenge recognized industry leaders such as Claude Sonnet 4 and GPT-4.1. Its loud, powerful debut is reminiscent of DeepSeek's arrival.
The technical specifications of this model are impressive: Kimi-K2 is a mixture-of-experts model with 1 trillion total parameters (roughly 32 billion active per token), encoding a colossal volume of knowledge. Its most important advantage is its openly published weights, which make the model available for research, additional fine-tuning, and adaptation to specific tasks.
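Because the weights are open and Moonshot AI advertises an OpenAI-compatible chat interface, querying the model can be sketched as below. Note that the endpoint URL, model identifier, and system prompt here are assumptions for illustration; check Moonshot AI's current documentation before use.

```python
import json

# Hypothetical values -- the real endpoint and model id may differ;
# the request shape follows the common OpenAI-style chat convention.
API_URL = "https://api.moonshot.cn/v1/chat/completions"
MODEL = "kimi-k2-instruct"

def build_chat_request(prompt: str, temperature: float = 0.6) -> str:
    """Build the JSON body for an OpenAI-style chat completion call."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are Kimi, a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }
    return json.dumps(payload)

body = build_chat_request("Write a Python function that reverses a string.")
print(body)
```

The same payload works against a locally hosted copy of the open weights served behind any OpenAI-compatible inference server; only `API_URL` changes.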
The Kimi-K2-Instruct version, optimized for real-world use, demonstrates exceptional results on standard benchmarks. On the demanding SWE-bench Verified benchmark it scores 65.8% in agentic mode, only slightly behind Claude Sonnet 4 but significantly ahead of GPT-4.1.
Particularly impressive is Kimi-K2's lead on specialized coding benchmarks: 53.7% on LiveCodeBench and 27.1% on OJBench. As an agent, the model can generate games and applications and plan trips by orchestrating dozens of tools in the browser.
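Agentic tool use of this kind typically relies on describing each tool to the model in a function-calling schema. A minimal sketch of such a description is shown below; the `get_flights` tool and its parameters are hypothetical, and the schema follows the widely used OpenAI-style "function calling" convention rather than anything specific to Kimi-K2.

```python
import json

# Hypothetical tool definition in the common function-calling format:
# the model sees this schema and can emit a structured call to it.
flight_tool = {
    "type": "function",
    "function": {
        "name": "get_flights",
        "description": "Search flights between two cities on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
                "date": {"type": "string", "format": "date"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
}

print(json.dumps(flight_tool, indent=2))
```

A trip-planning agent would pass a list of such definitions alongside the chat messages, then execute whichever tool calls the model returns.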
The model also excels at mathematics and natural-science tasks, outperforming competitors on difficult benchmarks such as AIME, GPQA-Diamond, and MATH-500, and it already ranks among the elite group of top models on multilingual tests. It may well be the new king of neural networks right now.