
New Alibaba Cloud model tops Chatbot Arena’s technical categories
Alibaba Cloud has scored a notable success with its new large language model Qwen2.5-Max, which ranked seventh overall on the global Chatbot Arena leaderboard. More significant still, the model took first place in the mathematics and coding categories and second place in solving complex problems.
Qwen2.5-Max is built on a Mixture-of-Experts (MoE) architecture and was pretrained on more than 20 trillion tokens. The model was then refined with Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), which drove strong results in knowledge, programming, and general capabilities.
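To illustrate the MoE idea in general terms: instead of pushing every token through one large feed-forward block, a router sends each token to a small subset of expert networks, so only a fraction of the parameters is active per token. The sketch below shows generic top-k routing in plain NumPy; the expert count, top-k value, and layer sizes are illustrative assumptions, not published details of Qwen2.5-Max.

```python
# Minimal sketch of top-k Mixture-of-Experts routing (generic design, not
# Qwen2.5-Max's actual configuration; all sizes below are illustrative).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.top_k = top_k
        # Router: one score per expert for each token.
        self.w_gate = rng.standard_normal((d_model, n_experts)) * 0.02
        # Each expert is a small two-layer feed-forward network.
        self.w1 = rng.standard_normal((n_experts, d_model, d_ff)) * 0.02
        self.w2 = rng.standard_normal((n_experts, d_ff, d_model)) * 0.02

    def __call__(self, x):                        # x: (tokens, d_model)
        gates = softmax(x @ self.w_gate)          # (tokens, n_experts)
        top = np.argsort(-gates, axis=-1)[:, :self.top_k]
        out = np.zeros_like(x)
        for t in range(x.shape[0]):               # route each token to its top-k experts
            weights = gates[t, top[t]]
            weights = weights / weights.sum()     # renormalize over the selected experts
            for w, e in zip(weights, top[t]):
                h = np.maximum(x[t] @ self.w1[e], 0.0)   # ReLU expert FFN
                out[t] += w * (h @ self.w2[e])
        return out

tokens = np.random.default_rng(1).standard_normal((4, 512))
print(MoELayer()(tokens).shape)   # (4, 512): only 2 of 8 experts run per token
```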
The model’s performance is backed by leading positions on key industry benchmarks, including MMLU-Pro, LiveCodeBench, LiveBench, and Arena-Hard, indicating that Qwen2.5-Max can handle a wide range of complex tasks on par with the strongest models available.
Alibaba Cloud provides global access to Qwen2.5-Max through Model Studio, its generative AI development platform, positioning the model as a balance of high performance and cost-effectiveness. Users can also try the model’s capabilities on the Qwen Chat platform.
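For developers, access through Model Studio typically looks like a standard chat-completions call. The sketch below assumes an OpenAI-compatible endpoint; the base URL, model name ("qwen-max-2025-01-25"), and environment variable are assumptions to verify against the current Model Studio documentation.

```python
# Minimal sketch of calling Qwen2.5-Max via an OpenAI-compatible endpoint.
# Endpoint URL, model name, and API-key variable are assumptions, not
# confirmed values from this article.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],   # assumed env var holding a Model Studio key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-max-2025-01-25",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a function that checks whether a number is prime."},
    ],
)
print(response.choices[0].message.content)
```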
Over the past year, the company has significantly expanded the Qwen model family, releasing models at a range of scales for text, audio, and visual content. This pace reflects Alibaba Cloud’s effort to meet growing demand for AI technologies from developers and customers worldwide.
The success of Qwen2.5-Max strengthens China’s position in the global artificial intelligence race, demonstrating that Chinese labs can build cutting-edge models that compete with the world’s best.