
Hugging Face challenges DeepSeek: Project Open-R1 reveals secrets of Chinese AI
The Hugging Face team has presented the first results of the Open-R1 project, which aims to reproduce the technology behind the Chinese AI model DeepSeek-R1. Within a week, the researchers made significant progress in understanding and replicating this advanced system.
A key achievement was the successful reproduction of the reported results on the MATH-500 benchmark. The researchers confirmed the strong performance of several distilled model versions: DeepSeek-R1-Distill-Qwen-32B reached 95.0% accuracy against the claimed 94.3%, while the Llama-70B-based version scored 93.4% versus the official 94.5%.
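To make the kind of check described above concrete, here is a minimal sketch of a MATH-500-style accuracy measurement. It assumes the public dataset id "HuggingFaceH4/MATH-500" with "problem"/"answer" fields, uses the smaller "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B" distill to keep the example lightweight, and relies on naive boxed-answer string matching; the exact prompts, model sizes, and answer-grading rules used by Open-R1 may differ.

```python
# Hedged sketch: evaluate a distilled model on a slice of MATH-500.
# Dataset id, field names, and answer matching are assumptions for illustration.
import re

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumption: smaller distill for a quick run
DATASET_ID = "HuggingFaceH4/MATH-500"                 # assumption: public MATH-500 split on the Hub


def extract_boxed(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} expression in a response, if any."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1].strip() if matches else None


def main() -> None:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")
    dataset = load_dataset(DATASET_ID, split="test").select(range(20))  # small slice for a sanity check

    correct = 0
    for example in dataset:
        messages = [{"role": "user", "content": example["problem"]}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(inputs, max_new_tokens=4096, do_sample=False)
        response = tokenizer.decode(outputs[0, inputs.shape[1]:], skip_special_tokens=True)
        # Crude exact-match grading on the boxed answer; real evaluations normalise expressions.
        if extract_boxed(response) == example["answer"].strip():
            correct += 1

    print(f"accuracy on slice: {correct / len(dataset):.1%}")


if __name__ == "__main__":
    main()
```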
During the study, a distinctive feature of DeepSeek-R1 emerged: the unprecedented length of its generated responses. Analysis of the length distribution in the OpenThoughts dataset showed that the average response is about 6,000 tokens long and in some cases exceeds 20,000 tokens. "Considering that an average page contains approximately 500 words, and one token is slightly shorter than a word, many responses run to more than 10 pages," the researchers note.
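The length analysis itself is straightforward to reproduce. The sketch below assumes the public "open-thoughts/OpenThoughts-114k" dataset with a ShareGPT-style "conversations" field whose last turn holds the reasoning trace, and a Qwen tokenizer; the exact schema and tokenizer Open-R1 analysed may differ.

```python
# Hedged sketch: response-length statistics over a sample of OpenThoughts.
# Dataset id, field layout, and tokenizer choice are assumptions for illustration.
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # assumption: any compatible tokenizer
dataset = load_dataset("open-thoughts/OpenThoughts-114k", split="train").select(range(1000))  # sample for speed

lengths = []
for example in dataset:
    # Assumption: the generated reasoning trace is the last turn of "conversations".
    response = example["conversations"][-1]["value"]
    lengths.append(len(tokenizer.encode(response)))

lengths = np.array(lengths)
print(f"mean response length: {lengths.mean():.0f} tokens")
print(f"share above 20,000 tokens: {(lengths > 20_000).mean():.1%}")
```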
To keep the research transparent, the Hugging Face team created an open Open-R1 leaderboard where the community can track progress in reproducing the results. Particular attention is being paid to the significant GPU memory required during training, driven by the need to generate very long sequences.
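A rough back-of-the-envelope calculation shows why long generations are so memory-hungry: the KV cache alone grows linearly with sequence length and with the number of sequences generated in parallel. The configuration below (64 layers, 8 grouped KV heads of dimension 128, fp16, 16 parallel generations) is an illustrative assumption loosely modelled on a 32B-class model, not the exact DeepSeek-R1 distill or Open-R1 training setup.

```python
# Hedged sketch: KV-cache memory as a function of generated sequence length.
# All model dimensions and the batch of 16 generations are illustrative assumptions.
def kv_cache_gib(seq_len: int, num_seqs: int, layers: int = 64, kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_value: int = 2) -> float:
    """Keys + values: 2 tensors per layer, per token, per sequence, in fp16."""
    return 2 * layers * kv_heads * head_dim * bytes_per_value * seq_len * num_seqs / 2**30


for seq_len in (2_000, 6_000, 20_000):
    print(f"{seq_len:>6}-token generations x 16 -> ~{kv_cache_gib(seq_len, 16):.1f} GiB of KV cache")
```

Under these assumptions, 16 parallel 20,000-token generations already consume tens of GiB for the cache alone, before counting model weights, activations, and optimizer state.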
The Open-R1 project, launched just a week ago, has brought together multiple teams and the wider developer community. Its main goal remains to reproduce DeepSeek-R1's training pipeline and synthetic data, which should help clarify how this advanced artificial intelligence system works.
This initiative demonstrates a growing trend towards openness and collaboration in AI, where even the most complex technological achievements become the subject of collective study and reproduction by the global developer community.