
Hugging Face speeds up Hub uploads and downloads by up to 3x
The Xet team at Hugging Face introduced a new approach to optimizing data upload and download on the Hub platform, making file transfers 2-3 times faster. The technology is based on an improved Content-Defined Chunking (CDC) method, which changes how information is stored and transmitted.
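To illustrate the basic idea of CDC (this is a toy sketch, not the xet-core implementation, and the hash function, mask, and size limits below are illustrative assumptions): a chunker slides a hash over the byte stream and cuts a boundary whenever the hash satisfies a condition, so chunk boundaries follow the content itself rather than fixed offsets, and identical content deduplicates even if it shifts within a file.

```python
# Toy sketch of content-defined chunking (CDC); not the actual xet-core algorithm.
# A hash is updated byte by byte, and a chunk boundary is declared whenever the
# low bits of the hash are zero, so boundaries depend on content, not position.

def cdc_chunks(data: bytes, mask: int = 0xFFFF,
               min_size: int = 16 * 1024, max_size: int = 128 * 1024):
    """Yield (offset, length) chunk spans; average chunk size is roughly mask + 1 bytes."""
    chunks = []
    start = 0
    h = 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF  # toy hash; real CDC uses Gear-style rolling hashes
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append((start, size))
            start = i + 1
            h = 0
    if start < len(data):
        chunks.append((start, len(data) - start))
    return chunks


if __name__ == "__main__":
    import os
    blob = os.urandom(1 << 20)  # 1 MiB of random bytes
    spans = cdc_chunks(blob)
    print(f"{len(spans)} chunks, average size {len(blob) // len(spans)} bytes")
```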
The scale of the problem is impressive: the Hub platform stores nearly 45 petabytes of data distributed across 2 million repositories of models, datasets, and spaces. With a standard approach of splitting files into 64 KB chunks, uploading a 200 GB repository would require creating 3 million storage records; at platform scale, that adds up to roughly 690 billion chunks.
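For a sense of where those figures come from, here is a quick back-of-the-envelope check using the 64 KB chunk size (decimal GB/PB assumed):

```python
# Back-of-the-envelope check of the chunk counts quoted above.
CHUNK = 64 * 1024                 # 64 KB chunks

repo_bytes = 200 * 10**9          # a 200 GB repository
hub_bytes = 45 * 10**15           # ~45 PB stored on the Hub

print(f"chunks per 200 GB repo: {repo_bytes // CHUNK:,}")  # ~3 million
print(f"chunks across the Hub:  {hub_bytes // CHUNK:,}")   # ~690 billion
```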
The Hugging Face team identified serious problems that arise when chasing maximum deduplication simply by shrinking chunk sizes. Each upload or download turns into millions of separate requests, placing critical load on the network infrastructure. Databases and storage systems suffer as well: the flood of metadata significantly increases management costs in services like DynamoDB and S3.
To solve these problems, the company developed and open-sourced the xet-core and hf_xet tools, written in Rust and integrated with huggingface_hub. The new approach focuses not only on data deduplication but also on optimizing network transfer, storage, and the overall development experience.
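In practice, the integration is transparent: when the hf_xet package is installed alongside huggingface_hub, the usual upload and download calls can route transfers through the Xet backend without code changes. A minimal sketch (repository names are placeholders):

```python
# Sketch: standard huggingface_hub calls; with the hf_xet package installed,
# transfers can use the Xet backend transparently. Repo names are placeholders.
# pip install huggingface_hub hf_xet
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()

# Upload a local file to a (hypothetical) model repository.
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="your-username/your-model",
    repo_type="model",
)

# Download it back; chunk-level deduplication happens behind the scenes.
local_path = hf_hub_download(repo_id="your-username/your-model",
                             filename="model.safetensors")
print(local_path)
```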
The team’s main goal is to ensure fast experimentation and effective collaboration for teams working on models and datasets.