
New model from DeepSeek recognizes documents cheaply and efficiently
DeepSeek has rolled out a new model for document recognition. And you know what? It doesn't just read text off the page; it understands structure. And it does this cheaply and efficiently, which is rare in the AI world.
This wonder is called DeepSeek-OCR, and it differs fundamentally from classic optical character recognition systems. Regular OCR simply extracts text. This model restores the document's structure as it reads: headings, lists, tables, figure captions. The result comes out in Markdown, which is convenient for indexing and for downstream processing by language models.
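Because the output is plain Markdown, the downstream tooling can stay trivially simple. Here is a minimal sketch in Python; the Markdown content is invented purely for illustration, but it shows the point: building a heading index from a recognized page takes one regular expression instead of a layout-analysis pipeline.

```python
import re

# Hypothetical snippet of the Markdown an OCR pass over a report page
# might produce; the content here is invented for illustration only.
ocr_markdown = """# Quarterly Report

## Revenue

| Region | Q1 | Q2 |
|--------|----|----|
| EMEA   | 10 | 12 |

Figure 1: revenue by region.
"""

# Structure survives as Markdown, so indexing headings is a regex job.
headings = re.findall(r"^(#{1,6})\s+(.*)$", ocr_markdown, flags=re.MULTILINE)
for hashes, title in headings:
    print(f"level {len(hashes)}: {title}")
```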
The main feature is so-called optical context compression. The model doesn't retell every detail of the page; it squeezes out only what's needed: the text and the semantic structure. This cuts the data volume roughly 20-fold, and fewer tokens mean cheaper and faster processing by any subsequent language model.
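To make the economics concrete, here is a back-of-the-envelope calculation in Python. The per-page token count and the price are assumptions chosen for illustration, not published figures; only the roughly 20x ratio comes from the claims above.

```python
# Assumed sizes, purely illustrative: a dense page as plain text tokens
# versus the compressed optical representation mentioned in the article.
text_tokens_per_page = 2000
visual_tokens_per_page = 100

compression = text_tokens_per_page / visual_tokens_per_page
print(f"compression ratio: {compression:.0f}x")  # -> 20x

# Downstream cost scales with token count, so at a hypothetical price
# per 1M input tokens the saving is proportional to the same ratio.
price_per_million = 1.0  # assumed price in dollars, illustrative only
pages = 10_000
plain_cost = pages * text_tokens_per_page / 1e6 * price_per_million
compressed_cost = pages * visual_tokens_per_page / 1e6 * price_per_million
print(f"plain: ${plain_cost:.2f}, compressed: ${compressed_cost:.2f}")
```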
DeepSeek-OCR works with visual tokens, which you can think of as discrete glances at regions of the image. Even with a small budget of 100 tokens, recognition accuracy holds at 97%. If a page is too complex, Gundam mode kicks in: the document is automatically divided into fragments, and the difficult areas are analyzed separately without losing speed.
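The article doesn't say how exactly Gundam mode slices a page, so the following is only a hypothetical sketch of the general tiling idea, written with Pillow: cut the page into fixed-size fragments that can be recognized independently and merged afterwards. The 640 px tile size is an assumption, not DeepSeek's actual parameter.

```python
from PIL import Image

def split_into_tiles(page: Image.Image, tile: int = 640):
    """Naive illustration of splitting a page into fixed-size fragments.

    NOT DeepSeek's actual Gundam-mode code, just the general idea:
    complex pages get cut into tiles that are recognized independently.
    """
    w, h = page.size
    tiles = []
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            box = (left, top, min(left + tile, w), min(top + tile, h))
            tiles.append((box, page.crop(box)))
    return tiles

# A blank A4-ish page at ~200 dpi stands in for a scanned document.
page = Image.new("RGB", (1654, 2339), "white")
for box, fragment in split_into_tiles(page):
    print(box, fragment.size)
```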
In benchmarks the system posted impressive results: accuracy barely drops even at a minimal visual-token budget, and the compression ratio reaches 20-fold. Efficiency in its purest form.