
DeepSeek open sources super-fast GPU kernels

DeepSeek, the Chinese company whose models recently made waves in artificial intelligence, has begun an unprecedented week of open-source releases by launching the first of five promised tools: FlashMLA. The project contains the optimized GPU kernels the company uses in its production systems.

FlashMLA implements Multi-head Latent Attention (MLA), a method that significantly reduces memory consumption in transformers by compressing the key and value tensors into a compact shared latent representation. Although the method has already proven effective in DeepSeek's own models, publicly available optimized implementations were practically nonexistent until now.
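To make the memory saving concrete, here is a minimal PyTorch sketch of the core idea behind MLA: instead of caching full per-head key and value tensors, only a small per-token latent vector is cached, and keys and values are reconstructed from it by up-projection. All names and dimensions below are illustrative assumptions, not DeepSeek's actual configuration.

```python
import torch
import torch.nn as nn

class LatentKVCompression(nn.Module):
    """Illustrative sketch of the MLA idea: cache one small latent per token
    instead of full per-head K/V tensors. Dimensions are made up."""

    def __init__(self, d_model=4096, d_latent=512, n_heads=32, d_head=128):
        super().__init__()
        self.down = nn.Linear(d_model, d_latent, bias=False)           # compress
        self.up_k = nn.Linear(d_latent, n_heads * d_head, bias=False)  # reconstruct K
        self.up_v = nn.Linear(d_latent, n_heads * d_head, bias=False)  # reconstruct V
        self.n_heads, self.d_head = n_heads, d_head

    def forward(self, hidden):            # hidden: [batch, seq, d_model]
        latent = self.down(hidden)        # [batch, seq, d_latent] -- this is what gets cached
        b, s, _ = latent.shape
        k = self.up_k(latent).view(b, s, self.n_heads, self.d_head)
        v = self.up_v(latent).view(b, s, self.n_heads, self.d_head)
        return latent, k, v

# Cache footprint per token: d_latent floats instead of 2 * n_heads * d_head,
# i.e. 512 vs 8192 in this made-up configuration -- a 16x reduction.
```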

The key technical specifications of FlashMLA are impressive:
– Support for the bfloat16 format, balancing computation speed and accuracy
– Paged KV cache with a block size of 64 (see the usage sketch after this list)
– Throughput of up to 3000 GB/s in memory-bound configurations
– Up to 580 TFLOPS in compute-bound configurations on an H800 SXM5 GPU with CUDA 12.6
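For orientation, here is a hedged sketch of how the decoding kernel is invoked, following the usage pattern shown in the repository's README. The function names, tensor shapes, and dtypes below are assumptions to verify against the repo itself.

```python
import torch
from flash_mla import get_mla_metadata, flash_mla_with_kvcache  # from the FlashMLA repo

# Illustrative shapes (hypothetical serving setup): one query token per step,
# 128 query heads sharing a single latent KV head, as in MLA decoding.
batch, s_q, h_q, h_kv = 4, 1, 128, 1
d, dv = 576, 512                 # assumed key and value head dims for the kernel
block_size, num_blocks = 64, 1024

cache_seqlens = torch.full((batch,), 1000, dtype=torch.int32, device="cuda")
block_table = torch.arange(batch * 256, dtype=torch.int32, device="cuda").view(batch, 256)
q = torch.randn(batch, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
kvcache = torch.randn(num_blocks, block_size, h_kv, d, dtype=torch.bfloat16, device="cuda")

# Scheduling metadata is computed once per decode step from the cache lengths.
tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

# The attention call itself: the paged KV cache is addressed through block_table.
o, lse = flash_mla_with_kvcache(
    q, kvcache, block_table, cache_seqlens, dv,
    tile_scheduler_metadata, num_splits, causal=True,
)
```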

The tool is compatible with the entire NVIDIA Hopper line of GPUs, including the H100 and H800. FlashMLA is particularly effective on variable-length sequences, which makes it well suited to modern language-model serving, where request lengths vary widely.
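The paged cache is what makes variable-length batches cheap: each request's cache grows in 64-token blocks rather than being padded to a common maximum length. Below is a minimal sketch of that bookkeeping; the allocator logic is assumed for illustration and is not FlashMLA's actual code.

```python
# Minimal sketch of paged KV-cache bookkeeping (assumed logic, not FlashMLA's):
# each sequence claims 64-token blocks on demand, so memory tracks the real
# length of every request instead of a padded maximum.
BLOCK_SIZE = 64

class PagedKVAllocator:
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))   # pool of physical block ids
        self.tables = {}                      # seq_id -> list of block ids

    def append_token(self, seq_id, length_after_append):
        table = self.tables.setdefault(seq_id, [])
        # Grab a new physical block only when a 64-token boundary is crossed.
        if (length_after_append - 1) % BLOCK_SIZE == 0:
            table.append(self.free.pop())
        return table                          # passed to the kernel as block_table

alloc = PagedKVAllocator(num_blocks=1024)
for t in range(1, 131):                       # a 130-token sequence spans 3 blocks
    block_table = alloc.append_token(seq_id=0, length_after_append=t)
print(len(block_table))                       # -> 3
```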

DeepSeek plans to keep publishing its internal work: from February 24 to 28, the company promises to open up four more repositories from its production ecosystem. The move could have a significant impact on the AI industry, giving developers access to advanced optimizations previously available only inside the company.

The code is already available on GitHub (github.com/deepseek-ai/FlashMLA), so developers around the world can begin integrating these optimizations into their own projects and potentially improve the performance of their AI systems significantly.

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire, and interest me.