
DeepSeek open-sources super-fast GPU kernels

DeepSeek, the Chinese company behind recent breakthroughs in artificial intelligence, has begun an unprecedented week of open-source releases by launching the first of five promised tools: FlashMLA. The project contains the optimized GPU kernels the company uses in its own production systems.

FlashMLA implements Multi-head Latent Attention (MLA), a method that significantly reduces memory consumption in transformers by compressing the key and value matrices into a compact latent representation. Although MLA has already proven effective in DeepSeek's own models, optimized open implementations of it were practically nonexistent until now.
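The memory saving behind MLA can be sketched with a toy low-rank projection: instead of caching full keys and values, only a small latent vector per token is cached, and K and V are reconstructed from it when attention runs. All dimensions and weight names below are illustrative assumptions, not DeepSeek's actual configuration.

```python
import numpy as np

# Toy sketch of MLA-style KV compression (illustrative dimensions only).
d_model  = 1024   # hidden size
d_latent = 128    # compressed KV latent dimension, much smaller than d_model
seq_len  = 512

rng = np.random.default_rng(0)
W_down = rng.standard_normal((d_model, d_latent)) * 0.02  # shared down-projection
W_up_k = rng.standard_normal((d_latent, d_model)) * 0.02  # up-projection to keys
W_up_v = rng.standard_normal((d_latent, d_model)) * 0.02  # up-projection to values

x = rng.standard_normal((seq_len, d_model))  # token representations

# Standard attention caches full K and V: 2 * seq_len * d_model values.
standard_cache = 2 * seq_len * d_model

# MLA caches only the latent (seq_len * d_latent values) and rebuilds
# K and V on the fly through the up-projections.
c_kv = x @ W_down        # (seq_len, d_latent): the only tensor kept in cache
k = c_kv @ W_up_k        # (seq_len, d_model), reconstructed when needed
v = c_kv @ W_up_v
mla_cache = seq_len * d_latent

print(f"KV-cache reduction: {standard_cache // mla_cache}x")  # 16x with these dims
```

With these toy dimensions the cache shrinks 16-fold; the real trade-off is the extra up-projection compute, which is exactly what optimized kernels like FlashMLA are designed to hide.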

The key technical characteristics of FlashMLA are impressive:
– Support for the bfloat16 format, offering a good balance between computation speed and accuracy
– A paged KV cache with a block size of 64
– Record performance: up to 3000 GB/s in memory-bound configurations
– Up to 580 TFLOPS in compute-bound configurations on an H800 SXM5 GPU with CUDA 12.6
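The paged KV cache mentioned above splits each sequence's cache into fixed-size pages, so variable-length sequences can share one memory pool without padding. A minimal sketch of the page bookkeeping, assuming FlashMLA's stated block size of 64 (the helper function is hypothetical, for illustration only):

```python
# Illustrative bookkeeping for a paged KV cache with a block size of 64.
BLOCK_SIZE = 64

def blocks_needed(seq_len: int, block_size: int = BLOCK_SIZE) -> int:
    """Number of cache pages a sequence occupies (ceiling division)."""
    return (seq_len + block_size - 1) // block_size

# Pages needed by a batch of variable-length sequences:
for n in (1, 64, 65, 1000):
    print(n, "tokens ->", blocks_needed(n), "page(s)")
```

A 65-token sequence takes two pages while a 64-token one takes a single page, so per-sequence waste is bounded by one block rather than by the longest sequence in the batch.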

The tool is compatible with the entire NVIDIA Hopper GPU line, including the H100, H800, and other models. FlashMLA is particularly effective on variable-length sequences, making it well suited to modern natural-language-processing workloads.

DeepSeek plans to continue publishing its internal developments: from February 24 to 28, the company promises to open-source four more repositories from its internal ecosystem. This move could significantly influence the entire AI industry by giving developers access to advanced optimizations previously available only inside the company.

The project code is already available on GitHub (github.com/deepseek-ai/FlashMLA), so developers around the world can begin integrating these optimizations into their own projects and potentially achieve significant performance gains in their AI systems.

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire, and interest me.