
DeepMind built an AI that helps save endangered species by listening to them
DeepMind has introduced Perch 2.0, an AI model that listens to nature to help save endangered species. This compact model has been downloaded over 250,000 times and is changing how bioacoustic monitoring is done. Link in description.
Consider the scale of the problem. A single hour of tropical rainforest recording can contain dozens of overlapping animal calls, and transcribing it by hand takes weeks of grueling work. Perch 2.0 analyzes the same data almost instantly, picking out the signals that matter most for ecosystem health.
No billions of parameters, no complex self-supervised training. Just a well-optimized architecture with three specialized heads: species classification across roughly 15,000 species, a prototype head that produces semantic logits, and recording-source prediction.
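The "shared embedding feeding three heads" layout can be sketched in a few lines of numpy. Everything here is illustrative: the dimensions, weight matrices, and head names are assumptions for the sketch, not Perch 2.0's real internals.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128       # toy embedding size, not the model's real dimensionality
N_SPECIES = 15000     # species covered by the classification head
N_SOURCES = 100       # placeholder number of recording sources

# Head parameters, randomly initialized for the sketch.
W_species = rng.normal(size=(EMBED_DIM, N_SPECIES))
prototypes = rng.normal(size=(N_SPECIES, EMBED_DIM))  # one prototype per class
W_source = rng.normal(size=(EMBED_DIM, N_SOURCES))

def three_heads(embedding: np.ndarray):
    """Map one shared audio embedding to the three head outputs."""
    species_logits = embedding @ W_species    # linear species classifier
    semantic_logits = prototypes @ embedding  # similarity to class prototypes
    source_logits = embedding @ W_source      # which recording/source it came from
    return species_logits, semantic_logits, source_logits

emb = rng.normal(size=EMBED_DIM)  # stand-in for a clip's embedding
sp, sem, src = three_heads(emb)
print(sp.shape, sem.shape, src.shape)  # (15000,) (15000,) (100,)
```

The point of the design is that all three heads share one embedding, so the embedding itself ends up useful on its own.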
Feed it five seconds of audio and get back an embedding vector you can use to search for similar recordings, cluster sounds, or train a classifier for a new species. It runs without a GPU and without fine-tuning: fixed, high-quality embeddings out of the box.
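That fixed-embedding workflow looks roughly like this. Assumption: the clips have already been run through the model to produce embedding vectors; here they are faked with random data so the example is self-contained, and the embedding size is a placeholder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
EMBED_DIM = 128   # placeholder embedding size
N_LABELED = 40    # a handful of labeled 5-second clips

# Pretend-embeddings for two species of interest.
X = rng.normal(size=(N_LABELED, EMBED_DIM))
y = np.array([0, 1] * (N_LABELED // 2))

# 1) Train a small classifier for a new species on the fixed embeddings.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 2) Nearest-neighbor search: find recordings similar to a query clip.
index = NearestNeighbors(n_neighbors=3, metric="cosine").fit(X)
query = rng.normal(size=(1, EMBED_DIM))
distances, indices = index.kneighbors(query)

print(clf.predict(query))  # predicted class for the query clip
print(indices[0])          # indices of the 3 most similar clips
```

Because the embeddings are fixed, only the tiny classifier on top needs training, which is why no GPU or fine-tuning is required.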
The real-world results are impressive. In Australia, researchers used Perch to discover a new population of the endangered Plains-wanderer. In Hawaii, the model picked out the calls of some of the rarest honeycreepers, speeding up audio processing roughly 50-fold. For species on the brink of extinction, that speed matters.
Birds, mammals, amphibians, coral-reef soundscapes, anthropogenic noise: the model handles all of them. It even transfers well to marine recordings of whales and dolphins, which were barely represented in its training data.
DeepMind has shown that high-quality labeled data, a simple architecture, and a clearly formulated problem matter more than an endless parameter count.