
CodeClash showed a huge gap between AI and human programmers

CodeClash has been introduced, a new benchmark for evaluating the programming skills of large language models. And it showed that the gap from the human level is enormous.

The authors noticed a fundamental problem with current benchmarks: they are tied to specific, clearly formulated tasks, such as fixing particular bugs or writing unit tests. Real programmers, however, don't spend their whole day solving isolated tasks.

Thus CodeClash was born: a benchmark in which large language models compete in multi-round tournaments to build the best codebase for achieving some goal. Here the testbed is 6 games, but in principle it could be anything that can be simulated and scored. In other words, it is not the model itself that plays, but the code it writes and improves.

Each round proceeds in 2 phases: the agents edit their code, and then their codebases compete against each other. Winners are determined by the criteria of the specific game. Each round comprises 1,000 games.
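To make the two-phase round structure concrete, here is a minimal Python sketch of such a tournament loop. Everything in it (the Agent class, edit_code, run_game, the coin-flip scoring) is a hypothetical illustration of the design described above, not actual CodeClash code.

```python
# Minimal sketch of a CodeClash-style round, assuming a simple
# two-agent matchup. Agent, edit_code and run_game are hypothetical
# names used for illustration, not the actual CodeClash API.
import random
from dataclasses import dataclass

GAMES_PER_ROUND = 1000  # per the article: 1,000 games each round


@dataclass
class Agent:
    name: str
    codebase: str = ""
    frozen: bool = False  # a fixed human-written bot never edits itself

    def edit_code(self, history: list) -> None:
        # Phase 1: the agent revises its codebase. Stubbed here as an
        # appended comment; a real agent would rewrite files using
        # logs from earlier rounds.
        if self.frozen:
            return
        self.codebase += f"\n# revision after {len(history)} rounds"


def run_game(a: Agent, b: Agent) -> str:
    # Phase 2 (one game): execute both codebases against each other
    # and score them by the game's own criteria; stubbed as a coin flip.
    return random.choice([a.name, b.name])


def play_round(a: Agent, b: Agent, history: list) -> str:
    a.edit_code(history)
    b.edit_code(history)
    tally = {a.name: 0, b.name: 0}
    for _ in range(GAMES_PER_ROUND):
        tally[run_game(a, b)] += 1
    history.append(tally)
    return max(tally, key=tally.get)  # round winner by games won


llm_bot = Agent("llm_agent")
human_bot = Agent("gigachad", frozen=True)  # stays unchanged, as below
history: list = []
for _ in range(5):  # a short demo tournament
    print("round winner:", play_round(llm_bot, human_bot, history))
```

With a real game engine in place of the coin flip, the same loop captures the benchmark's core idea: the code competes, and only the editing phase lets an agent climb the standings.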

And then the sad results begin. The gap from the human level turned out to be significant. The authors took the top human solution for one of the games, a bot called gigachad. The Claude Sonnet 4.5 model did not win a single one of 150 rounds against it. That is 0 wins out of 37.5 thousand simulations. Meanwhile, the human bot remained unchanged across all rounds; it was never adapted.

It turns out that language models are good at solving isolated tasks. But when it comes to real code that must compete and keep improving, they lose to humans completely.

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire and interest me.