CodeClash shows a huge gap between AI and human programmers
CodeClash, a new benchmark for evaluating the programming skills of large language models, has been introduced, and it shows that the gap to human-level performance is enormous.
The authors point to a fundamental problem with current benchmarks: they are tied to specific, clearly formulated tasks, such as fixing a particular bug or writing an individual test. Real programmers, however, don't spend their days solving isolated tasks.
Hence CodeClash: a benchmark in which large language models compete in multi-round tournaments to build the best codebase for achieving a given goal. Here the goal is illustrated with 6 games, but in principle it can be anything that can be simulated and scored. In other words, it is not the model itself that plays, but the code it writes and iteratively improves.
Each round has two phases: first the agents edit their code, then their codebases compete against each other. Winners are determined by the criteria of the specific game, and each round comprises 1,000 matches.
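To make the round structure concrete, here is a minimal Python sketch of such a tournament loop. It is not the actual CodeClash harness: the function names (edit_codebase, play_game), the toy "strength" parameter, and the round count are illustrative assumptions; only the two-phase structure and the 1,000-games-per-round figure come from the description above.

```python
import random

# Minimal sketch of a CodeClash-style tournament loop (illustrative only).
ROUNDS = 5              # assumption: small number of rounds for the demo
GAMES_PER_ROUND = 1000  # CodeClash plays 1000 games per round

def edit_codebase(codebase: dict, feedback: dict) -> dict:
    """Phase 1 (hypothetical): the agent revises its code using last round's results."""
    # A real agent would inspect logs and rewrite source files; here we just nudge a parameter.
    losing = feedback.get("losses", 0) > feedback.get("wins", 0)
    return {"strength": codebase["strength"] + (0.01 if losing else 0.0)}

def play_game(a: dict, b: dict) -> str:
    """Phase 2 (hypothetical): one simulated match; the stronger codebase wins more often."""
    p_a = a["strength"] / (a["strength"] + b["strength"])
    return "a" if random.random() < p_a else "b"

def run_tournament() -> None:
    agent_a = {"strength": 1.0}   # e.g. an LLM-written bot that gets edited each round
    agent_b = {"strength": 1.5}   # e.g. a fixed human-written bot that never changes
    feedback_a: dict = {}
    for rnd in range(1, ROUNDS + 1):
        # Phase 1: only the LLM agent edits its code; the human bot stays frozen.
        agent_a = edit_codebase(agent_a, feedback_a)
        # Phase 2: the two codebases play many games; each game produces a winner.
        wins_a = sum(play_game(agent_a, agent_b) == "a" for _ in range(GAMES_PER_ROUND))
        feedback_a = {"wins": wins_a, "losses": GAMES_PER_ROUND - wins_a}
        print(f"round {rnd}: agent A won {wins_a}/{GAMES_PER_ROUND} games")

if __name__ == "__main__":
    run_tournament()
```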
Then come the sobering results: the gap to human level turned out to be large. The authors took the top human solution for one of the games, a bot called gigachad, and Claude Sonnet 4.5 did not win a single one of 150 rounds against it: 0 out of 37.5 thousand simulations. Meanwhile the human bot stayed unchanged throughout all rounds; it was never adapted.
The takeaway: language models handle isolated tasks well, but when it comes to writing real code that has to compete and keep improving, they lose to humans outright.