
Salesforce test: Gemini 2.5 Pro solves only 58% of business tasks
The Salesforce CRMArena-Pro benchmark shows that even leading AI models face serious limitations on everyday business tasks. The flagship Gemini 2.5 Pro successfully completes only 58% of requests in a single turn, and in multi-step dialogue its effectiveness drops to just 35%.
CRMArena-Pro evaluates large language models under realistic conditions in sales, customer service, and pricing. The researchers built 4,280 unique tasks spanning 19 types of business operations, using synthetic Salesforce data.
The results in multi-step dialogues, a key element of any business interaction, are particularly revealing. Nearly half of Gemini 2.5 Pro's failed attempts stem from its inability to request critical missing information. Models that ask more clarifying questions perform significantly better.
The highest performance came in automating simple workflows, with an 83% success rate in routing customer-support requests. However, tasks that require deep text comprehension or adherence to complex rules remain a serious challenge for today's AI systems.