
Technology giant Google continues to restrict its AI assistant Gemini from engaging in political discourse, even as its main competitors, including OpenAI, Anthropic, and Meta, have in recent months adapted their chatbots to discuss politically sensitive topics.
According to testing conducted by TechCrunch, when asked certain political questions, Google's Gemini often responds that it "cannot help with answers about elections and political figures at this time." Other chatbots, including Anthropic's Claude, Meta AI, and OpenAI's ChatGPT, consistently answered the same questions, reflecting a fundamentally different approach to political information.
In March 2024, Google announced that Gemini would not respond to election-related queries ahead of electoral campaigns in the US, India, and other countries. Many AI companies adopted similar temporary restrictions, fearing negative reactions if their chatbots made mistakes in a political context.
However, Google now stands out among its competitors for this conservative stance. Last year's major elections have come and gone, yet the company has not publicly announced any plans to change how Gemini handles political topics. A Google representative declined to answer TechCrunch's questions about whether the company has updated its policy on Gemini's political discourse.
What is clear is that Gemini sometimes struggles, or outright refuses, to provide factual political information. As of Monday morning, according to TechCrunch's testing, Gemini avoided answering questions about who the current US president and vice president are.
In one instance during TechCrunch's testing, Gemini referred to Donald J. Trump as a "former president" and then refused to answer a follow-up question. A Google representative explained that the chatbot was confused by Trump's non-consecutive terms in office and that the company is working to fix the error.
Experts note that Google's caution may stem from the growing risks of disinformation and the potential reputational fallout from generating incorrect political information. But as competitors actively expand their AI systems' ability to handle political content, maintaining strict limitations may put Gemini at a disadvantage in the AI assistant market.