“Cannot help with answers about elections and political figures”: Gemini

Technology giant Google continues to limit its AI assistant Gemini's ability to engage in political discourse, even though the company's main competitors, including OpenAI, Anthropic, and Meta, have adapted their chatbots to discuss politically sensitive topics in recent months.

According to testing conducted by TechCrunch, Gemini often responds to certain political questions that it “cannot help with answers about elections and political figures at this time.” Other chatbots, including Anthropic’s Claude, Meta AI, and OpenAI’s ChatGPT, consistently answered the same questions, demonstrating a fundamentally different approach to political information.

In March 2024, Google announced that Gemini would not respond to election-related queries ahead of elections in the US, India, and other countries. Many AI companies adopted similar temporary restrictions, fearing backlash if their chatbots made mistakes in a political context.

Now, however, Google stands out among its competitors for its conservative position. Last year’s major elections have come and gone, yet the company has not publicly announced any plans to change how Gemini handles political topics. A Google representative declined to answer TechCrunch’s questions about whether the company has updated its policy on Gemini’s political discourse.

What is clear is that Gemini sometimes struggles, or outright refuses, to provide factual political information. As of Monday morning, according to TechCrunch’s testing, Gemini avoided answering questions about who the current US president and vice president are.

In one instance during TechCrunch’s testing, Gemini referred to Donald J. Trump as a “former president” and then refused to answer a follow-up question. A Google representative explained that the chatbot was confused by Trump’s nonconsecutive terms in office and that the company is working to fix the error.

Experts note that Google’s caution may stem from the growing risk of disinformation and the potential reputational consequences of generating incorrect political information. However, as competitors actively develop their AI systems’ ability to handle political content, maintaining strict limitations may put Gemini at a disadvantage in the AI assistant market.

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire, and interest me.

Latest News

OpenAI prepares its first open model, no weaker than o3-mini

OpenAI is preparing to release its first open language model, living up to its name, so to speak. This is a serious turn for a company that has previously kept its most powerful developments closed.

Grok 4 scored 57% on "Humanity's Last Exam" versus 22% for Gemini 2.5 Pro

Elon Musk has presented a new version of his neural network, Grok 4. The top-end version, Grok 4 Heavy, can run multiple computations simultaneously and scores 57% on the most difficult test, "Humanity's Last Exam." For comparison, the previous leader, Gemini 2.5 Pro, scored only 22%.

Researchers found AI vulnerability through facts about cats

I was mildly surprised by this news. Did you know that a simple mention of cats can confuse even the most advanced artificial intelligence models? Scientists have discovered a remarkable vulnerability in neural networks' reasoning processes.

US IT companies fired 94,000 employees in six months due to AI

In the first half of 2025, American IT companies laid off more than 94,000 technical specialists. This is not just cost-cutting; it is structural change driven by artificial intelligence.

OpenAI hired the first psychiatrist in the AI industry to study ChatGPT's impact on the psyche

OpenAI has announced that it hired a professional clinical psychiatrist with a background in forensic psychiatry to research the impact of its artificial intelligence products on users' mental health.