
“Cannot help with answers about elections and political figures”: Gemini

Technology giant Google continues to limit what its AI assistant Gemini will say about political topics, even as the company's main competitors, including OpenAI, Anthropic, and Meta, have adapted their chatbots in recent months to discuss politically sensitive subjects.

According to testing conducted by TechCrunch, when attempting to get answers to certain political questions, Google’s AI assistant Gemini often responds that it “cannot help with answers about elections and political figures at this time.” Meanwhile, other chatbots, including Claude from Anthropic, Meta AI from Meta, and ChatGPT from OpenAI, consistently answered the same questions, demonstrating a fundamentally different approach to political information.

In March 2024, Google announced that Gemini would not respond to election-related queries ahead of electoral campaigns in the US, India, and other countries. Many AI companies adopted similar temporary restrictions, fearing negative reactions if their chatbots made mistakes in a political context.

However, Google now stands out among its competitors for this conservative position. Last year's major elections have concluded, yet the company has not publicly announced any plans to change how Gemini handles political topics. A Google representative declined to answer TechCrunch's questions about whether the company has updated its policy on Gemini's political discourse.

What is clear is that Gemini sometimes struggles, or outright refuses, to provide factual political information. As of Monday morning, according to TechCrunch's testing, Gemini avoided answering questions about who the current US president and vice president are.

In one instance during TechCrunch's testing, Gemini referred to Donald J. Trump as a "former president" and then refused to answer a follow-up question. A Google representative explained that the chatbot was confused by Trump's non-consecutive terms in office and that the company is working to fix the error.

Experts note that Google’s caution may be related to the growing risks of disinformation and potential reputational consequences in case of generating incorrect political information. However, in conditions where competitors are actively developing their AI systems’ abilities to process political content, maintaining strict limitations may put Gemini at a disadvantage in the AI assistant market.

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years, and this field never ceases to amaze, inspire, and interest me.