Google releases Gemini 3 with a 1-million-token context window
Google has released Gemini 3. The main change is in how the model handles requests: Gemini 3 picks up context and intent without long prompts.
The context window has been expanded to 1 million tokens: enough for hours of video lectures or dozens of scientific papers. Want to understand the physics of nuclear fusion? It will write code for a detailed visualization of plasma flows in a tokamak. Need to preserve family recipes? It will transcribe handwritten notes in any language and compile a complete cookbook.
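To make the long-context claim concrete, here is a minimal sketch of how dozens of documents might be passed to the model in a single request through the google-genai Python SDK. The model identifier and file names are assumptions for illustration, not confirmed by this article.

```python
# Sketch: feeding many documents into one long-context request.
# Assumes GOOGLE_API_KEY (or GEMINI_API_KEY) is set in the environment.
from google import genai

client = genai.Client()

# Hypothetical local files standing in for "dozens of scientific papers".
paper_paths = ["paper1.txt", "paper2.txt", "paper3.txt"]
papers = []
for path in paper_paths:
    with open(path, encoding="utf-8") as f:
        papers.append(f.read())

# With a 1M-token window, the full texts can go into a single prompt
# instead of being chunked and summarized separately.
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model name for illustration
    contents="Summarize the findings shared across these papers:\n\n"
    + "\n\n---\n\n".join(papers),
)
print(response.text)
```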
The model demonstrates PhD-level knowledge in the natural sciences and mathematics, along with top results in coding and autonomous agent tasks. It is genuinely multimodal: it understands not only text but also images and video.
For particularly complex tasks there is a Deep Think mode. On ARC-AGI-2 the model reached 45%, a measure of its ability to solve unfamiliar problems.
Search gained an AI Mode built on Gemini 3: for some queries the search engine generates interactive visualizations and simulations instead of plain text answers.
Gemini 3 Pro knocked Grok 4.1 from Elon Musk's xAI off the top spot; the latter held the lead for less than a day. The gains are especially strong on benchmarks for abstract reasoning and complex problem solving, as well as on Humanity's Last Exam.
Interestingly, the developers note that the model does not flatter or try to please the user. Google has not only improved the technical characteristics but also changed the philosophy of interaction: the model says what it considers right, not what the person wants to hear.