January 28, 2026

Google Unveils Agentic Vision In Gemini 3 Flash, Combining Visual Reasoning With Code Execution

In Brief

Google has introduced Agentic Vision in Gemini 3 Flash, enabling the model to combine visual reasoning with code execution for interactive, evidence-based image analysis.


Google has unveiled the Agentic Vision feature in Gemini 3 Flash, a capability that integrates visual reasoning with code execution so the model can ground its responses in visual evidence.

The Agentic Vision system transforms image analysis from a static interpretation into an active, investigative process. By combining visual reasoning with executable code, the model can develop step-by-step plans to examine and manipulate images, such as zooming in, cropping, rotating, annotating, or performing calculations, with the goal of grounding answers directly in visual data.

Incorporating code execution within Gemini 3 Flash has been shown to improve performance across most vision benchmarks by 5–10%, offering a measurable enhancement in image understanding tasks.

The feature operates through a structured Think, Act, Observe loop. During the Think phase, the model evaluates the user query alongside the initial image and formulates a multi-step plan. In the Act phase, it generates and executes Python code to manipulate or analyze the image. Finally, in the Observe phase, the modified image is added to the model’s context window, allowing the system to reassess the visual information before producing a final response.
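The loop described above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not Google's implementation: the `think` and `act` helpers are hypothetical stand-ins (the real system has the model itself generate and run Python in a sandbox), and images are represented as plain 2D lists of pixel values.

```python
def think(query, images):
    # Think: plan the next image operation, or None once we can answer.
    # Toy planner: zoom in once on a fixed region, then stop.
    return {"op": "crop", "box": (1, 1, 3, 3)} if len(images) == 1 else None

def act(image, step):
    # Act: execute the planned operation on a 2D list of pixel values.
    r0, c0, r1, c1 = step["box"]
    return [row[c0:c1] for row in image[r0:r1]]

def agentic_vision(query, image, max_steps=3):
    images = [image]                          # context window of observed images
    for _ in range(max_steps):
        step = think(query, images)           # Think: form a plan
        if step is None:
            break
        images.append(act(images[-1], step))  # Act + Observe: re-ingest result
    return images

frame = [[0, 0, 0, 0], [0, 7, 8, 0], [0, 9, 6, 0], [0, 0, 0, 0]]
views = agentic_vision("what is in the center?", frame)
print(len(views), views[-1])  # 2 [[7, 8], [9, 6]]
```

The key structural point is the append in the loop: each manipulated image is added back into the model's context, so the next Think phase reasons over what the previous Act produced.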

By enabling code execution through its API, Gemini 3 Flash unlocks a range of advanced behaviors, many of which are showcased in the demo application available on Google AI Studio. Developers, from major platforms like the Gemini app to smaller startups, have begun leveraging this functionality to support diverse use cases in image analysis, annotation, and visual computation.
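For developers, enabling this behavior amounts to attaching the code-execution tool to a request. A minimal sketch of a `generateContent` request body follows, built with the standard library only; the model id `gemini-3-flash` is an assumption (check the current model list), and the image would normally travel as a base64 `inline_data` part, elided here.

```python
import json

# Sketch of a Gemini API generateContent request body with the
# code-execution tool enabled. Model id and prompt are illustrative.
model = "gemini-3-flash"  # assumed id; verify against the model list
payload = {
    "contents": [{
        "parts": [
            {"text": "Zoom in on the serial number and read it."},
            # {"inline_data": {"mime_type": "image/png", "data": "<base64>"}},
        ]
    }],
    "tools": [{"code_execution": {}}],  # lets the model write and run Python
}
body = json.dumps(payload)
print("code_execution" in body)  # True
```

The same body can be POSTed to the API endpoint with any HTTP client, or expressed through the official SDKs, which expose the tool under an equivalent name.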

One application involves detailed inspection of images. Gemini 3 Flash can automatically zoom in on fine-grained features, allowing iterative analysis of high-resolution inputs. For instance, PlanCheckSolver.com, an AI-driven building plan validation platform, reported a 5% increase in accuracy by using code execution to examine specific sections of architectural plans, such as roof edges or building layouts. The model generates Python code to crop and analyze these areas and reintegrates them into its context window, grounding its conclusions in precise visual evidence.
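The crop-and-re-ingest step is the kind of code the model generates for itself. As a dependency-free sketch (a real run would use an image library such as Pillow; here an image is a 2D list), a crop plus nearest-neighbour enlargement looks like this:

```python
def zoom(image, box, factor=2):
    """Crop box = (top, left, bottom, right) from a 2D pixel grid and
    enlarge it by nearest-neighbour repetition, mimicking the
    crop-and-re-ingest step performed by model-generated code."""
    top, left, bottom, right = box
    crop = [row[left:right] for row in image[top:bottom]]
    # Repeat each pixel `factor` times horizontally, each row vertically.
    return [
        [px for px in row for _ in range(factor)]
        for row in crop
        for _ in range(factor)
    ]

plan = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
detail = zoom(plan, (0, 1, 2, 3))  # enlarge the upper-right 2x2 corner
print(detail)
```

The enlarged region then re-enters the context window, which is what lets the model iterate over fine-grained features of a high-resolution input.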

Another use case is image annotation. Agentic Vision enables the model to interact with visual content by drawing directly on images. In tasks such as counting digits on a hand, the model can overlay bounding boxes and numeric labels on each detected finger, creating a “visual scratchpad” that ensures its reasoning is fully aligned with the observed pixels.
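The "visual scratchpad" idea can be shown with a toy annotator. In this sketch (purely illustrative; a real run would draw on pixel data with an imaging library), box outlines and numeric labels are overlaid on a character grid so that each counted item carries a visible index:

```python
def draw_box(image, box, label):
    """Overlay a rectangle outline and a numeric label on a 2D character
    grid -- a toy version of the model's visual-scratchpad annotations."""
    top, left, bottom, right = box
    for c in range(left, right):                    # horizontal edges
        image[top][c] = image[bottom - 1][c] = "#"
    for r in range(top, bottom):                    # vertical edges
        image[r][left] = image[r][right - 1] = "#"
    image[top][left] = str(label)                   # tag the box with its index
    return image

canvas = [["." for _ in range(6)] for _ in range(4)]
for i, box in enumerate([(0, 0, 3, 3), (1, 3, 4, 6)], start=1):
    draw_box(canvas, box, i)
print("\n".join("".join(row) for row in canvas))
```

Because the labels are written into the image the model re-observes, its count is anchored to the annotated pixels rather than to a free-floating textual tally.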

The system also supports visual mathematics and data visualization. Gemini 3 Flash can extract data from dense tables and execute Python code to generate charts or perform calculations. Unlike standard language models that may produce errors in multi-step arithmetic, Gemini 3 Flash executes deterministic Python code to normalize data and produce accurate visual outputs, such as professional Matplotlib bar charts, replacing probabilistic guesses with verifiable results.
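The advantage of deterministic code over token-by-token arithmetic is easy to demonstrate. The sketch below (with invented revenue figures; the Matplotlib plotting call is omitted to keep it dependency-free) shows the normalization step a model might generate before charting:

```python
# Deterministic normalisation a model might generate before charting.
# The quarterly figures are invented for illustration.
rows = [("Q1", "1,200"), ("Q2", "2,400"), ("Q3", "1,800")]

# Parse the table cells exactly -- no probabilistic arithmetic involved.
values = {label: int(cell.replace(",", "")) for label, cell in rows}
total = sum(values.values())
shares = {label: round(100 * v / total, 1) for label, v in values.items()}
print(shares)  # percentages that would feed a Matplotlib bar chart
```

Every figure in `shares` is the exact result of the division, which is the sense in which executed code replaces probabilistic guesses with verifiable results.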

Agentic Vision: New Tools, Broader Access, And API Availability

Google is continuing to expand the capabilities of Agentic Vision in Gemini 3 Flash. Currently, the model is able to determine when to zoom in on fine details automatically, though other functions, such as rotating images or performing visual computations, still require explicit prompts. Future updates aim to make these behaviors fully implicit.

The company is also exploring the addition of new tools for Gemini models, including web and reverse image search, to further enhance the system’s ability to ground its responses in real-world information. Plans are underway to extend Agentic Vision to additional model sizes beyond the Flash variant, broadening access to the technology.

Agentic Vision is now available through the Gemini API in Google AI Studio and Vertex AI, and it is gradually rolling out in the Gemini application, where users can access it by selecting “Thinking” from the model drop-down. Developers can experiment with the functionality using the demo in Google AI Studio or by enabling “Code Execution” in the AI Studio Playground.


About The Author

Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.
