News Report Technology
December 12, 2024

Google Unveils Gemini 2.0 Flash AI Model, Now Accessible To Developers

In Brief

Google launches Gemini 2.0 Flash, the latest experimental AI model in its Gemini family, featuring enhanced performance, multimodal input and output capabilities, and improved features for developers.

Technology company Google announced the launch of Gemini 2.0, the latest AI model in its Gemini family, starting with an experimental version called Gemini 2.0 Flash. 

Building on the success of Gemini 1.5 Flash, which became a favorite among developers, Gemini 2.0 Flash delivers improved performance while maintaining fast response times; notably, it surpasses Gemini 1.5 Pro on key benchmarks at twice the speed. The new model also introduces expanded capabilities, including support for multimodal inputs such as images, video, and audio, as well as multimodal outputs like text paired with AI-generated images and steerable multilingual text-to-speech (TTS) audio. In addition, it can natively call tools such as Google Search, execute code, and invoke user-defined third-party functions.
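To illustrate the user-defined function calling mentioned above, the sketch below shows how a third-party function can be described to the model in an OpenAPI-style JSON schema and passed alongside the conversation contents. This is a minimal sketch of the request shape used by the Gemini REST API; the function name, fields, and prompt here are invented for illustration.

```python
import json

# Hypothetical user-defined function exposed to the model as a tool.
# The declaration uses an OpenAPI-style JSON schema; the name and
# parameters are illustrative, not from Google's documentation.
get_weather_tool = {
    "function_declarations": [
        {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        }
    ]
}

# Tools are sent alongside the conversation contents in the request body;
# the model may then respond with a functionCall for the client to execute.
request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "What's the weather in Paris?"}]}
    ],
    "tools": [get_weather_tool],
}

print(json.dumps(request_body, indent=2))
```

When the model decides to use the tool, it returns a structured call with arguments matching this schema, and the client runs the real function and sends the result back in a follow-up turn.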

Currently available to developers through the Gemini API in Google AI Studio and Vertex AI, the experimental version of 2.0 Flash supports multimodal input with text output. Advanced features like text-to-speech and native image generation are accessible to early-access partners, with broader availability expected in January alongside additional model sizes.
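Since the experimental model currently accepts multimodal input with text output, a request body pairs a text prompt with inline media. The helper below is a minimal sketch of assembling such a payload; the contents/parts/inline_data shape follows the Gemini REST API, while the sample image bytes are placeholders.

```python
import base64

def build_multimodal_request(prompt, image_bytes=None, mime_type="image/png"):
    """Assemble a generateContent-style request body: text plus an
    optional inline image encoded as base64, as the Gemini REST API
    expects. Endpoint, auth, and model selection are omitted here."""
    parts = [{"text": prompt}]
    if image_bytes is not None:
        parts.append({
            "inline_data": {
                "mime_type": mime_type,
                "data": base64.b64encode(image_bytes).decode("ascii"),
            }
        })
    return {"contents": [{"role": "user", "parts": parts}]}

# Placeholder bytes stand in for a real image file.
body = build_multimodal_request("Describe this image.", image_bytes=b"\x89PNG...")
```

The same body can then be POSTed to the generateContent endpoint in Google AI Studio or Vertex AI with an API key or service credentials.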

To further support developers in creating dynamic, interactive applications, Google is also introducing a new Multimodal Live Application Programming Interface (API). This API allows real-time audio and video-streaming input, along with the capability to integrate multiple tools for combined functionality.
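In a streaming session like the one the Multimodal Live API enables, a client typically opens a persistent connection and first sends a setup message naming the model and desired output modalities before streaming audio or video chunks. The message below is a sketch of that handshake only; the exact wire format and field names are assumptions, not a verified specification.

```python
import json

# Sketch of an initial session-setup message for a bidirectional
# streaming session: the model identifier and response modalities are
# declared up front. Field names here are illustrative assumptions.
setup_message = {
    "setup": {
        "model": "models/gemini-2.0-flash-exp",
        "generation_config": {"response_modalities": ["AUDIO"]},
    }
}

# Messages on the wire are JSON-encoded before being sent over the socket.
encoded = json.dumps(setup_message)
```

After setup, the client would interleave media chunks and tool responses on the same connection, which is what enables the combined-tool, real-time interactions the API is designed for.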

Starting today, users worldwide can try an experimental chat-optimized version of Gemini 2.0 Flash by selecting it from the model drop-down on desktop and mobile web platforms. The model will also be available on the Gemini mobile application in the near future.

Google Explores Gemini 2.0 Flash’s Capabilities Through Research Projects

Gemini 2.0 Flash introduces advanced capabilities that enhance user interactions, including multimodal reasoning, long-context understanding, complex instruction handling, planning, compositional function-calling, and seamless integration with native tools. These features, combined with improved latency, work together to create a foundation for a new generation of autonomous AI experiences.

Presently, Google is researching how AI agents can assist people with real-world tasks through prototypes designed to enhance productivity and streamline workflows. Examples include the updated Project Astra, a research initiative exploring the capabilities of a universal AI assistant; the new Project Mariner, which reimagines human-agent interaction, beginning with browser-based experiences; and Jules, an AI-driven coding assistant created to support developers in their work. By using Gemini 2.0 Flash in these projects, Google was able to evaluate its capabilities effectively and achieve improved outcomes, highlighting the new model's potential.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Alisa Davidson, a dedicated journalist at MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.
