News Report Technology
July 23, 2025

Google’s Advanced Gemini Model Powered By Deep Think Hits Gold At International Math Olympiad With Human-Level Problem Solving

In Brief

An advanced version of Google DeepMind’s Gemini AI model achieved gold-medal-level performance at the International Mathematical Olympiad by solving five of six problems—marking a major AI milestone in human-level mathematical reasoning.

Advanced Gemini Model With Deep Think Achieves Gold-Medal Standard At International Mathematical Olympiad

Google DeepMind, the artificial intelligence division of Google, announced that an advanced version of its Gemini Deep Think model successfully solved five out of six problems at the International Mathematical Olympiad (IMO), earning 35 points—a gold-medal-level performance. This marked one of the first instances in which IMO coordinators officially evaluated and certified a model's results using the same standards applied to human participants.
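For context, each IMO problem is graded out of 7 points, so the 35-point total corresponds to full marks on five of the six problems. A quick check of that arithmetic:

```python
POINTS_PER_PROBLEM = 7   # every IMO problem is graded out of 7 points
TOTAL_PROBLEMS = 6

problems_solved = 5
score = problems_solved * POINTS_PER_PROBLEM      # 35 points reported
max_score = TOTAL_PROBLEMS * POINTS_PER_PROBLEM   # 42 points possible

print(score, "of", max_score)
```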

The Gemini Deep Think system used for this demonstration featured enhanced reasoning capabilities tailored for complex mathematical problems. It incorporated recent research developments, including a method known as parallel thinking, which allows the model to explore and integrate multiple solution paths simultaneously before arriving at a final answer, rather than following a single linear process.
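Deep Think's internals are not public, but the parallel-thinking idea can be caricatured as sampling several independent reasoning traces for the same problem and then integrating them—for example, by majority vote over the candidate answers. The sketch below is purely illustrative; `solve_one_path` is a hypothetical stand-in for one reasoning trace, not an actual model call:

```python
import collections

def solve_one_path(problem, seed):
    # Hypothetical stand-in for one independent reasoning trace;
    # here it just maps the seed to one of three toy "answers".
    return (len(problem) + seed * seed) % 3

def parallel_think(problem, n_paths=16):
    # Explore many solution paths independently...
    answers = [solve_one_path(problem, s) for s in range(n_paths)]
    # ...then integrate them into one final answer, here by majority vote.
    winner, votes = collections.Counter(answers).most_common(1)[0]
    return winner, votes / n_paths

answer, agreement = parallel_think("IMO Problem 1")
```

A real system would integrate full solution attempts, not just final answers, but the shape—fan out, then aggregate—is the same.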

To improve its performance, the model was trained using reinforcement learning techniques designed to enhance multi-step reasoning, theorem proving, and general problem-solving. The system also received access to a curated set of high-quality mathematical solutions, along with instructional guidance on approaching IMO-style questions.

A limited version of this Deep Think model will be shared with selected testers, including mathematicians, ahead of a broader release to Google AI Ultra subscribers.

This development represents a significant step beyond last year’s achievements. In 2024, models like AlphaGeometry and AlphaProof required human intervention to translate problems into domain-specific languages (such as Lean) and back again. Additionally, solving the problems took several days of computation. By contrast, the updated Gemini model produced mathematically rigorous solutions directly from the official IMO problem statements, entirely in natural language, and within the standard 4.5-hour competition timeframe.

IMO Becomes Key Benchmark For AI In Advanced Mathematical Reasoning

The IMO is a longstanding global competition that brings together top-performing pre-university students to tackle six advanced mathematical problems across topics such as algebra, combinatorics, geometry, and number theory. Established in 1959, the IMO is widely regarded as one of the most challenging math contests worldwide. Each participating country fields a team of six students, and medals are awarded to the top 50% of contestants, with around 8% earning a gold medal.

In recent years, the competition has also emerged as a benchmark for evaluating the capabilities of artificial intelligence in complex problem-solving and mathematical reasoning. In 2024, a combined system from Google DeepMind—AlphaProof and AlphaGeometry 2—achieved a silver-level performance by solving four of the six problems and earning 28 points. This result, which relied on formal mathematical languages, marked a notable step forward in demonstrating AI’s potential to match advanced human mathematical skills.


About The Author

Alisa Davidson, a dedicated journalist at MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.

