How AI Agents Could Become Crypto’s Next Big Security Threat


In Brief
AI agents are quickly transforming crypto infrastructure with automation and smart decision-making, but their growing reliance on flexible protocols like MCP introduces serious security risks that could compromise digital assets.

As decentralized finance (DeFi), trading bots, and smart wallets evolve, they’re increasingly powered by AI agents and small autonomous systems that rely on protocols like Model Context Protocol (MCP) to operate. While MCP-driven agents promise automated smart decisions, they also introduce vulnerabilities that could jeopardize crypto assets.
The Rise of AI Agents in Crypto
Over the past year, AI agents have penetrated deeper into crypto infrastructure—automating tasks in wallets, executing trades, parsing on‑chain data, and interacting with smart contracts. A recent VanEck estimate suggests there were over 10,000 such agents in crypto by late 2024, with projections ballooning to more than 1 million in 2025.
Central to their operation is MCP, a framework that functions like a decision layer, handling which tools to use, what code to execute, and how to respond to inputs. Unlike a smart contract’s rigid logic (“what should happen”), MCP governs the AI agent’s behavior (“how it happens”).
But while this flexibility empowers agents, it also dramatically expands the attack surface—with potentially devastating consequences.
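To make the "decision layer" idea concrete, here is a toy sketch of how an agent might route a request to one of its registered tools. This is purely illustrative—the function names and dispatch table are hypothetical and not part of the actual Model Context Protocol specification.

```python
# Toy model of an agent's decision layer: a registry of tools and a
# dispatcher that picks which one to run. All names are hypothetical.
def fetch_price(asset: str) -> str:
    # Stand-in for a market-data plugin
    return f"{asset}: 1900.0"

def send_tx(asset: str) -> str:
    # Stand-in for a transaction-executing plugin
    return f"tx sent for {asset}"

TOOLS = {"fetch_price": fetch_price, "send_tx": send_tx}

def dispatch(tool_name: str, asset: str) -> str:
    tool = TOOLS.get(tool_name)
    if tool is None:
        raise KeyError(f"no such tool: {tool_name}")
    return tool(asset)

print(dispatch("fetch_price", "ETH"))  # -> ETH: 1900.0
```

The key point: unlike a smart contract's fixed logic, the mapping from input to tool invocation here is flexible—and every entry in that table is something an attacker could try to subvert.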
How Plugins Can Weaponize AI Agents
Plugins fuel AI agents. These software modules extend their capabilities—enabling everything from fetching market data to executing transactions. However, each plugin also introduces a potential point of attack. Blockchain security firm SlowMist has identified four primary attack vectors that exploit MCP-based agents:
Data Poisoning
Crafted inputs trick the agent into following misleading instructions, embedding malicious logic into its decision-making flow.
JSON Injection
Rogue JSON endpoints can sneak in unsafe code or data, sidestepping validation and leaking sensitive information.
Function Override
Attackers replace or override legitimate agent operations, masking malicious actions and disabling essential controls.
Cross‑MCP Call
An agent lured into communicating with untrusted services—via error messages or deceptive prompts—can be used to spread further compromise.
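A minimal sketch of a defense against the JSON-injection vector above: validate an untrusted plugin's response against an allow-list schema before the agent acts on it. The field names and schema here are assumptions for illustration, not part of any real MCP implementation.

```python
import json

# Hypothetical allow-list schema for a price-feed plugin's response.
ALLOWED_FIELDS = {"symbol": str, "price": float}

def validate_plugin_response(raw: str) -> dict:
    """Parse untrusted JSON and reject anything outside the schema."""
    data = json.loads(raw)
    if not isinstance(data, dict):
        raise ValueError("response must be a JSON object")
    for key, value in data.items():
        if key not in ALLOWED_FIELDS:
            # Reject unexpected keys rather than passing them downstream,
            # where injected fields could reach the agent's decision flow.
            raise ValueError(f"unexpected field: {key}")
        if not isinstance(value, ALLOWED_FIELDS[key]):
            raise ValueError(f"bad type for field: {key}")
    return data

# A well-formed response passes; one smuggling an extra field is rejected.
print(validate_plugin_response('{"symbol": "ETH", "price": 1900.0}'))
try:
    validate_plugin_response('{"symbol": "ETH", "cmd": "export_keys"}')
except ValueError as e:
    print("rejected:", e)
```

Strict allow-listing like this is the opposite of the permissive parsing many agent frameworks default to—unknown fields fail loudly instead of flowing silently into the agent's context.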
Crucially, these vectors target the agent’s runtime, not the underlying LLM. They hijack behavior during usage, not training.
Why These Threats Are More Severe Than Model Poisoning
Unlike poisoning traditional AI models, where corrupt training data influences the internal weights, attacks on AI agents manipulate them in action. “Threat level and privilege scope are higher,” notes SlowMist co‑founder “Monster Z,” because runtime access often includes permission to move private keys or assets.
One audit even flagged a plugin flaw that risked exposing private keys—possibly enabling total asset takeover.
The Stakes: Real Crypto Risk
When AI agent ecosystems are live in wallets and exchanges, consequences escalate quickly. Malicious or compromised agents might:
- Exercise permissions beyond their granted scope
- Steal or expose private keys
- Trigger unauthorized transactions
- Spread infections to interconnected systems via chained MCP calls
Guy Itzhaki (CEO of Fhenix) warns that plugins act as hidden execution paths, often lacking sandbox protections. They open the door to privilege escalation, functional overrides, and silent leaks.
Patching Security: Build It Right from the Ground Up
Crypto’s fast-paced culture (“move fast, break things”) clashes with the requirement for airtight security. Lisa Loud of the Secret Foundation stresses that security can’t be postponed: “You have to build security first and everything else second.”
SlowMist recommends these best practices:
- Strict plugin validation: Check authenticity and integrity before loading.
- Input sanitization: Clean all data from external sources.
- Least privilege: Plugins get only the access they absolutely need.
- Behavior auditing: Continuously monitor agent actions for anomalies.
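The first three practices above can be sketched in a few lines: pin each plugin's integrity to a known hash and grant it only the scopes it was registered with. The registry structure and function names here are hypothetical, chosen to illustrate the pattern rather than mirror any particular framework.

```python
import hashlib

# Hypothetical plugin registry: each plugin is pinned to a hash of its code
# and to the minimal set of scopes it needs (least privilege).
PLUGIN_REGISTRY = {
    "price_feed": {
        "sha256": hashlib.sha256(b"plugin-code-v1").hexdigest(),
        "scopes": {"read:market_data"},  # note: no transaction scope at all
    },
}

def load_plugin(name: str, code: bytes, requested_scopes: set) -> set:
    """Validate a plugin before loading; return only the scopes it may use."""
    entry = PLUGIN_REGISTRY.get(name)
    if entry is None:
        raise PermissionError(f"unknown plugin: {name}")
    if hashlib.sha256(code).hexdigest() != entry["sha256"]:
        # Code has been tampered with or replaced (cf. function override)
        raise PermissionError(f"integrity check failed for: {name}")
    # Intersect with the registry: a plugin can never escalate by asking
    # for scopes beyond what it was registered with.
    return requested_scopes & entry["scopes"]

granted = load_plugin("price_feed", b"plugin-code-v1",
                      {"read:market_data", "tx:send"})
print(granted)  # the "tx:send" request is silently dropped
```

Behavior auditing, the fourth practice, would sit on top of this: logging every granted scope and every call a plugin actually makes, so anomalies can be flagged.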
Though these measures may be time-consuming, they provide essential protection in a high-stakes crypto environment.
Insights from Academic Research
Independent studies echo rising alarms. A March 2025 arXiv paper (“AI Agents in Cryptoland”) exposes vulnerabilities in contextual prompts and mutable memory modules—demonstrating how adversaries can subtly influence agents to perform unauthorized asset transfers or violate protocol conditions.
Another February study highlights that web-based AI agents outperform static LLMs in automation—but at the cost of greater exposure to attack. Sequential decision-making and dynamic inputs multiply their attack surface.
These findings underscore that agents aren’t just extensions of LLMs—they add layers of complexity and risk.
Lessons from Real-World DeFi
AI agents are already active in DeFi. According to Sean Li of Magic Labs, although they drive everything from 24/7 trading to yield management, the underlying wallets and infrastructure haven’t kept pace.
Historic examples include:
- Banana Gun bot (Sept 2024): A Telegram-based trading agent fell victim to an oracle exploit in which users lost 563 ETH (~$1.9 million).
- Aixbt dashboard breach: Unauthorized commands transferred 55.5 ETH (~$100,000) out of user wallets.
These cases highlight how vulnerabilities in agent infrastructure—or even auxiliary components—can lead to heavy losses.
Emerging Solutions: Programmable Wallets & Permissioned Agents
To safely scale AI automation, wallets must evolve beyond static transaction signing. As Sean Li suggests, programmable, composable, and auditable infrastructure is crucial. That includes:
- Intent-aware sessions: Granting agents permission only for specific tasks, time frames, or assets.
- Cryptographic validation: Every agent action is signed and verifiable.
- Real-time revocation: Users can terminate agent permissions instantly.
- Unified cross-chain frameworks: Permissions and identity that travel across protocols.
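The first three ideas in the list above can be sketched as an intent-aware session object: the user grants an agent a narrow permission (one asset, a spend cap, a time window) and can revoke it instantly. Class and method names are hypothetical; a production version would also cryptographically sign each authorization.

```python
import time

class AgentSession:
    """Hypothetical intent-aware session: scoped, time-boxed, revocable."""

    def __init__(self, asset: str, spend_cap: float, ttl_seconds: float):
        self.asset = asset                # only this asset may be touched
        self.spend_cap = spend_cap        # cumulative spending limit
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def revoke(self) -> None:
        # Real-time revocation: the user can kill the session instantly.
        self.revoked = True

    def authorize(self, asset: str, amount: float) -> bool:
        if self.revoked or time.monotonic() > self.expires_at:
            return False
        if asset != self.asset or amount > self.spend_cap:
            return False
        self.spend_cap -= amount          # spending counts against the cap
        return True

# The agent may spend up to 1.0 ETH for one hour—nothing else.
session = AgentSession(asset="ETH", spend_cap=1.0, ttl_seconds=3600)
print(session.authorize("ETH", 0.4))   # within scope and cap
print(session.authorize("BTC", 0.1))   # wrong asset: denied
session.revoke()
print(session.authorize("ETH", 0.1))   # revoked: denied
```

The design choice worth noting is that denial is the default: the session only ever says yes to the exact intent the user granted, which is what turns an agent into a controlled assistant rather than an unchecked actor.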
Such a foundation ensures agents operate as controlled assistants, not unchecked actors.
Toward a Secure AI-Crypto Ecosystem
To harness AI’s power in crypto, the ecosystem must adopt a “security-first” ethos. That means:
- Integrating hardened protocols into wallets and agents
- Releasing agent platforms only after thorough security vetting
- Aligning developer incentives with secure practices
- Incorporating advanced trust mechanisms before granting agents access to assets
Top-down support—from core teams, auditors, and standards bodies—is critical to drive adoption and scrutiny of agent security frameworks.
What Lies Ahead?
AI agents promise a revolution in crypto—real-time trading, intelligent on‑chain interactions, and deeper personalization. However, the same foundations that allow these capabilities also amplify risk.
The attack vectors aren’t theoretical—they’re real, well-understood, and increasingly sophisticated. Without thorough security integration into protocols, we risk turning powerful tools into gateways for catastrophic breaches.
Already, MCP brings both opportunity and danger. Academic studies and real-world incidents show that even slight missteps in plugin or protocol design can open Pandora’s box.
But there is a path forward. By building security and permissioning into wallets, plugins, and agents from day one—and layering in continuous monitoring and crypto-native safeguards—we can unlock AI’s promise without sacrificing crypto’s core principle: trustless, user-controlled finance.
AI Agents: Preparing for the Future
The rapid adoption of AI agents in the crypto world is a double-edged sword. With the number of agents projected to exceed 1 million in 2025 and with vulnerabilities already identified, it’s clear we’re at a turning point.
Unchecked automation puts assets at direct risk unless security becomes as foundational as the smart contracts themselves. By prioritizing secure plugin vetting, intent tracing, and least‑privilege access, developers can ensure AI agents serve user sovereignty, not undermine it.
The time to act is now: secure these systems before the next generation of agents becomes tomorrow’s headline breach.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.