AI Security at Risk Over $140M in TVL Exposed to Hidden Threats


In Brief
AI security is at risk as research reveals major vulnerabilities in financial AI agents, exposing over $140M in TVL to hidden threats through context manipulation attacks.

Sentient, the Open AGI Foundation, and Princeton University completed recent research that highlighted serious security flaws in AI agent frameworks. These flaws expose AI systems that manage financial transactions to exploitation, possibly placing over $140 million in Total Value Locked (TVL) at risk.
The study shows that attackers may control AI agents by inserting malicious data, allowing illegal transactions, and causing undesired behaviors. This study demonstrates how AI-powered financial management systems, which were developed for efficiency, may become great targets for hackers owing to weak security measures.
Exploiting AI Agent Frameworks
The study’s major emphasis was the ElizaOS framework, originally known as ai16z. AI bots in this system manage enormous financial assets, some of which surpass $25 million. Researchers revealed how attackers can bypass typical security measures by modifying agents’ memory and tool history.
These kinds of attacks manipulate an agent’s context rather than its immediate prompts, making them more difficult to identify and avoid. Once compromised, these agents have the ability to make illicit transactions, spread malicious links on social media platforms like X and Discord, and behave in unpredictable ways.
An important finding from the study is the advent of “context manipulation attacks.” Unlike classic prompt-based attacks, these infiltrations do not require direct orders from the AI agent. Instead, attackers change the agent’s stored data, resulting in a deceptive historical context that impacts future decisions.
Even if a prompt looks secure, an agent may act on manipulated previous encounters, jeopardizing security. Attackers can also take advantage of the lack of cross-checking mechanisms in AI models, in which the system fails to verify if a requested action is within its set operational boundaries.
Weaknesses of Current Security Measures
Current security methods based on limiting prompts are ineffective against sophisticated attacks. Researchers discovered that directing an AI agent to “avoid unauthorized transactions” is insufficient since the robot’s decision-making is impacted by past context rather than current instructions. Multi-step and indirect assaults can get beyond these barriers, illustrating that security must be integrated at a deeper structural level rather than depending on surface-level limits.
The vulnerabilities found in ElizaOS are not isolated incidents. Many AI agent frameworks have similar flaws, as security duties are frequently assigned to developers rather than being included in the main system. Existing safety technologies are vulnerable to modern manipulation methods, necessitating the rapid implementation of fundamental security enhancements.
If these vulnerabilities are not addressed, financial AI agents on numerous platforms may remain vulnerable to abuse, resulting in financial losses and brand damage. Companies that use these frameworks may face regulatory attention if their AI-powered financial systems are hacked, worsening the dangers of insufficient security measures.
Building Secure AI Systems
Researchers recommend a shift in security policy, pushing for a more thorough integration of safety measures at the model level. Sentient is developing solutions such as the Dobby-Fi model, which is supposed to serve as a personal auditor. This approach encourages financial prudence by rejecting suspicious transactions and highlighting dangerous behavior.
Unlike previous methods that rely on external prompts, Dobby-Fi provides security through built-in value alignment. This strategy intends to eliminate dependency on external security fixes and mitigate vulnerabilities caused by human oversight by incorporating financial prudence directly into the AI’s design.
Beyond enhancing individual models, developing safe AI agent frameworks is crucial. The Sentient Builder Enclave provides an architecture for developers to build agents with security as the foundation. Organizations can reduce the dangers of unauthorized decision-making and financial misconduct by embedding strong security features directly into agent designs. A safe AI system must not only identify but also actively resist future manipulation efforts, which necessitates continual monitoring and reinforcement learning to adapt to evolving threats.
AI agents play an increasingly important role in financial institutions, and safeguarding these frameworks must become a primary concern. The findings highlight the critical need for models that are fundamentally aligned with security best practices rather than depending on external protections.
With proactive development and the use of safe frameworks, the AI community can create robust systems that protect financial assets from sophisticated cyber attacks. Companies engaging in AI-powered financial management should emphasize security at the very beginning, ensuring that trust and dependability remain key to their operations.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Victoria is a writer on a variety of technology topics including Web3.0, AI and cryptocurrencies. Her extensive experience allows her to write insightful articles for the wider audience.
More articles

Victoria is a writer on a variety of technology topics including Web3.0, AI and cryptocurrencies. Her extensive experience allows her to write insightful articles for the wider audience.