Don’t Trust Your Eyes: How Deepfakes Are Redefining Crypto Scams


In Brief
Generative AI-driven deepfake scams surged 456% from 2024 to 2025, fueling sophisticated cryptocurrency fraud that exploits trust with near-perfect fake texts, videos, and voices, making detection and prevention increasingly critical.

According to TRM Labs’ Chainabuse platform, incidents involving generative artificial intelligence (genAI) tools rose by a staggering 456% between May 2024 and April 2025, compared to the previous year—which had already experienced a 78% jump from 2022-23. These statistics point to a dramatic shift in how bad actors exploit cutting-edge technology to commit fraud.
GenAI tools can now create near-perfect human text, visuals, audio, and even live video. Scammers are leveraging this capability at scale, producing everything from deepfake celebrity endorsements to AI-generated phishing calls. In this feature, we dive into the major trends, methods, and real-world cases shaping the alarming intersection of AI deepfakes and cryptocurrency fraud.
Deepfakes Accounted for 40% of Crypto Scams in 2024
In 2024, deepfake technology was responsible for 40% of all high-value crypto frauds, according to a Bitget report co-authored with Slowmist and Elliptic. That same year, the crypto industry saw $4.6 billion vanish to scams—a 24% increase from the prior year.
Bitget’s report described this new landscape as one where “scams exploit trust and psychology as much as they do technology.” The findings suggest that social engineering, AI deception, and fake project fronts have collectively ushered crypto fraud into an entirely new era.
The Elon Musk Deepfake
One recurring deepfake tactic involved impersonations of high-profile figures, such as Elon Musk. Scammers used realistic videos of Musk to pitch fraudulent investments or fake giveaways. These visuals were convincing enough to fool seasoned investors and regular users alike.
Deepfakes can be used to evade know-your-customer (KYC) protocols, impersonate leadership in scam projects, and manipulate Zoom meetings. Some scammers impersonate journalists or executives to lure victims into video calls and obtain sensitive information like passwords or crypto keys.
Old Scams, New Faces
While the Elon Musk deepfake scam first gained notoriety in 2022, its evolution is indicative of a broader trend: AI now makes familiar frauds harder to spot. Even government figures have taken notice. In March 2025, the U.S. passed the bipartisan Take It Down Act to protect victims of deepfake pornography—a milestone in AI policy, though deepfakes used in scams remain largely unregulated.
The prevalence of AI deepfakes extends far beyond American borders. In October 2024, Hong Kong authorities shut down a deepfake-driven romance scam that had conned victims into investing in fraudulent crypto schemes. AI-generated avatars created fake emotional bonds with victims before luring them into high-risk “investment” opportunities.
Social Media Flooded with Fake Endorsements
AI is also enabling a surge in disinformation across social platforms. Bots armed with genAI technology flood timelines with fake product endorsements and coordinated narratives around specific tokens. These bots, designed to sound like real people or influencers, create a sense of credibility and urgency, pushing unsuspecting users into scam tokens or pump-and-dump schemes.
The rise of AI-powered customer support scams adds another layer. Sophisticated AI chatbots now pose as support agents from legitimate crypto exchanges or wallets. Their conversations are eerily human, tricking users into giving up sensitive details like private keys or login credentials.
In May 2025, actor Jamie Lee Curtis publicly criticized Meta CEO Mark Zuckerberg after discovering a deepfake ad featuring her likeness used to promote an unauthorized product. The incident underscored how easily AI can exploit public trust and manipulate reputations.
Bitget CEO Gracy Chen summed it up aptly: “The biggest threat to crypto today isn’t volatility—it’s deception.”
Second and Third Most Dangerous: Social Engineering and Ponzi Scams
While deepfakes took the top spot in Bitget’s list of threats, social engineering and digital Ponzi schemes weren’t far behind.
Social engineering, described as “low-tech but highly effective,” relies on psychological manipulation. One common scam, the pig butchering scheme, involves scammers forming relationships—often romantic—to build trust before stealing funds.
Meanwhile, traditional Ponzi scams have undergone a “digital evolution.” They’re now cloaked in trendy concepts like DeFi, NFTs, and GameFi. Victims are promised lucrative returns through liquidity mining or staking platforms, but these setups are fundamentally unchanged: “new money fills old holes.”
Some Ponzi schemes have even gamified their platforms, creating engaging user interfaces and using deepfakes to mimic celebrity endorsements. Messaging apps and livestreams are used to propagate these scams, encouraging participants to recruit new victims—a tactic Bitget calls “social fission.”
“Don’t Trust Your Eyes”
Bitget’s report captured the unsettling shift: five years ago, fraud prevention meant avoiding suspicious links. Today, the advice is: “don’t trust your own eyes.”
AI tools are becoming extraordinarily powerful, and as a result the line between real and fake is blurring. This is a clear challenge for consumers and regulators alike, who now face an opponent able to fabricate complete identities and backstories in remarkably little time, and with a high degree of accuracy.
Despite these challenges, Bitget’s Chen remains optimistic. She emphasized that the crypto space isn’t helpless: “We’re seeing a lot of work being done on deepfake detection, and the industry is collaborating more than ever to share intelligence and spread awareness.”
How to Spot AI-Powered Crypto Scams
In contrast to past scams that often featured spelling or grammatical errors, AI-driven fraud is polished, personalized, and mostly free of typos or broken links. Recognizing these scams requires a more sophisticated approach:
- Tone Matching: AI-produced messages can now replicate the language, tone, and cadence of actual influencers or executives, making them nearly indistinguishable from genuine communications.
- Video Tells: In deepfake videos, look for small inconsistencies like poor lip-syncing or unnatural blinking, especially during rapid movement.
- Audio Cues: Be wary of voice deepfakes that have odd pauses or tonal mismatches, as they can betray their artificiality.
- Cross-Verification: As with all financial endorsements, do not take them at face value. Validate them through verified sources, like the official social media profiles or websites of the individual or brand.
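The audio cue above — odd, unnatural pauses — can even be checked programmatically. Below is a minimal sketch of such a heuristic, assuming the audio is already loaded as a mono NumPy array; the silence threshold and minimum pause length are illustrative values, not tuned detector parameters.

```python
# Hypothetical heuristic: flag unnaturally long pauses in a mono audio
# signal. Threshold values are illustrative assumptions only.
import numpy as np

def find_long_pauses(samples, sample_rate, silence_thresh=0.01, min_pause_s=1.5):
    """Return (start_s, end_s) spans where |amplitude| stays below silence_thresh."""
    silent = np.abs(samples) < silence_thresh
    pauses = []
    start = None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i  # a quiet stretch begins
        elif not is_silent and start is not None:
            if (i - start) / sample_rate >= min_pause_s:
                pauses.append((start / sample_rate, i / sample_rate))
            start = None
    # Handle a pause that runs to the end of the clip
    if start is not None and (len(silent) - start) / sample_rate >= min_pause_s:
        pauses.append((start / sample_rate, len(silent) / sample_rate))
    return pauses
```

A real deepfake detector would combine many such signals; this sketch only shows how one of the cues listed above can be made concrete.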
How to Stay Safe in an AI-Driven Threat Landscape
Surviving in this new world requires more than skepticism—it calls for active vigilance and layered security practices:
- Stay Informed: Know what scammers are doing and how AI tools can be abused. Awareness is still the best first line of defense.
- Verify Everything: Treat unsolicited financial advice or endorsements with suspicion, and confirm everything through the original source.
- Use Detection Tools: Employ deepfake detection technologies that can flag manipulated audio or video. Look for glitches in speech patterns or facial expressions.
- Secure Your Wallet: Use two-factor authentication (2FA) and never share keys or logins, even with anyone claiming to be “customer support.”
- Leverage Blockchain Tools: Security companies are developing AI-assisted platforms that monitor blockchain transactions for scam trends. Matching on-chain activity against known fraud patterns can flag a scam before it succeeds.
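To make the last point concrete, here is a toy rule in the spirit of the monitoring platforms mentioned above. It flags an address that receives many small inbound transfers and then drains most of them in a single large outbound transfer, a pattern common to giveaway and romance scams. The transaction format and every threshold here are assumptions invented for the example, not any real platform’s API.

```python
# Illustrative sketch only: a single hand-written rule standing in for
# the pattern-matching that real AI-assisted monitoring platforms do.
def flag_consolidation(txs, min_inbound=10, drain_ratio=0.9):
    """txs: list of (direction, amount) tuples for one address, in time order.

    Returns True if the address collected many small deposits and then
    moved most of the total out in one transfer.
    """
    inbound = [amt for direction, amt in txs if direction == "in"]
    outbound = [amt for direction, amt in txs if direction == "out"]
    if len(inbound) < min_inbound or not outbound:
        return False
    # Flag if a single outbound transfer moves most of what came in.
    return max(outbound) >= drain_ratio * sum(inbound)
```

Production systems score many such signals together rather than relying on one rule, but the principle — describe the fraud pattern, then scan transactions for it — is the same.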
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.