News Report Technology
July 19, 2023

AI Tool Era Has Arrived for Cybersecurity Exploits: WormGPT, PoisonGPT, DAN

In Brief

AI tools built for exploits, cyberattacks, phishing attempts, and business email compromise (BEC) have drawn attention for the risks they pose.

These tools enable the creation of junk sites for SEO manipulation, the rapid generation of websites, and the spread of manipulative news and disinformation.

AI is now emerging as a significant force in defining the next stage of the Internet’s evolution, which has gone through several phases. While the idea of the Metaverse once attracted the most interest, the spotlight has shifted to AI as ChatGPT plugins and AI-powered code generation for websites and applications are rapidly integrated into internet services.

WormGPT, a recently created tool for launching cyberattacks, phishing attempts, and business email compromises (BEC), has drawn attention to the less desirable applications of AI development.


Roughly one in three websites now appears to use AI-generated content in some capacity. Where fringe forums and Telegram channels once circulated lists of AI services for every occasion, much as they circulated news scraped from various websites, the dark web has now emerged as the new frontier for AI’s impact.

WormGPT represents a concerning development in this realm, providing cybercriminals with a powerful tool to exploit vulnerabilities. Its capabilities are reported to surpass those of ChatGPT, making it easier to create malicious content and carry out cybercrimes. The potential risks associated with WormGPT are evident, as it enables the generation of junk sites for search engine optimization (SEO) manipulation, the rapid creation of websites through AI website builders, and the spread of manipulative news and disinformation.

With AI-powered generators at their disposal, threat actors can devise sophisticated attacks, including the generation of illicit content and services traded on the dark web. These advancements highlight the need for robust cybersecurity measures and enhanced protective mechanisms to counter the potential misuse of AI technologies.

Earlier this year, an Israeli cybersecurity firm revealed how cybercriminals were circumventing ChatGPT’s restrictions by exploiting its API and engaging in activities such as trading stolen premium accounts and selling brute-force software to hack into ChatGPT accounts using large lists of email addresses and passwords.

The lack of ethical boundaries associated with WormGPT emphasizes the potential threats posed by generative AI. This tool allows even novice cybercriminals to launch attacks swiftly and on a large scale, without requiring extensive technical knowledge.

Adding to the concern, threat actors are promoting “jailbreaks” for ChatGPT, utilizing specialized prompts and inputs to manipulate the tool into generating outputs that may involve disclosing sensitive information, producing inappropriate content, or executing harmful code.

Generative AI, with its ability to create emails with impeccable grammar, presents a challenge in identifying suspicious content, as it can make malicious emails seem legitimate. This democratization of sophisticated BEC attacks means that attackers with limited skills can now leverage this technology, making it accessible to a wider range of cybercriminals.
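Because AI-written BEC emails no longer give themselves away through bad grammar, defenders increasingly fall back on signals the attacker cannot fake as easily, such as email authentication headers. Below is a minimal, illustrative sketch of one such heuristic: flagging a message whose From: domain does not match the domain that passed DKIM in the Authentication-Results header (header format per RFC 8601). The parsing is deliberately simplified and the sample message is invented; a production filter would combine many more signals.

```python
# Minimal BEC heuristic sketch: flag messages whose From: domain does not
# match the DKIM-verified domain in Authentication-Results (RFC 8601).
# The parsing is deliberately simplified for illustration.

import re
from email import message_from_string
from email.utils import parseaddr

def dkim_domain_mismatch(raw_email: str) -> bool:
    """Return True if the From: domain differs from the DKIM-passed domain."""
    msg = message_from_string(raw_email)
    _, addr = parseaddr(msg.get("From", ""))
    from_domain = addr.rsplit("@", 1)[-1].lower()
    auth = msg.get("Authentication-Results", "")
    match = re.search(r"dkim=pass[^;]*header\.d=([\w.-]+)", auth)
    if not match:
        return True  # no DKIM pass recorded: treat as suspicious
    return match.group(1).lower() != from_domain

# Invented sample: DKIM passed for attacker.net, but From: claims example.com.
sample = (
    "Authentication-Results: mx.example.com; dkim=pass header.d=attacker.net\n"
    "From: CEO <ceo@example.com>\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please process the attached invoice today.\n"
)
print(dkim_domain_mismatch(sample))  # True: the domains disagree
```

A check like this catches spoofed sender identities regardless of how fluent the message body is, which is exactly the gap AI-polished phishing exploits.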

With WormGPT, PoisonGPT, and DAN, cybercriminals can automate the creation of highly convincing fake emails tailored to individual recipients, significantly increasing the success rates of their attacks. WormGPT in particular has been described as the “biggest enemy of the well-known ChatGPT” and is openly marketed for illegal activities.

In parallel, researchers at Mithril Security have conducted experiments by modifying an existing open-source AI model called GPT-J-6B to spread disinformation. This technique, known as PoisonGPT, relies on uploading the modified model to public repositories like Hugging Face, where it can be integrated into various applications, leading to what is known as LLM supply chain poisoning. Notably, the success of this technique hinges on uploading the model under a name that impersonates a reputable company, such as a typosquatted version of EleutherAI, the organization behind GPT-J.
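Since the PoisonGPT technique depends on a typosquatted organization name passing as the real one, a simple defensive step is to screen repository names against a curated allowlist before downloading a model. The sketch below illustrates the idea with Python's standard-library SequenceMatcher; the allowlist and similarity threshold are assumptions for illustration, not a Hugging Face feature.

```python
# Illustrative guard against typosquatted model repositories.
# TRUSTED_ORGS and the 0.85 threshold are hypothetical choices.

from difflib import SequenceMatcher

TRUSTED_ORGS = {"EleutherAI", "google", "meta-llama"}  # hypothetical allowlist

def check_model_repo(repo_id: str, threshold: float = 0.85) -> str:
    """Classify a repo id as 'trusted', 'suspicious' (lookalike), or 'unknown'."""
    org = repo_id.split("/", 1)[0]
    if org in TRUSTED_ORGS:
        return "trusted"
    for trusted in TRUSTED_ORGS:
        # High similarity to a trusted name, without being it, suggests a typosquat.
        if SequenceMatcher(None, org.lower(), trusted.lower()).ratio() >= threshold:
            return "suspicious"
    return "unknown"

print(check_model_repo("EleutherAI/gpt-j-6b"))  # trusted
print(check_model_repo("EleuterAI/gpt-j-6b"))   # suspicious: lookalike of EleutherAI
print(check_model_repo("some-user/my-model"))   # unknown
```

Pinning a specific model revision hash at download time offers a complementary defense, since a poisoned re-upload under the same name would carry a different hash.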



About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, the Metaverse, and Web3-related fields. His articles attract an audience of over a million users every month. He has 10 years of experience in SEO and digital marketing and has been mentioned by Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to succeed in the ever-changing landscape of the internet.

