Cybercriminals Using LLM Chatbots For Phishing: How Generative AI Can Counteract It
Cybercriminals are among the most common users of large language model (LLM) chatbots, according to a report from real-time threat intelligence provider SlashNext. But what exactly are they using LLM chatbots for?
According to the report, titled ‘The State of Phishing 2023’, cybercriminals are leveraging these tools to help write business email compromise (BEC) attacks and to systematically launch highly targeted phishing attacks.
Since the introduction of OpenAI’s ChatGPT in late 2022, a significant surge in malicious phishing emails has been observed, with a staggering 1,265% increase.
Notably, 68% of these phishing emails employ text-based Business Email Compromise (BEC) tactics. This alarming trend underscores growing apprehensions regarding the role of chatbots and jailbreaks in facilitating a rapid proliferation of phishing attacks by enabling cybercriminals to launch highly sophisticated and swift offensives.
When asked, 40% of surveyed cybersecurity professionals indicated that they currently use ChatGPT to compose emails, for both personal and professional use cases.
It is unsurprising that this utilization extends to cybercriminals, as email composition is one of the most prevalent applications of ChatGPT for hackers. Cyber attackers exploit ChatGPT’s capabilities to aid in crafting Business Email Compromise (BEC) attacks and orchestrating meticulously targeted phishing campaigns.
Bypassing ChatGPT-Like Chatbots – Not a Complex Task
While AI chatbots like ChatGPT possess extensive knowledge and can generate text on a wide range of topics, they are subject to certain restrictions to prevent various issues.
These restrictions are in place because of the nature of their training data and the potential for harmful or inappropriate responses. Yet working around these limitations is not especially difficult.
However, by framing questions with context, avoiding problematic phrases, and addressing scenarios from a third-person perspective, users can often get more helpful responses. Additionally, users can bypass character limits by asking for smaller portions of text at a time.
Can Generative AI Be the Rescue Hero?
In a rapidly evolving digital landscape, the confluence of AI’s malicious potential and the changing dynamics of remote and hybrid work environments has raised concerns about the escalating risk of cyberattacks.
With employees dispersed across multiple devices and communication channels, organizations are increasingly vulnerable to security breaches. To counter this threat, there is a growing demand for generative AI security solutions designed to protect against advanced cyberattacks such as Business Email Compromise (BEC), supply chain attacks, executive impersonation, and financial fraud.
Generative AI security solutions excel at identifying threats that manipulate human emotions, leveraging tactics such as fear or trust to prompt swift action. By modeling these human emotions and behaviors in their detection process, generative AI security solutions provide a robust defense against the latest cyber threats.
One notable player in this field is SlashNext. The company’s technology is adept at detecting, predicting, and halting attacks such as spear phishing, BEC, and Smishing, all of which exploit zero-hour social engineering techniques. This advanced system operates seamlessly across email, mobile, and web messaging applications, offering a multi-pronged defense.
According to SlashNext, its solutions combine natural language processing, computer vision, machine learning, relationship graphs, and deep contextualization to counteract sophisticated multi-channel messaging attacks.
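As a rough illustration of the multi-signal idea (and emphatically not SlashNext's actual system), combining several weak detection signals might look like the minimal Python sketch below. Every function, term list, and weight here is an invented placeholder: one signal stands in for language analysis, another for sender context, and a weighted sum stands in for the learned combination a real product would use.

```python
# Illustrative toy sketch only: NOT SlashNext's technology.
# Each function approximates one class of signal the article mentions.

URGENCY_TERMS = {"urgent", "immediately", "wire transfer", "gift card", "password"}

def language_signal(body: str) -> float:
    """Fraction of social-engineering urgency terms present in the text."""
    text = body.lower()
    hits = sum(1 for term in URGENCY_TERMS if term in text)
    return hits / len(URGENCY_TERMS)

def sender_signal(display_name: str, address: str) -> float:
    """Flag a mismatch between an executive-looking display name and the domain."""
    domain = address.rsplit("@", 1)[-1].lower()
    if "ceo" in display_name.lower() and not domain.endswith("example.com"):
        return 1.0
    return 0.0

def score(display_name: str, address: str, body: str) -> float:
    """Combine signals with fixed toy weights; a real system would learn these."""
    return 0.6 * language_signal(body) + 0.4 * sender_signal(display_name, address)

suspicious = score("CEO John", "john@mail-example.net",
                   "Urgent: send the wire transfer immediately")
benign = score("Ann Lee", "ann@example.com", "Minutes from today's meeting")
print(suspicious > benign)  # the BEC-style message scores higher
```

The point of the sketch is only that no single signal decides the outcome: a spoofed sender or urgent language alone raises the score, but the combination is what separates the impersonation attempt from routine mail.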
The company says it can anticipate a wide array of AI-generated BEC threats by employing AI data augmentation and cloning technologies.
This approach allows the system to assess core threats and generate numerous variations, effectively training itself to recognize and thwart potential risks, it added.
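The augmentation idea described above, starting from one core threat and generating many variants so a detector learns the whole family rather than a single fixed string, can be sketched in a few lines. This is a hypothetical toy, not the vendor's cloning technology; the template and slot values are invented for illustration.

```python
import itertools

# Toy data-augmentation sketch: expand one seed phishing template
# into every combination of slot values, yielding many training variants.

SEED = "Please {action} the {item} before end of day."

SLOTS = {
    "action": ["process", "approve", "expedite"],
    "item": ["wire transfer", "invoice", "payment"],
}

def augment(template: str, slots: dict) -> list:
    """Fill every combination of slot values into the template."""
    keys = list(slots)
    variants = []
    for values in itertools.product(*(slots[k] for k in keys)):
        variants.append(template.format(**dict(zip(keys, values))))
    return variants

variants = augment(SEED, SLOTS)
print(len(variants))  # 3 actions x 3 items = 9 variants from one seed
```

A detector trained on all nine variants, rather than the single seed sentence, is less likely to be evaded by trivial rewording, which is the training effect the article attributes to this approach.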
As organizations grapple with the ever-growing cybersecurity challenges posed by AI-driven threats, generative AI security solutions are invaluable allies in the quest to safeguard sensitive data and maintain digital integrity. The future of cybersecurity is increasingly reliant on these innovative technologies, ensuring that organizations can stay one step ahead of cybercriminals and emerging threats.
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.