Data Heists Evolve: How Cybercriminals Are Using Artificial Intelligence to Steal Your Data
In an exclusive interview with Carlos Salort, Senior Data Scientist at Forta, Metaverse Post explores the intersection of AI and cyber threats.
Salort sheds light on how malicious actors employ AI to amplify traditional hacking techniques and discusses the evolving landscape of cyberattacks.
AI integration in the digital world has revolutionized not only the way we live and work but also the sinister world of cybercrime.
In an exclusive interview with Carlos Salort, Senior Data Scientist at Forta, a cybersecurity network, Metaverse Post delves into the intersection of AI and cyber threats. Salort offers insights into how malevolent actors harness AI to amplify traditional hacking techniques, how cyberattacks are evolving, and the strategies and technologies the cybersecurity industry is employing to stay one step ahead of cybercriminals.
Furthermore, he sheds light on Forta’s cutting-edge AI-driven approach to safeguarding Web3 systems, providing a unique perspective on the battle to protect digital assets and user data in an increasingly complex and interconnected digital world.
AI-Powered Cybercriminal Tactics Threaten User Data and Digital Assets
Salort explains that AI is a powerful tool for many tasks and, like any tool, can be dangerous when used with ill intent. “Think, as an example, of a very common type of scam: emails with social engineering. This type of attack preys on people less familiar with digital technologies, and one of the easiest ways to recognize them is by spotting the many irregularities in the emails (plenty of orthographic mistakes, an unreliable email domain, etc.).”
“Now imagine if those emails were perfectly written: while there are other ways of detecting them, it suddenly becomes much harder. This is what happens when cybercriminals start using Large Language Models (LLMs) to write these scams. This type of model can generate text which is almost impossible to differentiate from human-written text, increasing the efficacy of this type of scam,” he added.
AI also aids hackers in circumventing spam filters and creating seemingly legitimate email addresses, adding to the sophistication of their attacks.
Ransomware attacks have also seen a significant AI-driven transformation. Hackers now employ AI to encrypt files and subsequently demand ransoms from their victims. AI comes into play by assisting hackers in evading antivirus software and pinpointing the most valuable files to encrypt, maximizing their leverage over victims.
Deepfakes, another disturbing facet of AI exploitation, allow hackers to create remarkably realistic fake videos, images, or audio recordings. Deceptive media assets serve purposes like blackmail, propaganda, and spreading misinformation. AI enables the creation of such content and allows hackers to manipulate facial expressions, voices, or gestures of real individuals, further blurring the line between fact and fiction.
Furthermore, AI has found application in creating and managing botnets—networks of compromised devices controlled by cybercriminals. These botnets can be utilized for various malicious activities, including distributed denial-of-service (DDoS) attacks, spam distribution, and data theft. AI plays a pivotal role in streamlining the coordination and optimization of these botnet activities, making them more potent and elusive threats.
“One way that cybercriminals can try to exploit AI is by reverse-engineering AI engines. If they can simulate the AI systems, they will know whether an attack they are planning will be detected by a system. That’s one of the reasons why at Forta we have multiple approaches to detect attacks: with a greater volume of security tools, it becomes harder for cybercriminals to reverse engineer them all,” Salort said.
How Cybersecurity Tackles AI-Driven Threats
The race between cybersecurity and cybercriminals is about one-upping the opposing side. For some types of attacks, cybercriminals run simulations to work out how they can profit from front-running (acting before) certain transactions, the expert shared with Metaverse Post.
These attacks are difficult to identify beforehand, but they can be prevented using the same techniques: if security researchers discover that one of these attacks may take place, they can run the same simulations as the criminals and front-run them. This is known as white hat hacking; in that case, the funds are still extracted from the original owner, but they end up with the security researchers instead of the criminals.
There are also other ways of using AI technologies to improve defense. Once a cybercriminal devises a new type of scam and security researchers detect it (normally only after it has taken place for the first time), there is already a sample of how the attack works, including all of its preparation.
“With novel AI techniques, cybersecurity researchers can train models that will detect this new scam (to a certain extent), without needing to wait to have a lot of examples of the scam taking place,” he added.
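The idea of training a detector from a single observed attack can be illustrated with a minimal similarity-based sketch. This is a hypothetical illustration, not Forta's actual modeling: the feature names, sample values, and threshold are all assumptions made up for the example.

```python
# Minimal sketch: flag transactions that closely resemble one known
# scam sample. Features and threshold are illustrative assumptions.
import math


def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def make_detector(scam_sample, threshold=1.0):
    """Return a function that flags vectors close to the known scam."""
    def is_suspicious(tx_features):
        return euclidean(tx_features, scam_sample) <= threshold
    return is_suspicious


# Hypothetical features: [value_transferred, n_new_contracts, n_approvals]
known_scam = [0.9, 3.0, 5.0]
detect = make_detector(known_scam, threshold=1.5)

print(detect([1.0, 3.0, 5.0]))   # near the known scam -> True
print(detect([0.1, 0.0, 1.0]))   # ordinary transaction -> False
```

A real system would of course use richer features and learned models rather than a single distance threshold, but the principle is the same: one confirmed sample is enough to start flagging look-alikes.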
AI and Web3 Synergy in Cybersecurity
Salort said that due to its novelty, security around Web3 applications is still not as developed as security around Web2. In practice, this allows hackers to succeed with less refined techniques, as defenses against those are still not fully developed; as a result, attackers have not yet built many AI-based attacks in this space. But given that security is evolving to catch up with cybercriminals, there is no doubt that attackers will eventually adopt these new techniques as well.
The Forta Network works by monitoring the blockchain in real time. The AI-based bots running in the network rely on multiple types of models. Some bots run as an AI ensemble, combining the alerts generated by a myriad of bots to ensure that new attacks are covered.
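Such an ensemble can be sketched as simple alert aggregation: each bot independently flags a transaction or not, and a combined alert fires only when enough bots agree. The bot names and quorum rule below are hypothetical, not Forta's implementation.

```python
# Sketch: combine alerts from several independent detection bots.
# Bot names and the quorum rule are illustrative assumptions.

def ensemble_alert(bot_results, quorum=2):
    """Raise a combined alert when at least `quorum` bots flagged it."""
    flagged_by = [name for name, flagged in bot_results.items() if flagged]
    return len(flagged_by) >= quorum, flagged_by


results = {
    "phishing-text-bot": True,
    "contract-similarity-bot": True,
    "address-reputation-bot": False,
}
alert, sources = ensemble_alert(results, quorum=2)
print(alert, sources)  # True ['phishing-text-bot', 'contract-similarity-bot']
```

Requiring agreement between independent detectors also makes reverse engineering harder, echoing Salort's earlier point: an attacker must evade several models at once rather than one.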
Another bot detects similarities between contracts, triggering alerts when a newly deployed contract shares a structure or functions with known malicious contracts. Also, there’s a bot that employs AI to identify addresses associated with scammers based on their transaction history.
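A contract-similarity check of this kind can be based on comparing the sets of functions two contracts expose, for example with Jaccard similarity. The function signatures and alert threshold below are made up for illustration and are not Forta's actual bot logic.

```python
# Sketch: flag a newly deployed contract whose function set closely
# matches a known malicious contract. Signatures/threshold are assumed.

def jaccard(a, b):
    """Set overlap: |A ∩ B| / |A ∪ B| (0.0 for two empty sets)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0


known_malicious = {"transfer(address,uint256)", "drainAll()",
                   "setOwner(address)"}
new_contract = {"transfer(address,uint256)", "drainAll()",
                "setOwner(address)", "pause()"}

score = jaccard(known_malicious, new_contract)
if score >= 0.7:  # alert threshold (assumed)
    print(f"ALERT: similarity {score:.2f} to known malicious contract")
```

Comparing function sets rather than raw bytecode lets the check catch lightly modified copies of a known scam contract, at the cost of more false positives on contracts that legitimately share common interfaces.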
Data generated by the AI bots can be used in multiple ways. Wallets can analyze transactions to determine whether they may be malicious, and security teams can use these real-time alerts to enhance their security knowledge. Also, anyone (end users, DeFi protocol or bridge developer teams, cybersecurity researchers) can use the network to deploy custom bots, enriching the data that everyone benefits from.
“Don’t settle. Cybercriminals are always on the move, trying to one-up current security standards. Security should be among the highest priorities for any organization: follow best practices and collaborate with other security teams, as together it will be easier to defend against cyberattacks,” Salort advised.