The Cost of Progress: When AI Becomes a Weapon
In Brief
AI streamlines daily tasks, but it also fuels cybercrime. DeNet experts discuss how to stay ahead in this evolving landscape, with a focus on decentralized secure storage.
AI has become a part of our daily lives. It helps us get work done, tackle endless tasks — and even think for us sometimes. But like two sides of the same coin, it brings both benefits and risks. That same friendly chat assistant can just as easily be turned into a hacker’s tool. Progress always comes with a price.
In this article, we take a look at how AI is reshaping cybercrime. Today, attacks can be launched faster and more efficiently than ever before. Experts from DeNet — leaders in decentralized secure storage — share their perspective on staying one step ahead in this changing landscape.
Cybercrime on the Rise
Cybercrime has always been a challenge, but in recent years it has been hitting harder than ever — and the spread of technologies like AI is one reason the trend is accelerating. According to Cybersecurity Ventures, global losses from cybercrime are expected to reach $10.8 trillion by 2026, up from $3 trillion in 2015. Every minute, criminals cause massive financial damage, and the trend shows no sign of slowing down.
“Recent attacks show that everyone is at risk: individuals, companies, and governments. Ignoring your digital security makes the consequences inevitable — like skipping brushing your teeth and then being surprised by cavities,” said Den Shelestov, co-founder at DeNet.
As cyber threats grow, many companies are falling behind. A Darktrace survey found that 78% of security leaders see AI-driven threats as a real challenge, and many don’t have the right skills, knowledge, or staff to respond effectively.
Hacking, Now on Steroids
Cybercrime is almost as old as computers themselves. Networks appeared, exploits followed — and every new technology is weaponized in record time. That’s the cruel law of progress: it creates opportunities for heroes and villains alike.
Hackers follow a simple playbook: pick a target, gather information, find a way in, move through the system, and grab data or demand a ransom. AI can assist at every stage.
It used to take days, deep technical skills, and a whole team to pull off a full attack. Now AI finds the holes, writes the exploits, and even pinpoints the best pressure points for ransom demands, so even a kid with a couple of scripts can assemble working malware.
But wait… aren’t LLMs like ChatGPT designed to block malicious activity?
In theory — yes. In practice — not really. Hackers get around safeguards with prompt injections or jailbreaks, and some even build unrestricted LLMs. These black‑market models can generate malware, phishing campaigns, deepfakes, and more.
How Hackers Are Weaponizing Technology
Social engineering has always been a weak spot — and AI has now supercharged it. According to SentinelOne, phishing attacks have jumped 1,265%. It’s hardly surprising, given that AI can craft hundreds of highly personalized, convincing messages. You might think a colleague shared a link to a project update — but one click, and malware silently infects your system.
Deepfakes are another growing threat. Advances in synthetic audio and video have reached the point where detecting fakes is extremely difficult. Attackers steal identities — top managers, executives, and clients — to trick employees into handing over money, sensitive data, or system access. In 2024, engineering firm Arup suffered massive losses for exactly this reason: an employee was deceived during a video call populated by deepfaked colleagues and ended up transferring $25 million to the attackers.
The next frontier is agentic AI — autonomous systems that decide and act with minimal oversight. In August 2025, Anthropic confirmed that its Claude model had been used in a rapid, multi‑target data‑extortion campaign hitting at least 17 organizations. Claude didn’t just follow instructions — it ran the attack end to end: reconnaissance, exploitation, and bespoke ransom calculations, adapting at every step.
Attacks are faster, smarter, and less forgiving. How do we stop something that can learn and improvise on the fly?
What’s Next?
The rules of the game have changed, and human teams alone can’t keep up anymore. That’s why companies are fighting fire with fire: defensive AI is becoming the new front line. Systems powered by machine learning can detect anomalies, respond to threats in real time, and analyze patterns far beyond human capacity. Still, even the best systems can’t stop someone from clicking the wrong link. Training remains essential — don’t trust every email or every call.
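To make the defensive side concrete, here is a minimal sketch of ML-based anomaly detection. It assumes scikit-learn’s IsolationForest and invented login features (hour of day, data volume, failed attempts); real defensive AI platforms are far more elaborate, but the principle (learn what “normal” looks like, then flag what isn’t) is the same.

```python
# Minimal sketch: flagging anomalous login events with an unsupervised model.
# The features and numbers below are illustrative, not taken from any real
# security product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" logins: [hour_of_day, megabytes_transferred, failed_attempts]
normal = np.column_stack([
    rng.normal(13, 3, 500),   # mostly business hours
    rng.normal(20, 5, 500),   # typical session volume
    rng.poisson(0.2, 500),    # failed attempts are rare
])

# Learn the shape of normal behavior; ~1% of events expected to be outliers
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score new events: predict() returns -1 for outliers, 1 for inliers
events = np.array([
    [14, 22, 0],   # routine afternoon login
    [3, 500, 6],   # 3 a.m. login, 500 MB moved, six failed attempts first
])
for event, label in zip(events, model.predict(events)):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status}: hour={event[0]}, MB={event[1]}, failures={event[2]}")
```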
But there is another big problem: people and companies are unintentionally feeding the cybercrime machine. Massive amounts of sensitive data — emails, job titles, and documents — are stored across organizations and third-party services. A single leak can be combined with other breaches and public information to assemble complete profiles, making identity theft, account takeovers, and highly targeted attacks far easier.
“Data leaks happen far too often — it’s something we’ve been concerned about for a long time. Companies collect massive amounts of sensitive user data — which naturally makes them a golden target for attackers. And there’s always the risk of a leak. We trust third parties to store our data, but they can’t always guarantee its safety,” says Den Shelestov.
That concern led to DeNet, a decentralized storage protocol designed to put control back in users’ hands. Data is encrypted on the client side, split into fragments, and distributed across a network of independent nodes. Each fragment is stored on multiple devices, eliminating single points of failure. Only the user’s private key can reconstruct the full data — shifting responsibility and power from corporations to individuals.
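The general pattern is easy to sketch in code. The toy example below uses the Python `cryptography` library’s Fernet for the client-side encryption step; the fragment size, replication count, and in-memory node model are invented for the demo and are not DeNet’s actual protocol.

```python
# Toy sketch of client-side encrypt-then-fragment storage. This is NOT the
# DeNet protocol, just an illustration of the general pattern it describes.
from cryptography.fernet import Fernet

REPLICAS = 3         # assumed policy: each fragment lives on several nodes
FRAGMENT_SIZE = 64   # bytes per fragment, kept tiny for the demo

def encrypt_and_fragment(data: bytes, key: bytes) -> list[bytes]:
    """Encrypt locally first, then split the ciphertext into fragments."""
    ciphertext = Fernet(key).encrypt(data)
    return [ciphertext[i:i + FRAGMENT_SIZE]
            for i in range(0, len(ciphertext), FRAGMENT_SIZE)]

def distribute(fragments: list[bytes], nodes: list[dict]) -> None:
    """Place each fragment on REPLICAS distinct nodes (simple round-robin)."""
    for i, frag in enumerate(fragments):
        for r in range(REPLICAS):
            nodes[(i + r) % len(nodes)][i] = frag

def reassemble(nodes: list[dict], count: int, key: bytes) -> bytes:
    """Fetch one copy of every fragment, then decrypt with the private key."""
    ciphertext = b"".join(next(n[i] for n in nodes if i in n)
                          for i in range(count))
    return Fernet(key).decrypt(ciphertext)

key = Fernet.generate_key()          # stays with the user, never uploaded
nodes = [dict() for _ in range(5)]   # five independent storage nodes
frags = encrypt_and_fragment(b"quarterly payroll report", key)
distribute(frags, nodes)
assert reassemble(nodes, len(frags), key) == b"quarterly payroll report"
```

Even in this toy version, any single node operator sees only encrypted fragments; without the user’s key, the pieces are worthless to a thief.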
Changing the attitude towards data storage is one way to fight back against cyber attackers — but it’s only part of the puzzle. Technology keeps evolving, new attack methods constantly appear, and the struggle between cybercrime and cybersecurity is never-ending. The only universal rule is this: your data has value, and first and foremost, you are responsible for it. Protect it wisely, and you can stay in control.
About The Author
Victoria is a writer covering a variety of technology topics, including Web3.0, AI, and cryptocurrencies. Her extensive experience allows her to write insightful articles for a wide audience.