The AI Cold War Has Begun, But Are We Missing The Point?
In Brief
The emerging “AI Cold War,” highlighted by alleged distillation attacks on Anthropic’s Claude LLM, underscores growing risks to intellectual property, national security, and the ethical deployment of AI, while prompting debate over decentralized versus siloed AI development.
It was bound to happen. Even so, the initial waves of the emerging “AI Cold War” are disturbing, raising questions about the protections companies have in place, our ability to reliably shield IP from bad actors, and how far this cold war could escalate. More than anything, however, it should spark a conversation about how we value AI, what problems we want it to solve, and who should ultimately wield control over the world’s greatest potential game changer.
The Cold War in Action
So what exactly happened? According to various reports, Anthropic publicly disclosed that it had been the target of a sustained, invasive campaign by three Chinese AI labs—DeepSeek, Moonshot AI, and MiniMax. Rather than hacking into Anthropic’s data centers directly, these labs allegedly deployed hordes of automated bots that used Anthropic’s Claude LLM to extract information they could then use to improve their own models.
More specifically, the waves of attacks used a model-extraction technique known as “distillation.” In essence, an account asks an LLM like Claude many different, targeted questions, then uses the answers to better understand how the model works, insight the bad actor can then use to improve its own models. In this case, Anthropic stated that the attack was massive: over 16 million exchanges between more than 24,000 accounts and Claude, with the results recorded to build up key insights into how the model thinks and behaves. Using fraudulent accounts masked behind proxy services, the three firms’ apparent aim was to learn as much about Claude as possible, especially in the areas where the model is strongest. DeepSeek targeted reasoning capabilities, rubric-style grading tasks, and ways to sidestep censorship on politically and societally sensitive topics. Moonshot AI focused on agentic reasoning, coding, and computer vision. MiniMax targeted Claude’s agentic coding and AI agents’ use of tools.
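To make the mechanics concrete, here is a minimal, hypothetical sketch of the distillation loop described above. The `teacher_model` stand-in, the probe prompts, and the lookup-table “student” are all illustrative assumptions; a real attack would query a production LLM API at scale and run supervised fine-tuning on the harvested pairs.

```python
# Hypothetical sketch of distillation: a "student" is trained on the
# outputs of a "teacher" model. All names here are illustrative.

def teacher_model(prompt: str) -> str:
    """Stand-in for a proprietary LLM such as Claude (toy logic only)."""
    return f"answer to: {prompt.lower()}"

def collect_distillation_data(prompts):
    """Step 1: harvest (prompt, response) pairs by querying the teacher."""
    return [(p, teacher_model(p)) for p in prompts]

def train_student(pairs):
    """Step 2: 'train' a student on the harvested pairs. Here the student
    is just a lookup table; a real attacker would run supervised
    fine-tuning of a neural model on this dataset."""
    return dict(pairs)

# Targeted probes of the kind described in the reports
probes = ["Explain reasoning step by step", "Grade this essay with a rubric"]
dataset = collect_distillation_data(probes)
student = train_student(dataset)
```

The point of the sketch is that nothing here is “hacking” in the traditional sense: every step uses the model exactly as a paying customer would, which is why the activity is hard to distinguish from legitimate use.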
To be clear, Chinese companies are already prohibited from using Claude due to a variety of risks (distillation among them). The attackers used proxies that operated thousands of bot accounts intermixed with valid requests, making the bots difficult to catch and nearly impossible to shut down wholesale, since they had no obvious common source. The prohibition should have served as a strong first line of defense, but it was easily sidestepped. While Anthropic has announced countermeasures against this type of attack, their effectiveness is far from proven at this point.
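Anthropic’s actual countermeasures are not public, but one crude signal defenders can look for is many supposedly unrelated accounts issuing near-identical prompt templates. The sketch below is a toy illustration of that idea, with hypothetical names throughout; real systems would use far richer behavioral fingerprints.

```python
from collections import defaultdict

def template_fingerprint(prompt: str) -> str:
    """Crude fingerprint: strip digits so variants like 'case 1' and
    'case 2' collapse to the same template. Illustrative only."""
    return "".join(ch for ch in prompt if not ch.isdigit())

def flag_coordinated_accounts(logs, min_cluster=3):
    """Flag accounts whose prompts share a template fingerprint with
    many other accounts - one possible sign of a distillation botnet
    spread across proxies."""
    clusters = defaultdict(set)
    for account, prompt in logs:
        clusters[template_fingerprint(prompt)].add(account)
    flagged = set()
    for accounts in clusters.values():
        if len(accounts) >= min_cluster:
            flagged |= accounts
    return flagged

logs = [
    ("bot_a", "Probe reasoning case 1"),
    ("bot_b", "Probe reasoning case 2"),
    ("bot_c", "Probe reasoning case 3"),
    ("user_x", "Help me plan a trip"),
]
print(sorted(flag_coordinated_accounts(logs)))  # ['bot_a', 'bot_b', 'bot_c']
```

Even this toy version shows the defender’s dilemma: clustering only works after enough malicious traffic has accumulated, and attackers can vary their templates to stay below any fixed threshold.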
What Does It Mean for AI?
The irony is that these firms didn’t outright hack Claude. Instead, they created a large number of bots to use Claude in a highly focused manner. In a way, these bots played “20 Questions” with Claude, but instead of guessing the right answer, every answer Claude gave provided more and more insight into how it thinks.
Whether it involves breaking through a firewall or creating bots that interact with Claude as intended, the result is the same: intellectual theft. The offending firms were likely able to learn a great deal about how Claude, an incredibly complex AI model, operates. That insight could give a competitor enough to build its own model while skipping the enormous work of finding, organizing, and storing the data needed for training. Instead, these firms allegedly gained the IP without doing any of the real work.
But the risks go far beyond the theft of IP. A distilled model may behave like a properly trained one most of the time, but shortcut training leaves major holes in its reasoning and tool use. Built without the original’s safeguards, it could disclose sensitive information or hand users dangerous guidance, resulting in real harm. Furthermore, understanding how a model like Claude works creates national security risks: a bad actor could conceivably craft training data so that the model behaves in a predictable manner, allowing its results to be manipulated. Given that Anthropic and other AI giants are discussing deals with the Department of Defense, the implications are concerning.
While Anthropic says it has learned from this episode, the Cold War for AI has begun and can only escalate. Bad actors will find ways around each countermeasure, Anthropic will respond, and the arms race will continue. Ultimately, this not only creates vulnerabilities for the AI companies investing in training models correctly, but also discourages the major investment that work requires, if a rival can simply swoop in and reverse-engineer the intellectual property without putting in the time or the money.
Can It Be Stopped? Should It?
Given the nature of these escalations, this will likely become the reality of global AI competition. That said, there is a case to be made for sidestepping the AI cold war altogether. The AI giants of the world are focused on protecting their work and closely guarding every piece of intellectual property. This naturally concentrates enormous power in the hands of a few, creating a massive imbalance in global power dynamics.
A growing alliance of AI players, the ASI Alliance, has suggested that instead of building indefensible silos of IP, the world could build even stronger AI through open, decentralized development. The ASI Alliance pushes decentralized AI, building on shared data, analysis, and tools (such as developer-first LLMs and agent frameworks), on the premise that AI succeeds best when its capabilities are available to all. This flies in the face of IP protectionism, but it also removes the incentive for companies to find ways to steal models, data, and innovation. Even an industry like AI, if decentralized and open to all, would still offer a fertile environment for innovation and for investments that can actually be protected.
Final Thoughts
Before the AI Cold War escalates further, we should consider not how to build ever-higher walls around our own IP, but how to avoid the need for siloed innovation in the first place. Decentralized AI could well become the future as isolated AI IP faces the many threats ahead, creating a stable base on which AI can be developed and shared, and on which revenue streams can be built in a fundamentally different manner.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Gregory, a digital nomad hailing from Poland, is not only a financial analyst but also a valuable contributor to various online magazines. With a wealth of experience in the financial industry, his insights and expertise have earned him recognition in numerous publications. Utilising his spare time effectively, Gregory is currently dedicated to writing a book about cryptocurrency and blockchain.