August 08, 2025

O.XYZ’s Ahmad Shadid Warns National Security Priorities May Undermine Fairness And Transparency In AI

In Brief

Ahmad Shadid says political pressure led to the withholding of a NIST report exposing critical AI vulnerabilities, underscoring the urgent need for transparent, independent, and open research to advance AI safety and fairness.

Before the inauguration of the current United States president, Donald Trump, the National Institute of Standards and Technology (NIST) completed a report on the safety of advanced AI models. 

In October last year, a computer security conference in Arlington, Virginia, brought together a group of AI researchers for a pioneering “red teaming” exercise aimed at rigorously testing a state-of-the-art language model and other AI systems. Over the span of two days, these teams discovered 139 new methods to cause the systems to malfunction, such as producing false information or exposing sensitive data. Crucially, their findings also revealed weaknesses in a recent US government standard intended to guide companies in evaluating AI system safety.

Intended to help organizations assess their AI systems, the report was among several NIST-authored AI documents withheld from publication due to potential conflicts with the policy direction of the incoming administration.

In an interview with Mpost, Ahmad Shadid, CEO of O.XYZ, an AI-led decentralized ecosystem, discussed the dangers of political pressure and secrecy in AI safety research.

Who Is Authorized To Release NIST’s Red Team Findings?

According to Ahmad Shadid, political pressure can shape what information reaches the public, and the withheld NIST report is a clear example of this. He emphasized the need for independent researchers, universities, and private laboratories that are not constrained by such pressures.

“The challenge is that they don’t always have the same access to resources or data. That’s why we need — or better said, everyone needs — a global, open database of AI vulnerabilities that anyone can contribute to and learn from,” Ahmad Shadid told Mpost. “There should be no government or corporate filter for such research,” he added.
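What such a global, open vulnerability database might look like is easy to picture in miniature. The sketch below is purely illustrative; the record fields and the `OpenVulnDB` helper are assumptions, not an existing schema or service. The point it demonstrates is the one Shadid makes: each entry pairs a failure technique with the affected system and an impact category, and anyone can file or query entries without a government or corporate filter.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Vulnerability:
    # Illustrative fields only; no standard schema exists for such a registry.
    system: str        # model or product affected, e.g. "example-llm-v2"
    technique: str     # how the failure was triggered
    impact: str        # e.g. "misinformation", "data-leak"
    reporter: str      # open contribution: anyone can file an entry

class OpenVulnDB:
    """A minimal in-memory stand-in for a shared, filterless registry."""
    def __init__(self):
        self.entries: list[Vulnerability] = []

    def report(self, vuln: Vulnerability) -> None:
        # No approval step: entries are appended as submitted.
        self.entries.append(vuln)

    def by_impact(self, impact: str) -> list[dict]:
        # Anyone can query findings by the kind of harm observed.
        return [asdict(v) for v in self.entries if v.impact == impact]

db = OpenVulnDB()
db.report(Vulnerability("example-llm-v2", "prompt injection", "data-leak", "labA"))
db.report(Vulnerability("example-llm-v2", "jailbreak suffix", "misinformation", "labB"))
print(json.dumps(db.by_impact("data-leak"), indent=2))
```

A real registry would of course need moderation against spam and coordinated-disclosure timing, but the design choice the sketch highlights is that publication, not permission, is the default.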

Concealing AI Vulnerabilities Hampers Safety Progress And Empowers Malicious Actors, Warns Ahmad Shadid

He further explained the risks associated with concealing vulnerabilities from the public and how such actions can hinder progress in AI safety.

“Hiding key educational research gives bad actors a head start while keeping the good guys in the dark,” Ahmad Shadid said.

Companies, researchers, and startups cannot fix issues they are unaware of; withholding findings creates hidden obstacles for AI firms and leaves flaws and bugs in AI models unaddressed.

According to Ahmad Shadid, open-source culture has been fundamental to the software revolution, enabling continuous development and making programs more robust through the collective identification of vulnerabilities. In the field of AI, however, this openness has largely receded; Meta, for example, is reportedly considering making its development process closed-source.

“What the NIST hid from the public due to political pressure could’ve been the exact knowledge the industry needed to address some of the risks around LLMs or hallucinations,” Ahmad Shadid said to Mpost. “Who knows, bad actors might be busy taking advantage of the ‘139 new ways to break AI systems,’ which were included in the report,” he added.

Governments Tend To Prioritize National Security Over Fairness And Transparency In AI, Undermining Public Trust 

The suppression of safety research reflects a broader issue in which governments prioritize national security over concerns about fairness, misinformation, and bias.

Ahmad Shadid emphasized that any technology used by the general public must be transparent and fair. He highlighted the need for transparency rather than secrecy, noting that the confidentiality surrounding AI underscores its geopolitical significance.

Major economies such as the US and China are investing heavily, including billions in subsidies and aggressive talent acquisition, to gain an advantage in the AI race.

“When governments put the term ‘national security’ above fairness, misinformation, and bias—for a technology like AI that’s in 378 million users’ pockets—they’re really saying those issues can wait. This can only lead to building an AI ecosystem that protects power, not people,” he concluded.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.
