January 20, 2026

From Risk To Responsibility: Ahmad Shadid On Building Secure AI-Assisted Development Workflows

In Brief

“Vibe coding” is proliferating, but experts warn that conventional AI coding tools pose security and confidentiality risks for enterprise code, highlighting the need for encrypted, hardware-backed “confidential AI” solutions.


In recent months, “vibe coding”—an AI-first workflow where developers leverage large language models (LLMs) and agentic tools to generate and refine software—has gained traction. At the same time, multiple industry reports have highlighted that while AI-generated code offers speed and convenience, it often introduces serious security and supply chain risks.

Veracode research found that nearly half of the code produced by LLMs contains critical vulnerabilities, with AI models frequently producing insecure implementations and overlooking issues such as injection flaws or weak authentication unless explicitly prompted. A recent academic study also noted that modular AI “skills” in agent-based systems can carry vulnerabilities that may enable privilege escalation or expose software supply chains.
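To make the risk concrete, here is a hedged illustration (not drawn from the cited studies) of the kind of injection flaw those reports describe: unless explicitly prompted otherwise, an assistant will often suggest a string-formatted query like the first function below instead of the parameterized version.

import sqlite3

def find_user_insecure(conn, username):
    # Pattern assistants often suggest: user input is interpolated
    # directly into the SQL string, so "x' OR '1'='1" matches every row.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_parameterized(conn, username):
    # Safer variant: the driver binds the value as data, never as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])
payload = "x' OR '1'='1"
print(find_user_insecure(conn, payload))       # returns both rows
print(find_user_parameterized(conn, payload))  # returns []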

Beyond insecure outputs, there is an often-overlooked systemic confidentiality risk. Current AI coding assistants process sensitive internal code and intellectual property within shared cloud environments, where providers or operators may access the data during inference. This raises concerns about exposing proprietary production code at scale, which is a considerable issue for individual developers and large enterprises.

In an exclusive interview with MPost, Ahmad Shadid, founder of OLLM—the confidential AI infrastructure initiative—explained why traditional AI coding tools are inherently risky for enterprise codebases and how confidential AI, which keeps data encrypted even during model processing, provides a viable path for secure and responsible vibe coding in real-world software development.

What happens to sensitive enterprise code in AI coding assistants, and why is it risky?

Most current coding tools can only protect data up to a point. Enterprise code is typically encrypted in transit to the provider’s servers, usually over TLS. But once the code arrives on those servers, it is decrypted in memory so the model can read and process it. At that point, sensitive material such as proprietary logic, internal APIs, and security details sits in plaintext on the provider’s systems. And that is where the risk lies.

While decrypted, the code may pass through internal logs, temporary memory, or debugging systems that customers can rarely see or audit. Even if a provider guarantees that no data is saved, the exposure still happens during processing, and that short window is enough to create blind spots. For enterprises, it means sensitive code can be exposed to misuse outside the owner’s control.
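As a rough sketch of why this matters, consider what a typical assistant request looks like from the client side; the endpoint and payload shape below are invented for illustration and do not describe any specific provider’s API.

import json
import urllib.request

API_URL = "https://llm.example.com/v1/complete"  # hypothetical endpoint

def ask_assistant(source_path, question):
    # The proprietary source file is embedded verbatim in the request body.
    with open(source_path, "r", encoding="utf-8") as f:
        code = f.read()
    body = json.dumps({"prompt": f"{question}\n\n{code}"}).encode("utf-8")
    req = urllib.request.Request(API_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    # TLS protects these bytes on the wire, but the provider must decrypt
    # the payload in memory to run inference -- that is the exposure window.
    return urllib.request.urlopen(req).read()

Everything inside that prompt, including the full source file, is plaintext to whatever sits behind the endpoint.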

Why do you believe mainstream AI coding tools are fundamentally unsafe for enterprise development? 

Most popular AI coding tools aren’t built around enterprise risk models; they optimize for speed and convenience. They are also trained largely on public repositories full of known vulnerabilities, outdated patterns, and insecure defaults, so the code they produce typically inherits those flaws unless it undergoes thorough review and correction.

More importantly, these tools operate without formal governance structures, so they do not enforce internal security standards early in development, and this creates a disconnect between how software is written and how it is later audited or protected. Over time, teams get used to working with outputs they barely understand while security gaps quietly accumulate. That combination of limited transparency and unexamined technical risk makes it very hard to sanction such tools as standard practice in safety-first domains.

If providers don’t store or train on customer code, why isn’t that enough, and what technical guarantees are needed?

A policy assurance is quite different from a technical guarantee. User data is still decrypted and processed during computation, even when providers promise there will be no retention. Temporary logs and debugging processes can still create leakage paths that policies can neither prevent nor prove absent. From a risk perspective, trust without verification isn’t enough.

Businesses should instead focus on guarantees that can be enforced at the infrastructure level. That means confidential computing environments where code is encrypted not only in transit but also while in use. A good example is a hardware-backed trusted execution environment, which creates an encrypted enclave that even the infrastructure operator cannot look into. The model processes data inside this secure environment, and remote attestation allows enterprises to cryptographically verify that these protections are actually active.

Such mechanisms should be a baseline requirement, because they turn privacy into a measurable property and not just a promise.
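A minimal sketch of what that verification could look like from the enterprise side follows; the report format and measurement value are placeholders, since real attestation flows (Intel SGX, AMD SEV-SNP, NVIDIA confidential computing) rely on vendor-specific signed reports not shown here.

# Placeholder for the hash of the approved, audited enclave image.
EXPECTED_MEASUREMENT = "c0ffee-placeholder-measurement"

def enclave_verified(attestation_report: dict) -> bool:
    # A real check would also validate the vendor signature chain and
    # report freshness; only the measurement comparison is shown here.
    return attestation_report.get("measurement") == EXPECTED_MEASUREMENT

def release_code(attestation_report: dict, source_code: str) -> None:
    if not enclave_verified(attestation_report):
        raise RuntimeError("Enclave not verified; refusing to send code")
    # Only after verification would the code be encrypted to a key bound
    # to this specific enclave (key exchange omitted from this sketch).
    print(f"Releasing {len(source_code)} bytes to the verified enclave")

release_code({"measurement": "c0ffee-placeholder-measurement"}, "def handler(): ...")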

Does running AI on-prem or in a private cloud fully resolve confidentiality risks?

Running AI in a private cloud reduces some risks, but it does not solve the problem. Data is still visible and vulnerable while it is being processed unless extra protections are put in place, so internal access, misconfiguration, and lateral movement inside the network can still lead to leaks.

Model behavior is another concern. Private systems may still log inputs or store data for testing, and without strong isolation those risks remain. Business teams still need encrypted processing, hardware-based access control, and clear limits on data use to protect data safely. Otherwise, moving in-house merely relocates the risk rather than eliminating it.

What does “confidential AI” actually mean for coding tools?

Confidential AI refers to systems that protect data even while it is being computed on. Data is processed inside an isolated enclave, such as a hardware-based trusted execution environment, where it can exist in clear text so the model can work on it. Hardware-enforced isolation keeps that data inaccessible to the platform operator, the host operating system, and any external party, and provides cryptographically verifiable privacy without reducing the AI’s functional capability.

This completely changes the trust model for coding platforms, as it allows developers to use AI without sending proprietary logic into shared or public systems. It also sharpens accountability, because the access boundaries are enforced by hardware rather than policy. Some technologies go further by combining encrypted computation with historical tracking, so outputs can be verified without revealing inputs.

Although the term sounds abstract, the implication is simple: AI assistance no longer requires businesses to sacrifice confidentiality for effectiveness.
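One way to read the historical-tracking idea mentioned above is a commitment-style log, sketched below under this article’s own assumptions rather than any particular product’s design: the log stores only hashes, so an output can later be tied to a prompt without ever revealing the prompt itself.

import hashlib
import time

def log_entry(prompt: str, output: str) -> dict:
    # Only commitments (hashes) are recorded, never the code itself.
    return {
        "prompt_commitment": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_hash": hashlib.sha256(output.encode()).hexdigest(),
        "timestamp": time.time(),
    }

def matches(prompt: str, entry: dict) -> bool:
    # Anyone holding the original prompt can prove it produced this entry.
    return hashlib.sha256(prompt.encode()).hexdigest() == entry["prompt_commitment"]

entry = log_entry("proprietary prompt with source code", "generated diff")
print(matches("proprietary prompt with source code", entry))  # True
print(matches("some other prompt", entry))                    # False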

What are the trade-offs or limitations of using confidential AI at present?

The biggest trade-off today is speed. AI systems isolated in trusted execution environments can run somewhat slower than unprotected deployments, simply as a result of hardware-level memory encryption and attestation checks. The good news is that newer hardware is closing this gap over time.

More setup work and planning are also required, since these systems must operate within tighter constraints. Cost must be considered as well. Confidential AI often needs special hardware, such as NVIDIA H100 and H200 chips, and supporting tools, which can push up initial expenses. But those costs must be weighed against the potential damage from code leaks or regulatory non-compliance.

Confidential AI is not yet a universal requirement, so teams should apply it where privacy and accountability matter most. Many of these limitations will ease as the technology matures.

Do you expect regulators or standards to soon require AI tools to keep all data encrypted during processing?

Regulatory frameworks such as the EU AI Act and the U.S. NIST AI Risk Management Framework already place strong emphasis on risk management, data protection, and accountability for high-impact AI systems. As these frameworks develop, systems that expose sensitive data by design are becoming harder to justify under established governance expectations.

Standards groups are also laying the foundations by setting clearer rules for how AI should handle data during use. These rules may roll out at different speeds across regions, but companies should expect growing pressure on systems that process data in plain text. Seen this way, confidential AI is less about guessing the future and more about aligning with where regulation is already heading.

What does “responsible vibe coding” look like right now for developers and IT leaders?

Responsible vibe coding simply means staying accountable for every line of code: reviewing AI suggestions, validating their security implications, and considering the edge cases in every program. For organizations, that requires clearly defined policies on which tools are approved and safe pathways for sensitive code, while ensuring teams understand both the strengths and the limits of AI assistance.
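In practice, that policy definition can be as simple as a gate in front of the assistant; the tool names and path patterns below are invented for the example, not any real organization’s configuration.

APPROVED_TOOLS = {"internal-confidential-assistant"}        # hypothetical
SENSITIVE_PATTERNS = ("secrets/", "auth/", ".env", "keys/")  # hypothetical

def may_send(tool: str, file_path: str) -> bool:
    # Only approved tools may be used, and never on sensitive paths.
    if tool not in APPROVED_TOOLS:
        return False
    return not any(pattern in file_path for pattern in SENSITIVE_PATTERNS)

print(may_send("internal-confidential-assistant", "src/app/views.py"))     # True
print(may_send("internal-confidential-assistant", "auth/token_store.py"))  # False
print(may_send("public-cloud-assistant", "src/app/views.py"))              # False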

For regulators and industry leaders, the task is to design clear rules so teams can easily identify which tools are allowed and where they can be used. Sensitive data should only enter systems that meet privacy and compliance requirements, and operators and users should be trained to understand both the power of AI and its limitations. AI saves effort and time when used well, but it carries costly risks when used carelessly.

Looking ahead, how do you envision the evolution of AI coding assistants with respect to security?

AI coding tools will evolve from merely making recommendations to verifying code as it is written, checking it against organizational rules, authorized libraries, and security constraints in real time.
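A hedged sketch of one such real-time check, assuming a simple allow-list rather than any shipping product’s rule engine: parse a generated snippet and flag any import that is not approved.

import ast

APPROVED_LIBRARIES = {"json", "hashlib", "logging"}  # placeholder allow-list

def unapproved_imports(source: str) -> list:
    # Walk the parsed syntax tree and collect top-level module names.
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found += [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.append(node.module.split(".")[0])
    return [name for name in found if name not in APPROVED_LIBRARIES]

snippet = "import pickle\nfrom json import loads\n"
print(unapproved_imports(snippet))  # ['pickle']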

Security will also be built more deeply into how these tools run, with encrypted execution and clear decision-making records as standard features. Over time, this will transform AI assistants from sources of risk into supports for safe development. The best systems will be the ones that combine speed with control, and trust will be determined by how the tools work, not by their builders’ promises.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Alisa Davidson, a dedicated journalist at MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.
