January 18, 2024

Protect AI Reports Critical Vulnerabilities in Existing AI and ML Systems, Urges Securing Open Source Projects

In Brief

Protect AI's report identifies vulnerabilities in tools used across the AI/ML supply chain, many of them open source, that carry unique security threats.


Tools used within the AI/ML supply chain, many of them open source, contain vulnerabilities that carry unique security threats, including risks of unauthenticated remote code execution and local file inclusion, according to a report from Protect AI, a cybersecurity company focused on AI and ML systems.

These flaws can have implications ranging from server takeovers to the theft of sensitive information, the report added.

The report further emphasizes the necessity for a proactive approach in identifying and addressing these vulnerabilities to safeguard data, models, and credentials.

At the forefront of Protect AI’s efforts is huntr, the world’s first AI/ML bug bounty program, engaging a community of over 13,000 members actively hunting for vulnerabilities. This initiative aims to provide crucial intelligence on potential threats and facilitate a swift response to secure AI systems.

In August 2023, the company announced the launch of huntr – an AI/ML bug bounty platform focused exclusively on protecting AI/ML open-source software (OSS), foundational models, and ML systems. The platform was launched following Protect AI's acquisition of huntr.dev.

“With over 15,000 members now, Protect AI’s huntr is the largest and most concentrated set of threat researchers and hackers focused exclusively on AI/ML security,” said Daryan Dehghanpisheh, president and co-founder of Protect AI.

“Huntr’s operating model is focused on simplicity, transparency, and rewards. The automated features and Protect AI’s triage expertise in contextualizing threats for maintainers help all contributors of open-source software in AI to build more secure software packages. This ultimately benefits all users, as AI systems become more secure and resilient,” added Dehghanpisheh.

Report Identifies Critical Vulnerabilities

Highlighting the findings of the huntr community over the past month, the report identifies three critical vulnerabilities: MLflow Remote Code Execution, MLflow Arbitrary File Overwrite, and MLflow Local File Include.

  • MLflow Remote Code Execution: The flaw can result in server takeover and loss of sensitive information. MLflow, a tool for storing and tracking models, had a remote code execution vulnerability in the code used to pull down remote data storage. Users could be tricked into using a malicious remote data source that could execute commands on their behalf.
  • MLflow Arbitrary File Overwrite: The flaw has the potential for system takeover, denial of service, and destruction of data. A bypass was found in an MLflow function that validates whether a file path is safe, allowing a malicious user to remotely overwrite files on the MLflow server. With additional steps, such as overwriting the SSH keys on the system or editing the .bashrc file to run arbitrary commands upon the next user login, this can lead to remote code execution (see the sketch after this list).
  • MLflow Local File Include: The flaw results in the loss of sensitive information and the potential for system takeover. MLflow, when hosted on specific operating systems, can be manipulated to display the contents of sensitive files, posing a potential avenue for system takeover if essential credentials are stored on the server.
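The arbitrary file overwrite finding centers on a bypass of a path-safety check. As a rough illustration of the class of safeguard involved (this is not MLflow's actual code; the helper name and paths are hypothetical), the Python sketch below rejects any path that, once resolved, escapes the server's artifact root:

    import os

    # Hypothetical helper illustrating the kind of validation the reported
    # bypass circumvented: reject any artifact path that, after resolving
    # symlinks and ".." segments, falls outside the allowed artifact root.
    def is_safe_artifact_path(artifact_root: str, relative_path: str) -> bool:
        root = os.path.realpath(artifact_root)
        # Resolve the candidate path before comparing it against the root.
        candidate = os.path.realpath(os.path.join(root, relative_path))
        return os.path.commonpath([root, candidate]) == root

    if __name__ == "__main__":
        root = "/srv/mlflow/artifacts"  # example path, not an MLflow default
        print(is_safe_artifact_path(root, "runs/1/model/model.pkl"))  # True
        print(is_safe_artifact_path(root, "../../root/.ssh/authorized_keys"))  # False

A check of this kind only illustrates the validation logic at issue; the report's guidance remains to update affected MLflow deployments and follow Protect AI's mitigation recommendations.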

Protect AI’s co-founder Daryan Dehghanpisheh told Metaverse Post, “Urgency in addressing AI/ML system vulnerabilities hinges on their business impact. With AI/ML’s critical role in contemporary business and the severe nature of potential exploits, most organizations will find this urgency high. The primary challenge in securing AI/ML systems lies in comprehending risks across the MLOps lifecycle.”

“To mitigate these risks, companies must conduct threat modeling for their AI and ML systems, identify exposure windows, and implement suitable controls within an integrated and comprehensive MLSecOps program,” he added.

In its report, Protect AI emphasizes the urgency of addressing these vulnerabilities promptly and provides a list of recommendations for users with affected projects in production, underlining the importance of a proactive stance in mitigating potential risks. Users facing challenges in mitigating these vulnerabilities are encouraged to reach out to Protect AI’s community.

As AI technology advances, Protect AI says it is working to secure the intricate web of AI/ML systems so that the benefits of artificial intelligence can be harnessed responsibly and securely.


About The Author

Kumar Gandharv is an experienced tech journalist specializing in the dynamic intersections of AI/ML, marketing technology, and emerging fields such as crypto, blockchain, and NFTs. With over three years of experience in the industry, he has established a proven track record in crafting compelling narratives, conducting insightful interviews, and delivering comprehensive insights. His expertise lies in producing high-impact content, including articles, reports, and research publications for prominent industry platforms. With a unique skill set that combines technical knowledge and storytelling, he excels at communicating complex technological concepts to diverse audiences in a clear and engaging manner.
