In Brief
NVIDIA is partnering with leading cloud service providers to offer AI as a service.
Customers will be able to access NVIDIA's AI supercomputer, acceleration libraries, software, or pretrained generative AI models as a cloud service.
The NVIDIA DGX™ AI supercomputer is accessible through NVIDIA DGX Cloud, which is already offered on Oracle Cloud Infrastructure, with Microsoft Azure and Google Cloud Platform coming soon.

Artificial intelligence computing company NVIDIA announced on Wednesday a new initiative to offer artificial intelligence as a service (AIaaS) through partnerships with major cloud service providers.
This new service will provide enterprise customers with access to NVIDIA’s cutting-edge AI platform, which includes an AI supercomputer, acceleration libraries, software, and pretrained generative AI models.
It will also allow customers to engage with each layer of NVIDIA AI directly from a browser. The NVIDIA DGX AI supercomputer will be accessible through NVIDIA DGX Cloud, which is already offered on Oracle Cloud Infrastructure, with Microsoft Azure, Google Cloud Platform, and others expected to follow soon.
"AI is at an inflection point, setting up for broad adoption reaching into every industry,"
said Jensen Huang, founder and CEO of NVIDIA, in a press release.
"From startups to major enterprises, we are seeing accelerated interest in the versatility and capabilities of generative AI. We are set to help customers take advantage of breakthroughs in generative AI and large language models. Our new AI supercomputer, with H100 and its Transformer Engine and Quantum-2 networking fabric, is in full production," he added.
Customers using NVIDIA’s AI as a service will have access to two layers of the NVIDIA AI platform. The first is the AI platform software layer, where they can access NVIDIA AI Enterprise to train and deploy large language models or other AI workloads.
The second layer is the AI-model-as-a-service layer, where customers can use NVIDIA’s NeMo and BioNeMo customizable AI models to build proprietary generative AI models and services for their businesses.
In recent years, NVIDIA has become increasingly focused on developing specialized AI chips and services to meet the growing demand for AI applications, especially with generative AI services like ChatGPT opening a new market for AI chips.
One of NVIDIA's most notable AI hardware innovations is the Tensor Core, featured in its data center GPUs such as the A100 and H100, which is designed specifically for deep learning applications. Tensor Cores perform massive numbers of matrix operations in parallel, which is essential for training deep neural networks.
Another important AI chip developed by NVIDIA is the Jetson family of embedded systems, which are designed for edge computing applications. Jetson devices are small, low-power computers that can be integrated into robots, drones, and other devices to enable AI-powered capabilities like object detection and recognition, autonomous navigation, and more.
In addition to its hardware offerings, NVIDIA also provides a range of AI services, including the NVIDIA Deep Learning Institute (DLI), which provides training and certification for developers, researchers, and data scientists looking to expand their AI skills.
NVIDIA also offers several cloud-based AI services, such as the NVIDIA GPU Cloud (NGC), which provides access to pre-built deep learning models and software tools for developers.