Contextual AI Partners with Google Cloud to Deliver Generative AI for Enterprises
Contextual AI has partnered with Google Cloud to scale and train its Large Language Models (LLMs) for the enterprise.
Using Google Cloud’s infrastructure, Contextual Language Models (CLMs) will produce responses trained on the data and institutional knowledge of enterprises.
Contextual AI will use Google Cloud’s GPU VMs to build and train its LLMs.
Contextual AI, a company building large language models (LLMs) for enterprises, today announced a partnership with Google Cloud, naming it the company's preferred cloud provider for business expansion, operations, scaling, and the training of its LLMs.
Under this partnership, Contextual AI will take advantage of Google Cloud’s GPU Virtual Machines (VMs) for constructing and training its models. The cloud provider offers A3 VMs and A2 VMs powered by the NVIDIA H100 and A100 Tensor Core GPUs, respectively.
Launched out of stealth following a $20 million seed raise in June, the company also plans to leverage Google Cloud's specialized AI accelerators, Tensor Processing Units (TPUs), to build its next generation of LLMs.
“Building a large language model to solve some of the most challenging enterprise use cases requires advanced performance and global infrastructure,” said Douwe Kiela, chief executive officer at Contextual AI. “As an AI-first company, Google has unparalleled experience operating AI-optimized infrastructure at high performance and at global scale which they are able to pass along to us as a Cloud customer.”
The company plans to build contextual language models (CLMs) on the Google Cloud platform, customized to produce responses aligned with each enterprise's distinct data and institutional knowledge.
Contextual AI claims that this approach not only bolsters the precision and effectiveness of AI-powered interactions but also empowers users to trace answers back to their source documents.
For instance, customer service representatives could use Contextual AI's CLMs to deliver pinpoint responses to user inquiries, drawing solely from authorized data sources such as the user's account history, company policies, and prior tickets on similar questions.
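The idea of answering only from authorized sources, with each answer traceable to the document it came from, can be sketched in a few lines. This is purely illustrative: the names (`Document`, `answer_with_source`) and the naive keyword-overlap retrieval are placeholders, not Contextual AI's actual API or method.

```python
# Hypothetical sketch of grounded question answering with source
# attribution: answers come only from an authorized document set,
# and each answer carries the ID of the document it was drawn from.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str   # e.g. a ticket number or policy name
    text: str

def answer_with_source(question: str, authorized_docs: list[Document]):
    """Return the best-matching snippet plus its source document,
    using naive keyword overlap as a stand-in for real retrieval."""
    q_terms = set(question.lower().split())
    best, best_score = None, 0
    for doc in authorized_docs:
        score = len(q_terms & set(doc.text.lower().split()))
        if score > best_score:
            best, best_score = doc, score
    if best is None:
        return None  # refuse rather than answer from outside the set
    return {"answer": best.text, "source": best.doc_id}

docs = [
    Document("ticket-1042", "Refunds are processed within 5 business days."),
    Document("policy-returns", "Items may be returned within 30 days of purchase."),
]
print(answer_with_source("How long do refunds take?", docs))
```

A production system would replace the keyword overlap with semantic retrieval over embeddings, but the contract is the same: every response is tied to an identifiable, authorized source.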
Likewise, financial advisors could automate reporting processes, delivering personalized recommendations based on a client's portfolio and history. The company said this will draw on proprietary market insights and other confidential data assets.
The Race to Deliver Generative AI for Enterprises
As AI companies race to develop generative AI to help organizations streamline business processes, cloud providers are also competing to provide infrastructure for these companies to build and train their models on.
Just last week, IBM disclosed a partnership with Microsoft aimed at accelerating the deployment of generative AI solutions to their shared enterprise clientele. In June, Oracle, renowned for its cloud applications and platform, joined forces with enterprise AI platform Cohere to offer worldwide organizations access to generative AI services.
Recognizing growing interest from organizations in utilizing generative AI for business purposes, Amazon Web Services (AWS) also unveiled its plans to launch the AWS Generative AI Innovation Center. This center is designed to assist customers in constructing and launching generative AI services.
As AI innovation converges with cloud capabilities, these initiatives mark a step forward for enterprise AI, demonstrating the potential of AI-driven solutions and paving the way toward greater business efficiency.