CoreWeave Raises $221M to Scale Its Cloud Infrastructure for Generative AI and LLMs
Magnetar Capital led the round, with participation from NVIDIA, former GitHub CEO Nat Friedman and former Apple executive Daniel Gross.
The funding will be used to further expand CoreWeave’s specialized cloud infrastructure for AI and machine learning.
CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, has secured $221 million in a Series B round led by alternative asset manager Magnetar Capital. NVIDIA, along with former GitHub CEO Nat Friedman and former Apple executive Daniel Gross, participated in the round.
Before pivoting to cloud infrastructure, CoreWeave operated as an Ethereum mining company. According to a press release, the latest funding will enable the company to continue expanding its specialized cloud infrastructure for compute-intensive tasks such as artificial intelligence, machine learning, visual effects and rendering, batch processing, and pixel streaming, meeting the growing demand driven by generative AI technology.
The new funds will also facilitate the expansion of CoreWeave’s data center operations within the United States, including the opening of two new centers this year. This will bring the total number of North American-based data centers operated by CoreWeave to five.
“With the seemingly limitless boundaries of AI applications and technologies, the demand for compute-intensive hardware and infrastructure is higher than it’s ever been,” said Ernie Rogers, Magnetar’s chief operating officer. “CoreWeave’s innovative, agile and customizable product offering is well-situated to service this demand, and the company is consequently experiencing explosive growth to support it.”
NVIDIA’s participation in the funding round also marks an expansion of its collaboration with CoreWeave. At the NVIDIA GTC conference in March, NVIDIA unveiled its latest data center GPU, the NVIDIA H100 Tensor Core GPU, along with the NVIDIA HGX H100 platform.
CoreWeave announced at the same conference that it had launched its HGX H100 clusters, which are already in use by clients such as Anlatan, the creators of NovelAI. In addition to the HGX H100, CoreWeave offers more than 11 NVIDIA GPU SKUs, interconnected using the NVIDIA Quantum InfiniBand in-network computing platform. These resources are available to clients through reserved instance contracts or on demand.
Commenting on today’s news, Manuvir Das, Vice President of Enterprise Computing at NVIDIA, said: “AI has reached an inflection point, and we’re seeing accelerated interest in AI computing infrastructure from startups to major enterprises. CoreWeave’s strategy of delivering accelerated computing infrastructure for generative AI, large language models, and AI factories will help bring the highest-performance, most energy-efficient computing platform to every industry.”