‘Tech Industry Will Move Towards Reduced Reliance on GPUs in 2024,’ claims Greg Osuri, CEO of Akash Network
In Brief
Greg Osuri, CEO of Overclock Labs and Akash Network, asserts that embracing less powerful GPUs will reshape the tech landscape in 2024 and unlock ripple effects across the industry.
As major players in the tech industry continue to dominate the market with powerful GPUs, a notable shift towards less powerful chips is anticipated in 2024. The move, driven by the necessity for alternatives, is expected to reshape the landscape, enabling smaller companies and startups to contribute significantly to the ongoing AI boom.
The demand for high-performance compute, especially for training large language models, has surpassed the capabilities of traditional providers such as AWS, Microsoft Azure and Google Cloud. Smaller enterprises find it challenging to afford and reserve these high-performance resources, leading to a growing interest in distributed and permissionless networks.
In a conversation with Metaverse Post, Greg Osuri, CEO of Overclock Labs and Akash Network, shed light on the driving factors behind this transformative trend and its potential implications.
Decentralized cloud platform Akash Network recently announced a significant upgrade with Mainnet 8, which introduced key enhancements aimed at simplifying GPU access and improving the deployment experience.
Greg Osuri identifies optimizing dataset requirements as a key element in embracing less powerful GPUs.
Low-Rank Adaptation (LoRA) emerges as a crucial technique in this shift. Rather than updating every weight in a model, LoRA freezes the original pre-trained weights and trains only small low-rank update matrices, sharply reducing the number of trainable parameters while preserving the model's pre-trained knowledge.
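The parameter savings LoRA delivers can be seen in a minimal sketch. The layer sizes and rank below are hypothetical illustrations, not taken from any specific model: the pre-trained weight matrix stays frozen, while only the two small low-rank factors would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 1024, 1024, 8  # hypothetical layer sizes and LoRA rank

# Frozen pre-trained weight matrix: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in)) * 0.02

# Trainable low-rank factors. B starts at zero, so the adapted layer
# initially behaves exactly like the frozen pre-trained layer.
A = rng.standard_normal((rank, d_in)) * 0.02
B = np.zeros((d_out, rank))

def lora_forward(x):
    """y = x W^T + x (B A)^T — only A and B receive gradient updates."""
    return x @ W.T + x @ (B @ A).T

x = rng.standard_normal((4, d_in))
full_params = W.size           # parameters a full fine-tune would touch
lora_params = A.size + B.size  # parameters LoRA actually trains
print(f"trainable: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.1f}%)")
```

With these toy dimensions, LoRA trains under 2% of the layer's parameters, which is why it makes fine-tuning feasible on less powerful hardware.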
“Those seeking alternatives amid the GPU squeeze will make progress by using less intensive data set requirements, deploying more efficient techniques like Low-Rank Adaptation (LoRA) to train language models, and distributing workloads in a parallel manner,” Akash Network’s Greg Osuri told Metaverse Post. “This involves deploying clusters of lower-tier chips to accomplish tasks equivalent to a smaller number of A100s and H100s. A new era of cloud computing will emerge, one in which power is decentralized and not in the hands of just a few.”
He says that parallelizing workloads through clusters of lesser chips is another strategy. Compared to traditional GPU utilization, clusters offer better scalability, cost-effectiveness and distributed workload capabilities. Challenges, however, include data transfer latency, synchronization issues, scalability limits and communication costs.
“The larger the data, the more expensive and difficult communication costs are between non-collocated machines, so more efficient methods/techniques will probably be needed to overcome expensive and challenging communication barriers. A combination of hardware and software is needed for successful implementation,” said Greg Osuri.
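The data-parallel pattern Osuri describes, and the communication step that makes it expensive, can be sketched in a toy simulation. The worker count, problem size, and linear-regression task below are illustrative assumptions, not Akash's actual stack: each "lesser" worker computes a gradient on its local data shard, and the gradients are then averaged, standing in for the all-reduce communication between machines.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear regression; four hypothetical workers each hold a data shard.
n_workers, shard_size, dim = 4, 256, 16
true_w = rng.standard_normal(dim)
shards = []
for _ in range(n_workers):
    X = rng.standard_normal((shard_size, dim))
    shards.append((X, X @ true_w))

def local_gradient(w, X, y):
    """Mean-squared-error gradient computed on one worker's local shard."""
    return 2 * X.T @ (X @ w - y) / len(y)

w = np.zeros(dim)
lr = 0.05
for step in range(200):
    # Each worker computes its gradient locally...
    grads = [local_gradient(w, X, y) for X, y in shards]
    # ...then the gradients are averaged — the "all-reduce" exchange whose
    # cost grows with data size on non-collocated machines.
    w -= lr * np.mean(grads, axis=0)

print("distance to true weights:", np.linalg.norm(w - true_w))
```

In a real cluster the averaging line is a network operation, which is exactly where the latency, synchronization, and communication costs mentioned above arise.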
Distributed and permissionless networks are emerging as a crucial enabler, empowering organizations to harness the potential of less powerful GPUs and increase overall chip utilization.
“To achieve optimization, organizations should consider smaller batch sizes that require less GPU memory, train on a subset of data to debug, utilize pre-trained models as they require less computational resources, and distribute training across multiple GPUs,” Greg Osuri explained. “This allows smaller companies and startups to innovate and make real contributions to the AI boom without complete reliance on the most powerful GPUs.”
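The first of these suggestions, smaller batch sizes that fit in less GPU memory, is often implemented via gradient accumulation: processing a large batch as a sequence of micro-batches while keeping the resulting gradient mathematically equivalent. The sketch below is a generic NumPy illustration with made-up dimensions, not code from any particular framework.

```python
import numpy as np

rng = np.random.default_rng(2)

dim, big_batch, micro = 32, 128, 16  # 128-sample batch in 16-sample chunks
w = rng.standard_normal(dim)
X = rng.standard_normal((big_batch, dim))
y = rng.standard_normal(big_batch)

def grad(w, Xb, yb):
    """Mean-squared-error gradient on one (micro-)batch."""
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Full-batch gradient: requires holding all 128 samples' activations at once.
g_full = grad(w, X, y)

# Accumulated gradient: micro-batches are processed sequentially, so peak
# memory only ever holds 16 samples' worth of activations.
g_acc = np.zeros(dim)
for i in range(0, big_batch, micro):
    g_acc += grad(w, X[i:i + micro], y[i:i + micro]) * (micro / big_batch)

print("max difference:", np.abs(g_full - g_acc).max())
```

Because each micro-batch gradient is weighted by its share of the full batch, the accumulated result matches the full-batch gradient up to floating-point error, letting memory-constrained GPUs train with effectively large batches.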
Distributed Networks Can Bolster the Tech Landscape
Akash Network’s Greg Osuri envisions that embracing less powerful GPUs will foster a more diverse and competitive environment, mitigating concerns about tech giants dominating the AI landscape. He says this approach provides a cost-effective, developer-first way to access a wide range of GPUs, allowing smaller players to compete on an equal footing.
“Innovative, decentralized solutions are continuing to emerge, addressing the surge in demand, ensuring equitable GPU access, and fostering innovation in cloud computing and AI model training. By giving permissionless access to compute resources – including Nvidia A100s and H100s – from a range of providers, from independent to hyperscale, these computing platforms are uniquely positioned to mitigate inefficiencies,” he said.
Smaller companies and startups are expected to leverage the shift towards less powerful GPUs to make meaningful contributions to the AI domain. Examples such as Thumper.ai’s use of a cluster of 32 Nvidia A100s highlight how underutilized computing power can be tapped for faster deployment.
“By offering a cost-effective, developer-first approach to accessing a wide range of GPUs, from high-performance datacenter chips to consumer models, smaller players will be able to access the same compute as more established companies that have flexibility in their operational expenditures,” Greg Osuri added.
Looking at broader implications, Mr. Osuri foresees a potential paradigm shift in the tech industry. The shift towards less powerful GPUs and decentralized computing could lead to the development of new applications and use cases, extending beyond AI into other technological domains.
“The inherent flexibility of a distributed network could enable independent developers and researchers to experiment with entirely new applications and unlock new ways to develop radically open application architectures,” Akash Network’s Greg Osuri told Metaverse Post. “This ripple effect could lead to the development of more decentralized applications and services across industries, wider sharing of computational resources and knowledge, the “comeback” of crypto and the blockchain, and integration with existing technologies.”
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Victor is a Managing Tech Editor/Writer at Metaverse Post and covers artificial intelligence, crypto, data science, metaverse and cybersecurity within the enterprise realm. He has half a decade of media and AI experience at well-known outlets such as VentureBeat, DatatechVibe and Analytics India Magazine. A Media Mentor at prestigious universities including Oxford and USC, with a Master's degree in data science and analytics, Victor is deeply committed to staying abreast of emerging trends. He offers readers the latest and most insightful narratives from the Tech and Web3 landscape.