The Future of AI Computation: Nosana’s Visionary Approach to Bridging Web2 and Web3 Through Decentralized GPU Power
In Brief
Jesse Eisses, co-founder of Nosana, discusses the company’s innovative use of consumer GPUs for AI tasks, aiming to create a sustainable and accessible computing infrastructure.
In this conversation, Jesse Eisses, Co-founder of Nosana, shares insights into Nosana’s innovative approach to harnessing consumer GPUs for AI tasks. He discusses the challenges, strategies, and vision behind creating a more sustainable and accessible computing infrastructure that could reshape the future of the Internet.
Can you share your journey to Web3?
I’ve been in Web3 for quite some time. I studied artificial intelligence at university around ten years ago and was already involved in Web3 for a few years back then, mainly out of technical interest. I was looking into Bitcoin, and the concept of transferring value without needing banks was extremely interesting to me.
After university, I was keen to combine Web3 and artificial intelligence, which were my two biggest passions. So, I started a company called Effect AI, which was about making datasets for AI training. We rewarded people who contributed data by paying them in cryptocurrencies. I worked on that for a long time.
Over the years, I’ve worked in many different blockchain ecosystems, like Ethereum, Binance Smart Chain, and Solana. It’s been quite a long ride. About three years ago, I founded Nosana with a co-founder I knew from university, Sjoerd. Nosana is focused on computing resources in the Web3 space.
The idea for Nosana came from the fact that we thought it was painful that there are so many cool decentralized products being created, but all of them are still being run on hardware owned by three big companies – the big cloud providers. They’re basically providing their infrastructure to run all of the Internet, Web2 and Web3 together. We wanted to make that foundational computational resource more decentralized.
In what ways does Nosana’s use of idle consumer GPUs contribute to environmental sustainability in the AI computing sector?
I think it has a really big positive impact on the environment. What Nosana does is enable GPU computing from consumer hardware. Instead of using cloud providers like AWS that have huge factories filled with specially produced hardware connected in massive city-sized data centers, Nosana looks at the hardware that people already own at home.
We’re looking at gamers, miners, and different hardware owners who have these devices, and the majority of them are not being fully utilized. A lot of gamers buy a gaming PC and play for maybe a week, four hours a day, and then often these devices end up just catching dust on the shelves. But they’re already produced and very capable of running AI inference.
By utilizing hardware that’s already out there in people’s homes, we don’t need to manufacture more devices. Another part of this is that gamer devices are, almost by accident, very efficient at running AI inference, more efficient at this specific task than the purpose-built devices in cloud data centers. So we are simply more efficient than our cloud counterparts, which makes this a more sustainable and scalable model for the environment.
On top of that, there are just way more GPUs out there. The number of GPUs owned by gamers far outweighs the number of GPUs available in data centers.
What challenges do you face when integrating this diverse range of consumer GPUs into one cohesive network?
There are a lot of challenges. We’re definitely making it harder for ourselves by tackling the problem this way. If you have a lot of GPUs in a data center, you have a very controlled, convenient environment. The power is specifically created to sustain that amount of hardware. They’re all interconnected with cables to ensure really high throughput and bandwidth.
Our environment consists of all these different households that have one or two GPUs scattered across the globe. They have different types of internet connectivity, power supplies, and surrounding hardware like CPUs and RAM. So it’s a much less controlled environment with much more variance. Our challenge is to make this as reliable and performant as data center GPUs.
We think we can get there. Currently, the GPUs that Nosana delivers are extremely performant. We achieve this by identifying and benchmarking individual devices. We’re able to classify which GPUs have which kind of bandwidth available for streaming data in and out. Using data analytics and matching with the correct client needs, we’re able to provide the best possible performance. But making that whole system of consumer devices reliable and performant is the challenge we’re tackling.
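The benchmarking and matching process described above can be sketched in TypeScript. This is a minimal illustration only; the interface names, tier labels, and bandwidth thresholds are hypothetical placeholders, not Nosana's actual implementation.

```typescript
// Hypothetical sketch of bandwidth-based GPU classification and matching.
// All names and thresholds are illustrative, not Nosana's real system.

interface GpuNode {
  id: string;
  model: string;
  downloadMbps: number; // measured during benchmarking
  uploadMbps: number;
}

type Tier = "premium" | "standard" | "basic";

// Classify a node into a performance tier from its benchmarked bandwidth.
// The effective bandwidth is limited by the slower direction.
function classify(node: GpuNode): Tier {
  const effective = Math.min(node.downloadMbps, node.uploadMbps);
  if (effective >= 500) return "premium";
  if (effective >= 100) return "standard";
  return "basic";
}

// Match a client's minimum tier requirement against available nodes.
function match(nodes: GpuNode[], minTier: Tier): GpuNode[] {
  const rank: Record<Tier, number> = { basic: 0, standard: 1, premium: 2 };
  return nodes.filter((n) => rank[classify(n)] >= rank[minTier]);
}
```

In a real network, the classification would draw on many more benchmarked signals (compute throughput, VRAM, uptime history), but the core idea is the same: measure each heterogeneous device once, bucket it, and route client workloads only to buckets that meet their requirements.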
How do you attract new users and companies to use your services?
There are two sides to our product. On one side, we’re looking for GPU suppliers, and on the other side, we’re looking for companies to do AI inference on our grid.
On the GPU side, we’re working with a lot of gamers. The Nosana community is really big, and many enthusiastic people are excited to contribute their hardware. We currently have a surplus of GPUs connected to the network, many of which joined purely out of interest. They’re also making a return on investment, so they’re motivated from a financial perspective as well.
The more interesting part is getting companies to start using this technology. This is the trickier part, and I think it’s where Web3, in general, is suffering right now – adoption from traditional companies. The majority of companies running artificial intelligence are Web2-based, not in crypto. There’s a smaller portion of companies already into Web3, and they’re very excited to use decentralized compute marketplaces. These are the main clients currently running workloads on our grids.
To target that larger pool of Web2 companies, we need to build a level of trust. The idea of inference running in someone else’s home on a personal device is scary for them. So we have to educate them on the security aspects and give them interfaces they’re familiar with. It’s a long battle that you win step by step.
We’re lucky that there’s such a big shortage of GPUs on the market right now. Companies are actively looking for alternative solutions to clouds because clouds are too expensive or not available. This urgent need allows us to make a big step into working with traditional clients because they desperately need this to run their businesses.
Can you elaborate on how Nosana is working to integrate advanced GPU technologies like Tensor Cores or Ray Tracing capabilities into your offerings for specialized AI tasks?
Nosana is very much a network that enables consumer hardware. When you look at these new GPU technologies like Tensor Cores and Ray Tracing, a lot of these technologies are available in consumer GPUs. So, we are actively working on getting those devices on our network. Even mobile devices are getting a lot of new technology for doing AI computations.
Our network is specifically very good at having different technologies join and classifying them to make them available for companies to use. At the moment, we’re just using NVIDIA cards for AI inference, but we are looking down the line to see these new technologies coming out and have a place for them in the Nosana compute market.
What strategies do you use or plan to use to expand beyond NVIDIA to other GPUs, such as AMD, Intel, or Apple Silicon GPUs?
This is an interesting one. I’m a big fan of AMD; their chips are affordable and really performant. They just have other issues on the software level and driver level where they’re not so reliable. I think this is going to change at some point.
Right now, NVIDIA is the king. There’s a lot of demand for NVIDIA cards but not much demand for AMD cards, Apple Silicon, or other GPU types. That’s because the whole industry is using NVIDIA and CUDA and the tooling available that NVIDIA provided to run their artificial intelligence applications.
Our strategy is to go where the market is going. NVIDIA is number one, and that’s the main device we offer. We do realize that if NVIDIA gets defeated at some point by a better GPU manufacturer or a different model, we will definitely be ready for that as well. Nosana isn’t inherently linked to NVIDIA.
I hope we’re going to see a more competitive AMD, and I hope people will start to use machine learning frameworks that are able to run on Apple Silicon, maybe even in distributed settings. For a little while, we were running a demo of our application that used Apple Silicon and could be run in the browser. These things are really cool and work very well. It’s just that there isn’t a lot of demand in the market for that right now.
Our strategy is to keep a close pulse on market needs and make sure we’re able to supply the hardware that people need.
How do you plan to address the potential centralization risk that could arise if large-scale GPU farms begin to dominate your network?
This is a cool thing to think about, but it’s not something we actively anticipate now. One of Nosana’s powerful elements is that we’re very decentralized. The power we have from gamer GPUs that are fragmented around the world makes our network powerful.
We do have a few smaller data center-type GPU suppliers. But in the end, they’re a bit more expensive because they have more costs to upkeep. The performance we’re getting from the 4090s, which aren’t available in data centers due to licensing, is very good.
I don’t think there’s an imminent risk of centralization. It would be interesting if a big data center came on and was cheaper than consumer GPUs, but I’m pretty confident that consumer hardware is always going to be cheaper. As long as it’s performant, people will prefer to run on those markets instead of the data centers.
Do you have any plans to integrate with existing AI development workflows and tools to provide a seamless experience for your users?
Yes, we intend to integrate with some of these frameworks. Accessing Nosana right now involves using our SDK, which is currently in TypeScript. We’ll probably have an SDK in Python at some point, which will make integration with frameworks easier.
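To make the SDK-driven workflow concrete, here is a rough sketch of what submitting an inference job through such an SDK could look like. Every name here (the interface, the builder function, the container image) is a hypothetical placeholder for illustration; it is not the actual Nosana SDK API.

```typescript
// Hypothetical job-submission shape; field and function names are
// illustrative placeholders, not the real Nosana SDK.

interface JobSpec {
  image: string;     // container image the node should run
  gpu: boolean;      // whether the job requires a GPU node
  command: string[]; // entrypoint command inside the container
}

// Build a job spec for serving a given model on a GPU node.
function buildInferenceJob(model: string): JobSpec {
  return {
    image: "ghcr.io/example/llm-server:latest", // placeholder image
    gpu: true,
    command: ["serve", "--model", model],
  };
}
```

The point of this kind of abstraction is that a Web2 developer only describes the workload declaratively; the network decides which consumer GPU actually runs it.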
We’re developer-focused, and we spend a lot of time within AI developer communities to see what people are using to build. That’s where we want to provide the tools they find convenient to use the network. The tooling side and integration side are things we’re going to double down on in the coming months. We’re just in a phase right now where we have our initial set of clients using our preliminary tools, but to grow our client base, that’s definitely what we’re going to be looking at.
How does Nosana’s approach to decentralized GPU computing align with the broader trends in Web3 and the decentralized internet?
I think what Nosana does is take the term decentralization very seriously. We’re completely decentralized and open-source. I’m not sure if this is a trend in Web3 right now. I think that a lot of companies are basically closed platforms with a token for paying people.
But we’re really going towards the decentralized internet and the true Web3 approach where we want to make sure our network is completely open source and open for anyone to take part in. Freedom is one of the core values and elements within the Nosana network, which means that anyone can run our software, compile our software, make modifications, and anyone can start a compute market and any GPUs can join. There is no central entity in charge of what happens on the Nosana network.
We take that very seriously, and I hope that’s the trend. I think there are two sides in the industry, and Nosana is definitely on the open source side, following the Bitcoin model of decentralization.
How do you see the development of the industry you’re working in over the next three years?
I think the industry of decentralized computing is very interesting. The main motivation for me to start Nosana was that the cloud providers are way too powerful. There are very few companies running about 70% of the whole internet, Web2 and Web3, and I think that needs to change.
In the next three years, looking at the projects that are here right now and the traction we’re getting with Nosana, and that I’m seeing in other projects like Akash, I think we’re going to chip away at that 70% dominance of the three cloud providers. We’re going to start hosting more things on decentralized computing. Within three years, I think it’s realistic that we’ll take a chunk of centralized computing and be able to run that on decentralized GPUs.
When you look at the AI space specifically, I think a lot can happen in three years. We’ve been in an acceleration phase since the launch of LLaMA and ChatGPT. But three years is also a short amount of time, so I think we’re going to see more powerful AIs and maybe slightly different architectures of models. I do think that the hardware requirements and what Nosana is offering won’t change that much. It’s mainly going to be about growth and making sure that we capture more of the Web2 market, make the three big cloud providers a bit less powerful, and give people more freedom in what they run their technology on.
Can you share Nosana’s roadmap? What do you plan to do in the next month or two?
We just announced that our last phase of testing is coming to a close. Nosana has been in a stage called test grid, which means we have a working GPU grid, but we’re still testing out some core functionalities. So, we weren’t open to the general public and were only working with a select group of clients. This phase will last until December.
From January 14, we’re launching our main grid, which means that the compute grid of Nosana will finally be open for everyone to use. Any GPU can come on board, and any client can use this powerful compute grid from that date onwards. It’s an exciting moment for us, and I think a lot of people have been waiting for a long time for that to happen. We consider it stable enough to be used by the wider industry.
From there, a lot of Nosana’s efforts will go into driving the adoption of the network now that it’s open for use. We will focus a lot on developer communities and tooling integrations. We’ll also be organizing many cool and fun events. We’re aiming for a global hackathon sometime later in 2025, and we have other very cool things in mind to drive that adoption forward.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Victoria is a writer on a variety of technology topics including Web3.0, AI and cryptocurrencies. Her extensive experience allows her to write insightful articles for the wider audience.