July 29, 2024

Unleashing the Power of Open Source: GaiaNet’s Vision for a Democratized AI Landscape

In Brief

Matt Wright, CEO and Co-Founder of GaiaNet, discusses the company's mission to democratize AI development and create a transparent ecosystem for AI agents and models.

In this interview, Matt Wright, CEO & Co-Founder of GaiaNet, shares a unique perspective on the intersection of decentralized technologies and artificial intelligence. He discusses GaiaNet.AI’s mission to democratize AI development and create a more open, transparent ecosystem for AI agents and models.

Can you share your journey to Web3? How did you start, and what was your first project?

I used to organize hackathons with a small company called AngelHack. We organized developer events in 65 different countries around the world. I was traveling extensively for these events across Asia Pacific, Mainland China, Latin America, and North America.

In 2016, I organized a hackathon for Barclays, the bank, with the theme of blockchain. This event in New York was the first time I spoke to engineers about what they were building on a blockchain and understanding the purpose from an engineering perspective. I had purchased some Bitcoin prior but wasn’t familiar with the developer side of things. That was when I first jumped into the rabbit hole.

I got to hear about creating transparent and open democratic systems in open source and creating this future Internet of Value. For me, that was phenomenal. I eventually left that role and got picked up by JPMorgan. I was working on Quorum, which was an open source fork of Ethereum. I helped grow that community of enterprise engineers about 5X while I was there.

We eventually exited that project to ConsenSys in 2020. At ConsenSys, I led various DevRel initiatives for their software, including leading the DAO ecosystem. I also built an accelerator program called ConsenSys Fellowship. That’s been my journey so far. 

How does GaiaNet differentiate itself from other AI solutions on the market?

You have centralized versions and decentralized or open-source versions of AI, and then open models and closed models. Within Web3, our differentiator is that we are purely open-source infrastructure. We're not a SaaS. There are projects like Bittensor, which is building toward its own chain, and projects like Morpheus, which is building its own token economy.

We believe in full open source and composability, including decentralization on both the infrastructure side and in token economics. On the infrastructure side, you can go to our GitHub, clone down our AI agents, and run them straight from your local machine. You can run them in the cloud, on GPU, or on CPU. It's really up to you.

We’re enabling folks to take open-source LLMs from Hugging Face and build their own agentic workflows, agents, and nodes through this ecosystem. We help you package them together in under two minutes. As a developer, you can deploy your own node in two minutes from the command line. In the future, it will be a low-code solution where people can do this even quicker. They don’t need to use our systems, and we don’t have access to their data. This is truly open-source software.

What is the difference between decentralized and centralized AI? Is the AI industry currently centralized or decentralized?

I see it in a few ways. One is the availability of large language models – the openness of these models and the ability to use them for whatever purpose you want. We call that censorship resistance. This is a very popular concept in Ethereum and blockchain. These LLMs should not be censored, which raises a profound philosophical question for humanity about what we think is right or wrong.

Another aspect is governance. Who’s able to decide how we use these technologies? Who’s able to limit how we use these technologies? In centralized AI, the governance of these systems is left in the hands of a very small group of people. It’s strictly monitored by large governments and small agencies.

However, if you’re building your own AI model trained on your own data, whether you’re an enterprise, an individual, or a creator, it makes sense to understand how to govern that system. We’re taking many of the same principles we have seen in Ethereum around an open Internet of value and applying them to AI.

There’s a specific group of people in the world who believe that these systems should be open and censorship-resistant and that there should be economics enabling people to come in from the community and contribute and get paid for their data. But there are others who think these systems should be decided upon by a small group of people. That’s not how we feel about things.

What do you think are the main challenges that the decentralized AI industry is facing right now?

We see the same problems that Ethereum had in its early stages. Ethereum started by really trying hard for decentralization and worked year after year to achieve performance. In the beginning, Ethereum wasn’t touting how fast blockchain was. It was really about the number of developers who believed in the movement of decentralization or decentralized Internet money and financial services.

We’re really focused on helping developers get access to these tools and ensuring that people have sovereign, living knowledge systems that they can govern over their lifespan. The challenge with all that is, of course, performance. I still use ChatGPT for a lot of my day-to-day work processes, and I think I always will. There are really cool ways of using AI in a centralized world.

I don’t think you have to pick one. There are different use cases for why you would choose to use something decentralized versus something centralized. But it’s going to take a long time until decentralized AI looks similar to centralized AI for the end user.

How do you support integration with already existing AI agent applications?

Right now, our developer approach is to find projects that have rich proprietary data, indexed data, whether on-chain or off-chain. We want to help enable that to be more human-readable through an agent model. We’re looking for folks who are already testing out AI, like using OpenAI’s API for different processes and their workloads.

We’re hoping they could use our APIs to replace a centralized model that is probably not very cost-effective with something that’s not only cheaper but can also help them monetize. Suppose they’re able to take their own data, package it in a node, and people are willing to pay to interact with it. In that case, we’re actually flipping the script and enabling developers to make money in this ecosystem.
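To make the "drop-in replacement" idea concrete, here is a minimal sketch of what swapping a centralized endpoint for a self-hosted, OpenAI-compatible one can look like. The node URL, model name, and helper function below are illustrative placeholders, not documented GaiaNet values; the point is only that the request shape stays the same and just the base URL changes.

```python
# Sketch: building an OpenAI-style /chat/completions request against a
# self-hosted, OpenAI-compatible node instead of a centralized provider.
# The base_url and model name are hypothetical placeholders.
import json

def build_chat_request(base_url: str, model: str, user_message: str) -> dict:
    """Assemble the URL and JSON body for an OpenAI-compatible chat call."""
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

request = build_chat_request(
    base_url="https://my-node.example.com/v1",  # placeholder node address
    model="llama-3-8b",                         # any open model the node serves
    user_message="Summarize today's on-chain activity.",
)
print(request["url"])  # https://my-node.example.com/v1/chat/completions
```

Because only the base URL differs, existing client code written against a centralized API can, in principle, be repointed at a node without restructuring the application.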

We’re looking for projects that are open-minded, are already experimenting with AI, but are really interested in monetizing the data and creating unique services for their developer communities.

Can you elaborate on how GaiaNet’s OpenAI-compatible API benefits developers and businesses in detail?

We take open-source LLMs from Hugging Face – there are 600,000 of them. You can decide which model fits your use case. We use Llama 3 for a lot of our work; it’s very performant and context-specific if you’re not training on the entire internet.

We help enable fine-tuned RAG models with your own proprietary data. We’re able to do that because we have a vectorization tool that helps you turn your text files or markdown files into embeddings hosted on a vector DB. Then we have an application runtime called WasmEdge where this entire instance is hosted, and performance is guaranteed across different platforms.
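The retrieval step this describes – embedding text, storing the vectors, and matching a query against them – can be sketched in a few lines. The toy `embed()` function below is a bag-of-words stand-in for a real embedding model, and the list-based index stands in for an actual vector DB; this is an illustration of the mechanism, not GaiaNet's implementation.

```python
# Illustrative sketch of the retrieval step in a RAG pipeline: snippets are
# mapped to vectors, and a query retrieves the nearest one by cosine
# similarity. embed() is a toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = [
    "staking rewards on ethereum",
    "running a local llm node",
    "vector database basics",
]
index = [(d, embed(d)) for d in docs]  # stand-in for a vector DB

query = embed("how do I run an llm node locally")
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])  # "running a local llm node"
```

In a production setup, the retrieved snippets would then be injected into the LLM's prompt as context, which is what lets a generic open model answer from proprietary data.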

There’s an open playground where developers can come in and say they want to offer a Web3 payment API or integration. Because it’s fully open source, they could propose this to us on GitHub. We’re hosting biweekly core developer calls to think through what cool features folks are looking for.

When you deploy a node, we have API tooling that gives you an API endpoint you can share with whatever use case or infrastructure you find interesting. So there are some really cool ways to take your data and package it into a node. We have most of the tools you need to run your own node or agent.

How does GaiaNet support the customization of AI agents with domain-specific knowledge?

We’re looking for folks who have proprietary data. If you’re a creator, an academic, a developer, or an institution, and you have access to a lot of data that brings context to a specific subject or domain matter, we can basically take that, put it into a JSON file, and train your node to operate with that knowledge base. Over time, this node would become more and more fine-tuned to your preferences and to what your community is looking for. We have the tools and infrastructure for you to do it yourself.
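As a rough illustration of the "put it into a JSON file" step, here is a minimal sketch of packaging domain text snippets into a knowledge file. The field names (`domain`, `entries`) are hypothetical, chosen for the example, and do not reflect a documented GaiaNet schema.

```python
# Hypothetical sketch: packaging domain-specific text snippets into a JSON
# knowledge file a node could be trained on. The schema is illustrative only.
import json

snippets = [
    "Our protocol settles trades in under two seconds.",
    "Validator rewards are distributed weekly.",
]

knowledge = {
    "domain": "defi-protocol-docs",  # illustrative label for the subject area
    "entries": [{"id": i, "text": s} for i, s in enumerate(snippets)],
}

with open("knowledge.json", "w") as f:
    json.dump(knowledge, f, indent=2)

# Reload to confirm the file round-trips cleanly.
with open("knowledge.json") as f:
    assert len(json.load(f)["entries"]) == 2
```

The appeal of a plain-file format like this is that the knowledge base stays inspectable and portable: the owner can version it, audit it, and keep it entirely on their own infrastructure.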

Talking about ethics and responsible usage of AI, what measures are in place to prevent misuse or abuse of AI agents?

It’s a tough question. In centralized AI, the biggest pushback is the need for regulation or governance controls of these systems, especially when it comes to AGI or the theory of future AGI. There are lawsuits going on, like the New York Times suing OpenAI because these models were trained on their newspaper data for years.

In centralized AI, you trust a small group of humans to make these decisions, which gets really sticky and complicated. I think it makes sense that people are nervous about what could be possible. It is scary – robots and machines could learn everything we do as humans and decide that we’re no longer viable.

But what’s cool about Web3 is that with smart contract governance, from a lot of the innovation in DAOs, there are really cool future innovations where you can program in the communities and the ecosystems and the stakeholders that would basically decide how these technologies are used or funded.

How can traditional banks, like JP Morgan, utilize AI agents?

Oh, they’re already using AI. I was talking to bankers who are using it in-house. They’ve been using it for years. It’s just been called machine learning for a long time. They’re automating any bureaucratic processes. Within a bank, there are so many paper trails of moving one spreadsheet to another person and getting approval from 10 different people across the bank to sign off on some transaction or organizational restructuring.

An AI agent could be an automated bank teller or, on the internal side, a copilot that answers basic HR questions. They already have that kind of stuff in-house. Using something like Gaia would probably not be in their best interest right away in terms of monetizing that data. But what’s cool about Gaia is that they could take open-source code and use it.

We want developers to adopt these open technologies that just work out of the box. They don’t even have to use Web3 if they don’t want to. But then we want to encourage enterprises, creators, and developers to take the data they have that’s packaged in a node and monetize it through Web3 rails. I don’t know if the banks are going to be ready to go down that route, but I do know they are already using AI for internal processes and automating a lot of those bureaucratic processes.

What types of specialized AI agents are currently most popular on the Gaia platform?

Right now, we have trained a couple of our own. We have one that’s trained on Vitalik’s blogs, so it’s answering questions like it’s Vitalik. I think that’s cool, but it’s gimmicky. What will be really interesting is if you train it on all of Vitalik’s coding, his Git history on GitHub. 

You could program it so that anyone who pair-programs or learns the code with the Vitalik node could mint an NFT representing that Vitalik was their digital co-founder. Then, any money they make from the project could automatically go to public goods through some sort of smart contract agreement.

Right now, we’re quite limited to training off very basic text files and markdown files. The cool one I’m playing with right now is an integration with Obsidian. You can take your Obsidian notes and run your own model trained on them to finish certain thoughts, create new writings, or organize your current writings. We just did this integration about two weeks ago.

What are your plans for developing multimodal AI agents that can handle text, images, and audio?

In a couple of weeks, we’ll have the API endpoints for video, text, and audio. In the same way you use a lot of the OpenAI APIs, you can train your models with our nodes and have a very similar experience. It’s going to be pretty new, as we’re figuring it out as we go. But we’re looking to work with the community to figure out how to make it more performant and target the responses people are looking for.

What is GaiaNet’s vision for the future of decentralized AI?

We see a world of living knowledge systems. We’d love to see more projects, individuals, and institutions look to AI, understand they have a unique repository of data, and help them identify a way they can either connect to a community, build a new kind of vertical business, or find some way to take that data and make it more powerful.

We’re hoping that we can help any company or creator become an AI company, creator, or developer. We’re not saying that we’re going to build this mega platform; we’re saying that if you are exploring AI and don’t want to contribute your data to a centralized platform like OpenAI, we’ll enable you to do this yourself. We’ll give you the tools and guidance to build your own infrastructure. You keep your own data, and you host your own data.

Over time, we’re hoping that there will be marketplaces of millions of nodes that have fine-tuned RAG models of all kinds of different shapes, sizes, and forms. We hope folks are actually monetizing that and working inside of a very modular ecosystem where they can use different integrations from node to node or clusters of nodes in a domain.

We want the whole world to adopt the technology, and we hope it’s used in the best of ways, but we want it to be openly accessible and available, censorship resistant, transparent in terms of how economics are set, and a composable environment where you can build integrations that make sense for your community.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Victoria d'Este is a writer on a variety of technology topics, including Web3, AI, and cryptocurrencies. Her extensive experience allows her to write insightful articles for a wider audience.