October 21, 2024

Breaking the AI Monopoly: How Theoriq’s Modular Architecture is Paving the Way for a More Democratic and Innovative Tech Landscape

In Brief

Ron Bodkin, CEO of Theoriq, discusses the future of decentralized AI, centralization challenges, and ethical considerations in the Web3 ecosystem.


In this interview, Ron Bodkin, Co-founder and CEO of Theoriq, shares his journey into Web3 and AI, offering a unique perspective on the future of decentralized AI. With his extensive background in AI and startup successes, Ron dives into Theoriq’s innovative approach to building modular and composable AI agents, the challenges of centralization in the AI market, and the ethical considerations surrounding AI development in the Web3 ecosystem.

Can you please share your journey to Web3? 

I’ve been interested in Web3 for a few years. A good friend of mine, Jeremy Miller, was a long-time member of the leadership team at ConsenSys. I came in and helped advise Joe Lubin and the team on startup success back in the early days. I connected with some of the projects and was always really interested. However, back in 2018-2019, when I was having some of those conversations, I felt it was a little immature, and my focus had been for a long time on AI.

Fast forward to late 2021, and some of my old colleagues reached out to me. I had sold an AI startup, Think Big Analytics, to Teradata. The founders of Space and Time, Nate Holiday and Scott Dykstra, contacted me about advising them on ZK-proven analytics for Web3. I was really impressed by how much the space had matured and how we were starting to see support for serious infrastructure projects building on Web3 ideals and leveraging decentralization primitives.

I also got a chance to connect with David Post, who at the time was leading corporate development at Chainlink. He was very encouraging about the ideas Nate and Scott were building. For me, it was incredibly important because I had long believed that AI would be the most transformative technology of my lifetime. My experience working at Google convinced me that we can’t rely on a few monopolists or billionaires to control AI for the future. Even with good intentions, the outcome of big tech monopolies is not a good one.

I was passionate about coming up with a better way of contributing to and building AI. That led us to start Theoriq. My co-founders at Theoriq were also veterans in building and scaling AI. Our product lead and co-founder, David Mueller, had been a long-time crypto enthusiast who invested and was involved in projects. So he was super excited when I reached out, as were our CTO, Arnaud, and our Research Lead, Ethan.

How does a modular and composable approach enhance the flexibility and scalability of AI agent development compared to more monolithic AI systems?

We believe that there’s a big change coming in how AI will be used. Agents are the future: more autonomous software with specialized capabilities to reason, access real-time data, and use tools such as writing code, calling APIs, or querying databases. We believe modularity and composability are incredibly important so people can build more specialized agents that do certain things well and can come together dynamically in collectives or teams, just like people do.
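
As a rough illustration of that idea, the sketch below (hypothetical names, not Theoriq's actual API) shows how agents that share nothing but a common interface can be registered into a collective and discovered per task, so new specialists can join without any existing agent being changed:

```python
from dataclasses import dataclass
from typing import Protocol


class Agent(Protocol):
    """Shared interface: the only coupling between agents."""
    name: str

    def can_handle(self, task: str) -> bool: ...
    def run(self, task: str) -> str: ...


@dataclass
class DataAgent:
    name: str = "indexer"

    def can_handle(self, task: str) -> bool:
        return "data" in task

    def run(self, task: str) -> str:
        return f"{self.name}: fetched on-chain data for '{task}'"


@dataclass
class AnalysisAgent:
    name: str = "analyst"

    def can_handle(self, task: str) -> bool:
        return "analyze" in task

    def run(self, task: str) -> str:
        return f"{self.name}: analysis of '{task}'"


class Collective:
    """Routes a task to every registered agent that claims it."""

    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def dispatch(self, task: str) -> list[str]:
        return [a.run(task) for a in self.agents if a.can_handle(task)]


# Both specialists respond, without ever referencing each other.
collective = Collective([DataAgent(), AnalysisAgent()])
print(collective.dispatch("analyze the latest DEX volume data"))
```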

A lot of people are working on agents, but most Web2 efforts are anything but modular and composable. They’re building tightly coupled systems where a few agents work together in a specialized way, but only with each other. That will never scale. The genius of smart contracts has been this permissionless composability. You don’t need to carefully integrate to call another smart contract; you can just call it as long as you have permission.

Things like public key cryptography allow agents to verify calls from one another without having been pre-integrated. This is a huge advantage over the Web2 approach, which requires establishing a shared secret or access key, inherently limiting scalability. The same goes for payments between agents. Instead of having to establish a credit card payment system every time agents need to interact, using on-chain payments makes it frictionless for agents to easily reconcile and pay one another.
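
Here is a minimal sketch of that verification pattern, using Ed25519 signatures from Python's `cryptography` package. The request format is illustrative, not a Theoriq protocol detail; the point is that the callee needs no pre-shared secret with the caller:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Caller agent: generates a keypair and signs its request.
caller_key = Ed25519PrivateKey.generate()
request = b"GET /price-feed?asset=ETH"
signature = caller_key.sign(request)

# The caller's public key can be published anywhere (e.g., on-chain),
# so the callee needs no prior integration with the caller.
caller_public = caller_key.public_key()

# Callee agent: verifies the signature before serving the request.
try:
    caller_public.verify(signature, request)
    print("verified caller, serving request")
except InvalidSignature:
    print("rejecting unauthenticated request")
```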

We’ve put a lot of emphasis on things like trading agents. We can tap into data from organizations like Masa, which gathers social media data, and Space and Time, which indexes blockchain data. Agents can easily access that data and combine it. We have different agents that specialize in analyzing data or querying Twitter and retrieving that information. We’re also now partnering with organizations that have built intent agents, which can take natural language and act on it, for example, trading based on users’ interests and research.

These things can all come together in a nice permissionless way. We’re even making it so end users who aren’t programmers can create agents with our no-code builder and put those different pieces together to teach it something they know.

How does Theoriq’s approach differ from generalist AI models like ChatGPT?

We really believe that we’re moving to a world where you’re going to have more specialized AI systems or agents. ChatGPT isn’t just a model; it’s becoming more agentic itself, but it is generalist. It’s a one-size-fits-all, jack-of-all-trades. It tries to do everything, and there’s room for that. It will certainly be used for systems like Perplexity or Google’s AI-enhanced search.

However, we think that the more specialized your needs, the more useful it is to have a tool that an expert has put together that’s good at the very specific things you want. With the ability to create agents using a no-code builder, it’s going to be better for people who want to do something repeatedly to just build an agent to do that thing for them instead of spoon-feeding a chatbot, correcting it, and iterating.

For example, if I do a lot of marketing, I can teach an agent how I like to evaluate a campaign and how to generate and test messages. I can have specialized agents doing those things repeatedly. Or if I have a strategy involving advice from different key opinion leaders (KOLs) for crypto trading, I can build a collective with input from these KOLs and get real-time advice for my decisions.
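
As a toy example of such a collective (hypothetical, not Theoriq's actual API), the advisor agents' buy/hold/sell signals could be combined with a reputation-weighted vote:

```python
from collections import Counter


def kol_collective(signals: dict[str, str], weights: dict[str, float]) -> str:
    """Weighted vote across advisor agents' buy/hold/sell signals."""
    tally: Counter[str] = Counter()
    for kol, signal in signals.items():
        tally[signal] += weights.get(kol, 1.0)
    return tally.most_common(1)[0][0]


signals = {"kol_a": "buy", "kol_b": "hold", "kol_c": "buy"}
weights = {"kol_a": 0.5, "kol_b": 1.0, "kol_c": 0.8}  # e.g., from reputation
print(kol_collective(signals, weights))  # -> "buy" (0.5 + 0.8 > 1.0)
```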

These specialized tools are way more powerful than trying to use a generic tool where you have to do a ton of legwork to find information, paste it into a window, try to get it to generate code, and maybe execute that code somewhere else.

How do you think we can fight centralization in the AI market? Do you think it’s possible to solve this problem of centralization in the next five years?

There are a few elements to it. A really important place to start the fight, certainly for us, is at the agent layer. It’s a big difference if you have open standards and open protocols where you can use any framework. Almost all of the successful frameworks for developing single agents are open source, like LangChain, LlamaIndex, and CrewAI. We’ve built one called Council.

What we don’t want to see happen is, for example, OpenAI taking their lead in the market and making it so the only way to build agents is to use their proprietary APIs and data with restricted access. So, a starting point of making it really open and interoperable, where you can use any model, is important.

Open-weight models are also a really important counterweight to commercial models. While open-weight models like Llama 2 or Qwen are not as good as the very best commercial models, they’re within a year of parity. For many use cases, that still presents an attractive trade-off: you get more control, and more competition among deployment and inference providers makes the models cheaper and more efficient to run.

We’re already seeing an interesting mix between open models and commercial models, and I think there’ll be more innovation around that. We’re certainly fans of efforts like Sentient and Near, which are really pushing to build truly open models rather than the Web2 version of open models, which typically comes with some kind of expiration date on how long it will really be open.

We do think that community-led, decentralized, open model training is the hardest thing to achieve. At the same time, I think we can make big progress on preserving choice and freedom and make sure that we’re not locked into a world where only one vendor controls the way AI agents are built.

What advantages does Theoriq provide for developers from different industries, such as finance and gaming?

We have designed DRF to be a lightweight protocol that can serve a variety of cases. Finance and gaming certainly have different needs, different data sets, different latencies, and different levels of criticality. You probably have a much lower tolerance for executing a failed trade than for an agent that might glitch in a game.

In gaming, there’s so much innovation happening as people are starting to reconceive games with amazing AI agents. The whole idea of an NPC was really a trade-off: it was what could be automated to bring some non-human play into a game when non-human play was quite limited. AI is blowing the doors off of that nowadays. There are a lot of games where the player wants to be the leader of a whole team, and having that team made up of genuinely intelligent AI members the player can guide becomes a really exciting possibility.

In finance, some of the use cases include making it easy for people to get the most up-to-date information, simplifying trading, and even automating trading strategies, as well as consulting multiple experts in real time, getting advice, and taking advantage of opportunities.

While there are differences in requirements, there are still a lot of foundational elements that are similar, such as agents being able to own things, transact, and operate in near real-time. You’re also going to see differences: the value of a transaction in gaming tends to be much lower than in finance, so you can accept more latency and more credit risk in exchange for less frequent on-chain payment and interaction.
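
A toy sketch of that trade-off: low-value game transactions accumulate in an off-chain tab and settle on-chain only when the balance crosses a credit limit. The settlement call here is a placeholder, not a real API:

```python
class PaymentTab:
    """Accumulates small charges off-chain; settles on-chain in batches."""

    def __init__(self, credit_limit_cents: int):
        self.credit_limit = credit_limit_cents  # max accepted credit risk
        self.balance = 0

    def charge(self, amount_cents: int) -> None:
        self.balance += amount_cents
        if self.balance >= self.credit_limit:
            self.settle()

    def settle(self) -> None:
        # Placeholder: in practice this would submit an on-chain payment.
        print(f"settling {self.balance} cents on-chain")
        self.balance = 0


# Gaming profile: tiny amounts, high frequency, one settlement per 20 charges.
tab = PaymentTab(credit_limit_cents=100)
for _ in range(25):
    tab.charge(5)  # settles once at 100 cents; 25 cents remain outstanding
```

A finance-grade agent would set the limit much lower (or to zero), trading latency for reduced credit risk.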

In all cases, agent reputation and quality become incredibly important, especially in a space where things are moving so fast. The way we handle feedback on agents includes human ratings and AI evaluators that assess other agents. We’re also going to add staking to make it easy for people to identify the next great agent for a use case and to benefit by finding great agents early and providing that signal.
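
One hedged sketch of how those three signals might be blended into a single score (the weights and normalization are illustrative assumptions, not Theoriq's actual scoring):

```python
def reputation(human_ratings: list[float],
               evaluator_scores: list[float],
               staked: float,
               max_stake: float = 10_000.0) -> float:
    """Blend three reputation signals, each normalized to [0, 1]."""
    humans = sum(human_ratings) / (5 * len(human_ratings))  # 1-5 star ratings
    evals = sum(evaluator_scores) / len(evaluator_scores)   # already in [0, 1]
    stake = min(staked / max_stake, 1.0)                    # saturating
    return 0.4 * humans + 0.4 * evals + 0.2 * stake         # illustrative weights


print(round(reputation([4, 5, 4], [0.8, 0.9], staked=2_500), 2))  # 0.74
```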

Theoriq mentioned a no-code builder for AI creators. Does this mean that anyone can build AI agents, or are there limitations?

We’re excited to release the first version of our no-code builder next month. It’s going to give access to a variety of social data, along with the ability to identify and crawl sites, answer questions from that data, and retain context from past interactions. We continue to see the opportunity to keep expanding the capabilities of the no-code builder so that if you know something, you can teach an AI agent how to do it.

Our goal is to make it as rich as possible. There’s always going to be more expressivity if you can write custom code, and increasingly, AI agents themselves can help you write that code. But we want to make it as capable as possible to build really rich, powerful agents without requiring programming skills.

Our guiding principle is that you shouldn’t have to do programming in disguise. There are a lot of no-code approaches to AI agents that involve graphically laying out a program, which isn’t easier than programming. Even programmers prefer to write in code rather than program graphically. We believe that AI is smart enough to allow you to have conversations and share information to get good outcomes without needing to program explicitly.

In prediction markets, if we use AI agents to predict prices or other processes, can we actually trust the results because they were made by AI? 

Trying to get really good predictions, whether for pricing or anything else, is a reasonably hard problem. Creating tools that help people explore and get insight, letting them make the right decision for themselves, is a big win. It dramatically lowers the bar in getting access to good information that fits with your assumptions and understanding.

Where there’s an efficient market, you should expect that it’s going to be hard for an AI agent or any other tool to give you significant improvement. We believe that AI agents can help a lot with thinly traded markets and less studied topics. They can help people make good predictions much more quickly. Often, in a person’s job, there are a lot of things you might want to predict, and you don’t have the time to do all the research. So, putting together a solid prediction on that is a huge win.

Studies have shown that AI agents are actually competitive with human superforecasters. So, they’re quite good at predicting and can roughly reach the state of the art. But I feel like the big win here is not an arms race over who has the best AI predictions. I think it’s more interesting that it democratizes access. You don’t need to be a superforecaster or work at a hedge fund with a large research team to get access to this kind of technology.

How does Theoriq handle the ethical challenges surrounding AI development?

I think it’s important to get the ethics right. AI is going to be powerful and transformative, and we have to balance the risks of centralization and decentralization. It’s incredibly dangerous to have a few monopolies control a technology that is this transformative. We’ve already seen it with social media: the way a few companies can affect information flow in a democracy is a massive responsibility.

I don’t think we can trust big tech to develop AI responsibly. At the end of the day, will we really see a decision to stop when risks become apparent? OpenAI themselves rate their latest model, GPT-4, as a medium risk for helping develop weapons of mass destruction (chemical, biological, radiological, nuclear). They’ve stated that they don’t know how to reduce that risk. When they get to high risk, where their stated policy is that they will not release the model, will they actually comply with that as competitors like Meta potentially move forward?

In the decentralized world, we need to be able to detect abuse, block it, and create incentives for agents to behave responsibly and pro-socially, not rewarding selfish agents that create bad outcomes. We have to have strong coordination. This is also one of the strengths of Web3 decentralization – we can have modern coordination technology that is more impactful than the coordination technology Web2 tends to rely on, which is government regulation.

We do need government regulation and government action, but if we’re waiting for the government to catch up, we’re in a lot of trouble. Can we do better? I had a good conversation with Sreeram from EigenLayer, and he’s passionate about using innovations in coordination technology from Web3 to enable better outcomes in AI. I think that is an important direction. These are hard problems, and we need to take them seriously.

What future trends in AI and Web3 do you foresee? Does Theoriq have any plans to implement new tools for these trends?

Things are going to continue to move very quickly. We believe that the power of AI agent collectives will be such that by the end of the decade, a majority of knowledge work will be done by AI agents in collectives. To deliver on that, there’s so much work to do on refining, evaluating, and discovering agents, providing robust signals from the community with staking, and creating a great way of building an ecosystem so there are more incentives for people to build and earn by creating the next great agent.

We’re developing the no-code builder we talked about, building that flywheel so the open ecosystem is the most successful. This leads to all these great partners we have, whether open model developers like Sentient and Near, open deployment and model inference providers like Nosana, Hyperbolic, and Akash, teams like 0G that provide scalable storage, serving, and compute infrastructure, or data providers like Masa, The Graph, Coin Network, and Space and Time.

We see the ecosystem growing with agent builders like SphereOne, Ember, and Quill AI. Another trend we foresee is that the modality of interaction with AI is going to keep shifting. Right now, you tend to have fairly frequent interactions with agents because they aren’t that autonomous and need to check in pretty often. Over time, we think agents are going to have a longer-lived interface. It’s more like you might collaborate on documents with them, or they might be working in a codebase, and you’ll message back and forth, even have meetings.

The style of interaction with agents will move from working for a few minutes together to weeks- and months-long projects where agents collaborate with you and participate on an ongoing basis. I believe that the power of the Web3 ecosystem for AI all coming together is really special, and the Theoriq platform is well-positioned to support these developments.

About The Author

Victoria d'Este is a writer on a variety of technology topics, including Web3, AI, and cryptocurrencies. Her extensive experience allows her to write insightful articles for a wide audience.
