How OpenLedger Turns Data Contributors into AI Stakeholders


In Brief
OpenLedger, a decentralized platform, is reshaping AI’s economics, optimizing GPU usage, and fostering a circular data economy that puts end users at the center of innovation.

An ex-Google DeepMind team, Stanford doctors, and even a project decoding Solidity’s complexities are all building their AI models on OpenLedger. It’s a decentralized platform designed to reward data contributors and lower barriers for developers. In this interview, Ramkumar, Core Contributor at OpenLedger, shares how the company is reshaping AI’s economics, optimizing GPU usage, and building a circular data economy that puts end users back at the center of innovation.
For those who don’t know you yet, can you tell us a bit about your background and what led you to start OpenLedger?
We started in this space back in 2017 as a blockchain R&D company. My team and I worked with several enterprises, including Walmart, Cadbury, Viacom, and the LA Times, implementing blockchain and machine learning solutions. We quickly realized there was a huge demand for real technology solutions in blockchain—not just launching tokens and ICOs, but building actual infrastructure and R&D.
At our peak, we generated around $40 million in revenue with more than 200 employees, but most of that was service-oriented work. We wanted to build something more long-term—a product with a broader use case. That’s how OpenLedger began. We saw the need for better data handling and specialized AI models, beyond what generic models like ChatGPT, Llama, and others could do.
We developed OpenLedger as a protocol where people can contribute data, build specialized AI models, and verifiably prove that their data was used so they can get rewarded. We built this concept on research from a paper called DataInf, which focuses on data attribution—tracking how each dataset influences a model’s outputs. Using that, we created a mechanism to reward data contributors proportionally to the impact their data has on a model’s inferences.
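To make the idea concrete, here is a minimal sketch of proportional payouts from attribution scores. The contributor names, influence values, and reward pool are purely illustrative; they stand in for the per-datapoint scores an attribution method such as DataInf would produce, and this is not OpenLedger's actual reward logic.

```python
# Minimal sketch: split a reward pool across data contributors in proportion
# to the influence their data had on a model's inference.
# Scores and pool size below are hypothetical placeholders.

def distribute_rewards(influence_scores: dict[str, float], reward_pool: float) -> dict[str, float]:
    """Pay each contributor a share of the pool proportional to their
    non-negative influence score."""
    # Clamp negative influence to zero so unhelpful data earns nothing.
    positive = {c: max(score, 0.0) for c, score in influence_scores.items()}
    total = sum(positive.values())
    if total == 0:
        return {c: 0.0 for c in influence_scores}
    return {c: reward_pool * score / total for c, score in positive.items()}


if __name__ == "__main__":
    # Hypothetical per-contributor influence on a single inference.
    scores = {"alice": 0.42, "bob": 0.18, "carol": -0.03, "dana": 0.10}
    print(distribute_rewards(scores, reward_pool=100.0))
    # -> alice ≈ 60.0, bob ≈ 25.7, carol = 0.0, dana ≈ 14.3
```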
We raised $15 million, launched a testnet, and in the last six months have had around 20 projects building on us. Now we’re preparing to launch our mainnet, with enterprises and Web3 companies already using our platform.
How can OpenLedger compete or collaborate alongside centralized AI giants?
It’s not really about decentralization versus centralization—they solve different problems. AI has three core elements: data, models, and compute. Two years ago, compute was the bottleneck, and decentralized compute providers helped cut costs dramatically, offering GPU power for a fraction of centralized providers’ prices.
Now, the bottleneck is data. To build truly useful AI models, we need unique, high-quality datasets. Centralized companies struggle to source and monetize this data. Decentralization solves this by allowing people globally to contribute data, prove ownership, and get rewarded—directly, via crypto, without regulatory hurdles.
We also use blockchain for verifiability, so contributors know exactly when and how their data is used and why they’re being rewarded. Centralized companies can’t match that level of transparency and fairness, which is why we built OpenLedger as a decentralized platform.
How does the Model Context Protocol (MCP) change the way AI agents are developed and interact with data and tools?
MCP allows models to interact directly with apps and third-party data sources. Our goal is to integrate MCP so that models built on OpenLedger can easily access various external datasets, from on-chain repositories to major oracles. This integration will make our models more powerful and efficient, with seamless access to data streams they need for specialized use cases.
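As a rough illustration of the pattern, the sketch below exposes a single data tool over MCP using the FastMCP helper from the official MCP Python SDK. The server name, tool name, and hard-coded prices are placeholders, not a real oracle feed or any OpenLedger integration.

```python
# Hypothetical MCP server exposing an on-chain price lookup as a tool
# that an MCP-capable model could call. Requires the MCP Python SDK
# ("pip install mcp"); the data returned here is stubbed for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("onchain-data")

@mcp.tool()
def get_token_price(symbol: str) -> float:
    """Return the latest price for a token symbol (stubbed data)."""
    prices = {"ETH": 2500.0, "BTC": 65000.0}
    return prices.get(symbol.upper(), 0.0)

if __name__ == "__main__":
    mcp.run()  # Serves the tool over stdio for a connected model to invoke.
```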
How does OpenLedger optimize GPU usage so multiple models can run cost-effectively on a single device?
We developed a proprietary technology called OpenLoRA, which lets up to 100 models run on a single GPU. Most specialized models are fine-tuned versions of a large base model, like Llama. We modularize the models: the base model stays loaded on the GPU, while each fine-tune is attached as a lightweight adapter.
When a query comes in—say, about healthcare—we route it through the base model, attach the healthcare adapter in real time, and generate a response, all on one GPU. This approach can reduce deployment costs from thousands of dollars a month to around $100, dramatically lowering barriers for model developers.
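The sketch below shows the general adapter-swapping idea using Hugging Face's peft library: one shared base model stays in GPU memory and small LoRA adapters are switched per query. The model ID, adapter paths, and keyword router are assumptions for illustration only, not OpenLedger's OpenLoRA implementation.

```python
# Sketch of serving many fine-tunes from one GPU: a single base model stays
# loaded, and per-domain LoRA adapters are activated per incoming query.
# Model ID, adapter paths, and the naive router below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-3.1-8B"  # shared base model (assumed)
tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")

# Attach one adapter, then register more; each is only a small set of weights.
model = PeftModel.from_pretrained(base_model, "./adapters/healthcare", adapter_name="healthcare")
model.load_adapter("./adapters/solidity", adapter_name="solidity")

def route(query: str) -> str:
    """Naive keyword router choosing which adapter to activate."""
    return "solidity" if "contract" in query.lower() else "healthcare"

def answer(query: str) -> str:
    model.set_adapter(route(query))  # swap adapters in place, no model reload
    inputs = tokenizer(query, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(answer("What does this Solidity contract's fallback function do?"))
```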
What types of applications are currently being built on OpenLedger, and which are you most excited about?
There are several exciting projects. An ex-Google DeepMind team is building a model with us. There’s a team creating a Web3-specific model, another working on a DeepMind dataset called Ambios, and a group of Stanford doctors developing a sleep-related healthcare model.
We also have teams building hyper-local mapping models and a Solidity-focused model, which is unique because most code generation models don’t understand Solidity well. All these models will launch as we go live on mainnet, and we’re very excited to see them in action.
How does OpenLedger change the economics of AI?
Right now, AI is a one-way street—end users contribute data but don’t get rewarded. We want to make AI a two-way street where data contributors earn ongoing rewards whenever their data powers a model’s output.
We’re also lowering the barrier for building AI models by providing affordable GPU access, revenue-sharing models for data, and tools that make AI development accessible even for people with only basic programming knowledge. Our goal is to democratize AI so more people can build, contribute, and benefit from it.
Does OpenLedger aim to become the default execution layer for AI? What does success look like for you by 2030?
Success isn’t about sheer numbers of models. Even if we only enable 100 impactful models powering 10 real-world applications, that would be a win. Our goal is to build a sustainable, circular economy: contributors provide data, models generate revenue, contributors get paid, and better data flows back into the system.
AI evolves quickly, so our focus is on adaptability and building tools that stay relevant as the ecosystem changes.
With mainnet and the TGE on the horizon, what should the community expect in the coming months? What will that unlock for contributors, builders, and the broader ecosystem?
We’re going live on mainnet within a couple of weeks. Our token will launch shortly after, enabling rewards for data contributors and access to the platform for developers. The community can expect to see live governance features, ecosystem growth, and more opportunities to engage directly with our tools and models.
About The Author
Victoria is a writer covering a range of technology topics, including Web3, AI, and cryptocurrencies. Her extensive experience allows her to write insightful articles for a wide audience.