Unveiling the Ethereum Wayback Machine: How Covalent is Preserving the Blockchain’s Past to Power the Future of dApps
In Brief
Jayen Harrill discusses blockchain data infrastructure, the Ethereum Wayback Machine, Block Specimens, and decentralized data availability, highlighting the intricate interplay between blockchain, AI, and the Web3 ecosystem.
In this insightful interview, Jayen Harrill, Marketing Manager at Covalent, provides a deep dive into the innovative world of blockchain data infrastructure. Harrill offers valuable perspectives on the Ethereum Wayback Machine, Block Specimens, and the future of decentralized data availability. Her expertise sheds light on the intricate interplay between blockchain, AI, and data analysis in the Web3 ecosystem.
Can you share your journey to Web3? What was your first project?
I got into Web3, or crypto, back in 2013, and became more deeply involved a couple of years later, working as a community manager for brick-and-mortar crypto spaces in Montreal and later in Vancouver. I also got involved through some local organizations that sparked my interest in the field.
Later, I got a job at a crypto security company. After working there for a while, I started my own small business, which I still run today. However, my main role now is working for Covalent as their primary marketing person.
Can you explain the concept of Block Specimens? How do they contribute to the modularity of the network?
The Ethereum Wayback Machine shards historical state data across a network that decentralizes and preserves it in a performant way. Block Specimens are concatenated blockchain data, stored block by block (including blobs) on the Ethereum Wayback Machine. These are then accessed by block result producers, which structure the data in a generalized fashion for verifiability downstream in various use-case pipelines.
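To make that flow concrete, here is a minimal Python sketch of how a Block Specimen might be re-structured into a block result. All field names, the hashing choice, and the decode step are hypothetical illustrations, not Covalent's actual schema.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class BlockSpecimen:
    """Raw, concatenated per-block data as captured from a node."""
    block_number: int
    block_hash: str
    raw_block: bytes                 # header + transactions, as exported
    raw_receipts: bytes              # transaction receipts
    raw_blobs: list[bytes] = field(default_factory=list)  # blob sidecars, if any


@dataclass
class BlockResult:
    """Structured, verifiable output derived by a block result producer."""
    block_number: int
    block_hash: str
    decoded_transactions: list[dict]
    specimen_proof: str              # hash committing the result to its specimen


def commitment_hash(specimen: BlockSpecimen) -> str:
    """Hypothetical commitment: hash the raw payload so downstream
    consumers can verify a result against its source specimen."""
    payload = specimen.raw_block + specimen.raw_receipts + b"".join(specimen.raw_blobs)
    return hashlib.sha256(payload).hexdigest()


def decode_transactions(raw_block: bytes, raw_receipts: bytes) -> list[dict]:
    # Placeholder: a real producer would RLP-decode transactions and
    # receipts here. Returning an empty list keeps the sketch runnable.
    return []


def produce_block_result(specimen: BlockSpecimen) -> BlockResult:
    """Re-structure a specimen into a generalized, queryable form."""
    return BlockResult(
        block_number=specimen.block_number,
        block_hash=specimen.block_hash,
        decoded_transactions=decode_transactions(specimen.raw_block, specimen.raw_receipts),
        specimen_proof=commitment_hash(specimen),
    )
```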
Can you elaborate more on how you work with the Ethereum ecosystem? How do you position yourself to become an essential part of it?
We literally preserve historical state data. Danksharding, EIP-4444, and other EIPs are continually changing Ethereum in a way that makes it more like a billboard than a database. Ethereum started as a stateful machine that maintained historical state, allowing you to see changes in state over time. However, for scalability reasons, that has been continually reduced.
When downstream applications like AI need to access this on-chain data, they need a decentralized source. That’s why the Ethereum Wayback Machine is essential for long-term data availability in Ethereum.
Did you face any problems or challenges while working with the Ethereum Wayback Machine?
In terms of challenges, we are really good at making a performant decentralized system. There are many operators in the marketplace now that can run operator nodes, whether they are block specimen producers, block result producers, or other types. Finding operators that meet our standards is a challenge, but we’ve done very well so far.
Can you explain how the data availability system works for you? Why is it crucial for the future of blockchain data infrastructure?
There are two forms of data availability: short-term and long-term. Short-term data availability covers things like blobs: arbitrary temporary storage, typically retained for 15 to 30 days depending on the implementation. This allows roll-up data, or other data submitted with a validity proof, to be handled within that time window.
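As a toy illustration of that retention window, the snippet below checks whether posted data is still inside its availability period. The 18-day value is an assumed example, since the actual retention is implementation-dependent.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=18)  # hypothetical retention period within the 15-30 day range


def is_still_available(posted_at: datetime, now: datetime | None = None) -> bool:
    """Return True if data posted at `posted_at` is inside its window."""
    now = now or datetime.now(timezone.utc)
    return now - posted_at <= RETENTION


posted = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(is_still_available(posted, now=datetime(2024, 6, 10, tzinfo=timezone.utc)))  # True
print(is_still_available(posted, now=datetime(2024, 7, 10, tzinfo=timezone.utc)))  # False
```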
Long-term data availability involves sharded state data that maintains historical state over a long period. Organizations can access this data through a structured API offered by our sister company, Goldrush.dev. Long-term modular data infrastructure is how various organizations and use cases will access this long-term data availability.
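In practice, pulling structured historical state might look something like the request below. The base URL, path, and parameters are hypothetical placeholders for illustration, not Goldrush.dev's documented API.

```python
import requests

BASE = "https://api.example.com/v1"  # placeholder, not a real Goldrush.dev URL


def get_historical_balances(address: str, block_height: int) -> dict:
    """Fetch structured token balances for an address at a past block.
    Endpoint and parameter names are hypothetical."""
    resp = requests.get(
        f"{BASE}/address/{address}/balances",
        params={"block-height": block_height},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```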
How does Covalent’s infrastructure support the development of decentralized AI applications?
We propose modular data infrastructure for AI: a pipeline of data infrastructure that structures state data so it can be used for training. The block result producer can be asked to structure the data in a way that’s available for training. You can also create an inference retrieval-augmented generation (RAG) pipeline with a query node operator that references multiple AIs to ask questions and get answers.
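A rough Python sketch of that query-node idea: retrieve structured on-chain context, put the same prompt to several models, and aggregate the answers. The retrieval step, the stand-in models, and the majority-vote aggregation are all assumptions for illustration.

```python
from collections import Counter


def retrieve_context(question: str) -> str:
    """Stand-in retrieval step: a real query node would look up
    structured block results relevant to the question."""
    return "block 19000000: transfer of 12 ETH from 0xabc... to 0xdef..."


def answer_with_models(question: str, models: list) -> str:
    """Ask several models the same grounded question and take a
    naive majority vote over their answers."""
    context = retrieve_context(question)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    answers = [model(prompt) for model in models]
    return Counter(answers).most_common(1)[0][0]


# Usage with trivial stand-in "models":
models = [lambda p: "12 ETH", lambda p: "12 ETH", lambda p: "10 ETH"]
print(answer_with_models("How much ETH moved in block 19000000?", models))  # 12 ETH
```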
We expect companies and organizations to run this infrastructure, and it progressively decentralizes AI models. More importantly, it’s good for horizontal models of AI, which we believe are the future.
How do you foresee the integration of AI and blockchain, and also AI and blockchain data analysis?
If everything is on-chain data in the future, then everything is public. Being able to do massive amounts of coordination and analysis at a macro scale is important. I foresee a dance between the layer of blockchain data, decentralized infrastructure, and AI in a horizontal fashion, often called multi-generative AI systems.
We’ll also have a human-in-the-loop economy, or a foam economy, where interlocking small economies, which we already see in DeFi and crypto today, interact with both AI and blockchain in a complex interplay.
Can you explain how Covalent’s data structuring and storage differ from traditional centralized database systems?
Traditional systems typically use RPC data that is simply stored and put into an index, or not indexed at all. Covalent actually structures the data. Blockchains are an async environment with many data problems, such as reorgs, uncle blocks, and various other issues, and our system goes through and harmonizes that data. For example, you can have multiple marketplaces with different price feeds, and those need to be harmonized.
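As a toy version of that harmonization step, the snippet below collapses conflicting marketplace quotes into one canonical price. Using the median is an illustrative choice for robustness against outliers, not Covalent's disclosed method.

```python
from statistics import median

# Hypothetical per-marketplace quotes for the same asset.
feeds = {
    "marketplace_a": 1.012,
    "marketplace_b": 0.998,
    "marketplace_c": 1.250,  # outlier, e.g. from a thin order book
}


def harmonize_price(quotes: dict[str, float]) -> float:
    """Collapse divergent per-marketplace quotes into one robust value."""
    return median(quotes.values())


print(harmonize_price(feeds))  # 1.012
```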
What role do you see Covalent playing in the broader Web3 ecosystem beyond providing data?
Covalent plays a central role in modular data infrastructure, whether it’s for long-term data availability or data infrastructure for AI. More importantly, our system currently has over 225 blockchains as part of its data lake, so everyone in crypto touches Covalent data in some way.
Can you explain how Covalent products support NFT-related data queries and analysis?
Our sister company, Goldrush.dev, has a best-in-class, highly performant NFT API. It uses structured, verifiable data and includes images, BlurHashes, and everything anyone would need for NFT-related queries and analysis.
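A record from such an API might resemble the sketch below, where a compact BlurHash string lets a client paint a blurry preview while the full image loads. The field names and the rendering helper are hypothetical; a real client would decode the hash with a BlurHash library.

```python
from dataclasses import dataclass


@dataclass
class NftAsset:
    """Hypothetical shape of a structured NFT record."""
    contract: str
    token_id: int
    image_url: str
    blur_hash: str  # compact string a client decodes into a blurry preview


def render_placeholder(asset: NftAsset) -> str:
    """Show a BlurHash-derived preview while the full image loads.
    The decode step is stubbed to keep the sketch self-contained."""
    return f"[blurred preview from {asset.blur_hash}] loading {asset.image_url}"


asset = NftAsset(
    contract="0x1234...",  # hypothetical contract address
    token_id=42,
    image_url="https://example.com/42.png",
    blur_hash="LEHV6nWB2yk8pyo0adR*.7kCMdnj",
)
print(render_placeholder(asset))
```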
How is Covalent addressing the challenge of data standardization across different blockchain protocols?
We have the best engineers to handle this challenge. We deeply examine the changes in blockchain ecosystems, specifically Ethereum, and we stay ahead of all the EIPs in order to structure data in a performant way.
How does the Ethereum Wayback Machine contribute to blockchain transparency and accountability?
It stores all of Ethereum’s state data, including blob data. This data is structured, ready for use in a performant way, and available for anyone, including developers and users. We have funds that use that data for mass analysis. One of the use cases we see is a multi-generative AI system doing advanced financial analysis and automation in crypto markets using this data.
How does the Ethereum Wayback Machine handle smart contract interactions and state changes?
It harmonizes the async environment between different state changes and keeps track of these changes.
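A toy model of that bookkeeping: key state entries by block number and let a reorg overwrite the abandoned entry with the canonical one. This is an illustrative sketch of reorg-tolerant tracking, not Covalent's implementation.

```python
# Canonical view of the chain: block number -> block hash.
canonical: dict[int, str] = {}


def observe_block(number: int, block_hash: str) -> None:
    """Record a block; if a different hash was already seen at this
    height, treat it as a reorg and replace the abandoned entry."""
    old = canonical.get(number)
    if old is not None and old != block_hash:
        print(f"reorg at {number}: {old} replaced by {block_hash}")
    canonical[number] = block_hash


observe_block(100, "0xaaa")
observe_block(101, "0xbbb")
observe_block(101, "0xccc")  # a reorg replaces block 101
```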
What are the limitations on the data that can be retrieved using the Ethereum Wayback Machine?
Currently, there are no limitations. The structuring is generalized, and many forms of it are available, including structuring for inference.
What developments do you anticipate for the Ethereum Wayback Machine and Covalent over the next few years?
The Ethereum Wayback Machine, or at least the EWM proof chain, which will be the heart of the system, will be deployed by the end of this year. Most of the infrastructure is already live. As mainstream adoption increases over the next three years, we expect use cases and demand to grow. It’s a modular data infrastructure, so we expect organizations to run it themselves.
What trends do you predict for the future of the blockchain ecosystem?
Decentralized infrastructure and blockchain data have found product-market fit, and people are becoming comfortable with decentralized infrastructure. We’re going to see the proliferation of that, where there is actually generalized computing in a decentralized and verifiable way.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Victoria is a writer on a variety of technology topics, including Web3.0, AI, and cryptocurrencies. Her extensive experience allows her to write insightful articles for a wide audience.