On Saturday (13 August), Sebastien Guillemot, chief technology officer (CTO) of blockchain development firm dcSpark, said that the layer-1 blockchain Cardano ($ADA) is among the worst blockchains for storing data, and explained why.
For those unfamiliar with dcSpark, its development team says its primary objectives are to:
- “Extend Blockchain Protocol Layers”
- “Implement First-Class Ecosystem Tooling”
- “Develop and Release User-Facing Apps”
The company was co-founded in April 2021 by Nicolas Arqueros, Sebastien Guillemot, and Robert Kornacki. In the Cardano community, dcSpark is best known for its sidechain project Milkomeda.
One Cardano supporter tweeted on Friday (12 August) that Cardano is an excellent blockchain for storing data on-chain.
One of the most interesting innovations that Cardano brings with the EUTXO model is how data can now be stored on chain which means it can never be destroyed. We are testing it right now in Ethiopia to store schooling documentation but the possibilities are endless.
— Lucid (@LucidCiC) August 12, 2022
In response, Guillemot said that Cardano’s present architecture makes it one of the worst blockchains for data storage:
“Really strange tweet. Cardano is definitely one of the worst blockchains for storing data and this was an explicit design decision to avoid blockchain bloat and it’s the root cause of many design decisions like plutus data 64-byte chunks, off-chain pool & token registry, etc…
Vasil improves this with inline datums, but they are indirectly discouraged because of the large cost of using them. I do agree that having the blockchain provide data availability is an important feature, but having a good solution will require changes to the existing protocol.”
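For context on the datum mechanics Guillemot references, here is a minimal Python sketch of the two storage patterns, assuming blake2b-256 datum hashing and the 64-byte bytestring limit on Plutus Data; the helper functions are illustrative stand-ins, not Cardano tooling, and real datum hashes are computed over the CBOR-encoded datum.

```python
import hashlib

CHUNK = 64  # Plutus Data caps each bytestring at 64 bytes, so larger
            # payloads must be represented as a list of <=64-byte chunks.


def chunk_payload(payload: bytes) -> list[bytes]:
    """Hypothetical helper: split a payload into the <=64-byte pieces
    an on-chain Plutus Data bytestring list would require."""
    return [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]


def datum_hash(datum_bytes: bytes) -> bytes:
    """Pre-Vasil pattern: only a 32-byte blake2b-256 hash sits in the
    UTxO; the full datum must be supplied from somewhere off-chain.
    (Simplified: Cardano actually hashes the CBOR encoding.)"""
    return hashlib.blake2b(datum_bytes, digest_size=32).digest()


document = b"example schooling record " * 40  # ~1 KB off-chain document

# Datum-hash pattern: a constant 32 bytes on-chain, data kept elsewhere.
print("hash on-chain:", len(datum_hash(document)), "bytes")

# Inline-datum pattern (CIP-32, enabled by Vasil): the whole datum rides
# in the transaction output, so every chunk adds to transaction size and
# fees, which is why Guillemot calls inline datums "indirectly discouraged".
chunks = chunk_payload(document)
print("inline datum:", len(chunks), "chunks,", len(document), "bytes on-chain")
```

The contrast illustrates the trade-off in the thread: datum hashes keep the chain slim but push data availability off-chain, while inline datums provide availability at a per-byte cost.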
Another $ADA holder then asked Guillemot whether this design decision might make things harder for teams building roll-up solutions (such as Orbis), to which Guillemot replied:
“Yes, trying to provide data availability for use cases like rollups, mithril, input endorsers and other similar data-heavy use-cases while keeping the L1 slim (unlike Ethereum which optimizes for people just dumping data) is one of the large technical challenges being tackled”
Charles Hoskinson, co-founder and chief executive officer of IOG, published a video on 1 August in which he explained why the Vasil hard fork had been postponed for a second time and offered an update on the testing of the Vasil protocol upgrade. Hoskinson said:
“Originally, we planned to have the hard fork with 1.35, and that’s what we shipped to the testnet. The testnet was hard forked under it. And then a lot of testing, both internal and community-led, was underway. A collection of bugs were found: three separate bugs that resulted in three new versions of the software. And now, we have 1.35.3, which looks like it is going to be the version that will survive the hard fork and upgrade to Vasil.
“There’s a big retrospective that will be done. The long and short of it is that the ECDSA primitives, amongst a few other things, are not quite where they need to be. And so, that feature has to be put aside, but all of the remaining features, CIP 31, 32, 33, 40 and other such things are pretty good.
“So those are in advanced stages of testing, and then a lot of downstream components have to be tested, like DB Sync and the serialisation library, and these other things. And that’s currently underway. And a lot of testing is underway. As I mentioned before, this is the most complicated upgrade to Cardano in its history because it includes both changes to the programming language Plutus and changes to the consensus protocol, plus a litany of other things, and was a very loaded release. It had a lot in it, and as a result, it’s one that everybody had a vested interest in thoroughly testing.
“The problem is that every time something is discovered, you have to fix that, but then you have to verify the fix and go back through the entire testing pipeline. So you get to a situation where you’re feature-complete, but then you have to test and when you test, you may discover something, and then you have to repair that. And then you have to go back through the entire testing pipeline. So this is what causes release delays…
“I was really hoping to get it out in July, but you can’t do it when you have a bug, especially one that is involved with consensus or serialisation or related to a particular issue with transactions. Just have to clear it, and that’s just the way it goes. All things considered though, things are moving in the right direction, steadily and systematically…
“The set of things that could go wrong has gotten so small, and now we’re kind of in the final stages of testing in that respect. So unless anything new is discovered, I don’t anticipate that we’ll have any further delays, and it’s just getting people upgraded…
“And hopefully, we should have some positive news soon as we get deeper into August. And the other side of it is that no issues have been discovered with pipelining, no issues have been discovered with CIP 31, 32, 33 or 40 throughout this entire process, which is very positive news as well, and given that they’ve been repeatedly tested internally and externally by developers, QA firms, and our engineers, that means there’s a pretty good probability that those features are bulletproof and tight. So just some edge cases to resolve, and hopefully we’ll be able to come with a mid-month update with more news.”