
edicted

Hive Scaling RC Burnout: Open Letter to @taskmaster4450


You remember that time that you wrote a comment and it turned into a damn essay? This is that comment.

Never done this silly "open letter" thing before. Let's see if I get the gist of it.

https://peakd.com/hive-167922/@taskmaster4450/hive-a-resource-credit-crunch-in-2021-2022


With all due respect, I feel like the way resource credits and Hive bottlenecks are described here is somewhat misleading. Hive coins and Hive Power have no effect on our throughput. Resource Credits are a completely separate resource from everything else, and even they do not determine the throughput, volume, or limitations of the network. RCs simply exist as a way to keep individual accounts from filling up the blocks.
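To make that distinction concrete, here's a toy sketch of what an RC-style rate limiter does. This is my own simplified model, not Hive's actual implementation (the linear regeneration and the five-day window are assumptions for illustration), but it shows the point: an account gets throttled when its budget runs dry, while the chain's raw capacity never changes.

```javascript
// Toy sketch of an RC-style rate limiter. NOT Hive's actual implementation:
// the linear regeneration and the 5-day window are simplifying assumptions.
const REGEN_SECONDS = 5 * 24 * 60 * 60; // assumed full-regeneration window

function makeAccount(stake) {
  // max RC assumed proportional to stake (Hive Power); starts fully charged
  return { maxRc: stake, currentRc: stake, lastUpdate: 0 };
}

function regenerate(account, now) {
  const regained = (account.maxRc * (now - account.lastUpdate)) / REGEN_SECONDS;
  account.currentRc = Math.min(account.maxRc, account.currentRc + regained);
  account.lastUpdate = now;
}

// A transaction is only accepted if the account can pay its RC cost, so no
// single account can spam the blocks forever -- but rejecting a transaction
// does nothing to change how much data the chain itself can move per block.
function tryBroadcast(account, rcCost, now) {
  regenerate(account, now);
  if (account.currentRc < rcCost) return false; // throttled
  account.currentRc -= rcCost;
  return true;
}

const minnow = makeAccount(1000);
console.log(tryBroadcast(minnow, 600, 10)); // true
console.log(tryBroadcast(minnow, 600, 20)); // false until RCs regenerate
```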

At the core of this issue, the only thing that matters is how much information we can store on a single block (the blocksize limit, if you will). It is the same limitation that both Bitcoin and Ethereum have. Hive is no different, except that we pay 20 nodes a lot of money instead of spreading that out to a ton of miners. We centralized our chain so that we could ensure high throughput. This is the classic scaling issue. Do you want more decentralization or more scaling? Sacrifice one for the other.



Hive blocks are currently 65KB MAX for the entire network.
You'll recall that I filled an entire block myself just to test it:
https://peakd.com/steemdev/@edicted/65-kb-max-length-post-keywords-javascript-steem-api
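For reference, here's the quick arithmetic behind the numbers that follow (assuming Hive's 3-second block interval):

```javascript
// Back-of-the-envelope math for the numbers below, assuming Hive's
// 3-second block interval and the 65KB max block size.
const BLOCK_INTERVAL_SECONDS = 3;
const MAX_BLOCK_BYTES = 65 * 1024;

const bytesPerSecond = MAX_BLOCK_BYTES / BLOCK_INTERVAL_SECONDS;  // ~21.7 KB/s
const blocksPerDay = (24 * 60 * 60) / BLOCK_INTERVAL_SECONDS;     // 28,800 blocks
const bytesPerDay = MAX_BLOCK_BYTES * blocksPerDay;               // ~1.8 GB

console.log(`${(bytesPerSecond / 1024).toFixed(1)} KB/s max throughput`);
console.log(`${(bytesPerDay / 1024 ** 3).toFixed(2)} GB/day max growth`);
```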

In this context, it's much easier to see how we could run out of room and fail to scale at any moment. We can only save a MAXIMUM of 22KB per second (1.8GB/day) to the database. This leads to the exact issue that caused Bitcoin and Bitcoin Cash to fork:

Why don't we just raise the blocksize?
Easy.

Well, it's not that easy, because Hive node operators are already complaining about how expensive everything is, even though we are constantly developing scaling solutions like MIRA and the like. Imagine what it would be like if our blocks were actually being filled to the brim and 1.8GB of raw data was being added to the Hive blockchain every day.

For every bit of raw block data added to the chain, far more data (indexes, fast-access tables, and the like) has to be stored in expensive RAM on the nodes. Even if we move that data to SSD storage, access becomes much slower and there is still a cost to keep it there.


Which is why we have resource credits: to stop the blocks from filling up. However, if the blocks fill up anyway we have to raise RC costs. If RC costs go up then we have to implement RC pools. If RC pools are implemented then all the RCs that whales and orcas were letting go to waste will start being used, driving RC costs even higher via a sudden and massive hyperinflation of demand.
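Here's a purely illustrative toy loop showing that feedback effect. The cost curve and the numbers are assumptions of mine, not the real RC pricing algorithm:

```javascript
// Purely illustrative feedback loop with assumed numbers -- not the real RC
// pricing algorithm. Cost rises as blocks fill up, and previously idle RC
// entering the market via pools pushes utilization (and cost) even higher.
function rcCostMultiplier(utilization) {
  // assumed convex curve: cheap while blocks are empty, steep near full
  return 1 / (1 - Math.min(utilization, 0.99));
}

let utilization = 0.5; // blocks half full
let idleRc = 0.4;      // assumed share of RC sitting unused on large stakes

for (let step = 0; step < 5; step++) {
  console.log(`utilization=${utilization.toFixed(2)}`,
    `costMultiplier=${rcCostMultiplier(utilization).toFixed(1)}`);
  // rising costs make idle RC worth pooling and spending, adding new demand
  const activated = idleRc * 0.25;
  idleRc -= activated;
  utilization = Math.min(0.99, utilization + activated);
}
```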

Once this happens one of two things will occur:

  1. Users will stop using Hive because it is too expensive.
  2. Users value Hive enough to pay the cost, raising the value of RCs and Hive.
    • This allows nodes to afford a bigger blocksize limit, or, as @blocktrades has teased, 1 block per second.

Option #1 will not happen until the Hive price is bubbled, so we are nowhere near that moment. I think Hive would have to be trading above $10 a coin before users were no longer willing to buy more in order to compete for RCs.

Also, this situation will obviously incentivize the entire network to find more scaling solutions. I believe you were remiss not to include the dapp that sucks down the most bandwidth by a huge margin:

Splinterlands

https://hiveblocks.com/

Any casual check of the blockchain at any moment will show that Splinterlands custom JSONs appear on every single block in huge numbers. It's very obvious that this cannot continue forever. If we had other dapps like Splinterlands on chain, the blocks would already be fully tapped out. At some point or another Splinterlands will become way too expensive to play directly on-chain, and they'll have to move the bulk of these transactions to a second layer or simply move them to a private server where they will not be transparent.
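If you want to eyeball this yourself, here's a rough sketch that counts custom_json operations in the latest block via a public API node. The assumption that Splinterlands operations carry an "sm_" id prefix (and the choice of api.hive.blog as the node) is mine; adjust to taste:

```javascript
// Rough sketch: count custom_json operations in the latest block via a public
// Hive API node. The "sm_" id prefix for Splinterlands ops is an assumption
// on my part; swap in whatever prefix/node you prefer.
const API_NODE = 'https://api.hive.blog';

async function rpc(method, params) {
  const res = await fetch(API_NODE, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
  });
  return (await res.json()).result;
}

async function countSplinterlandsOps() {
  const props = await rpc('condenser_api.get_dynamic_global_properties', []);
  const block = await rpc('condenser_api.get_block', [props.head_block_number]);

  let total = 0;
  let splinterlands = 0;
  for (const tx of block.transactions) {
    for (const [type, op] of tx.operations) {
      if (type !== 'custom_json') continue;
      total += 1;
      if (op.id.startsWith('sm_')) splinterlands += 1; // assumed Splinterlands prefix
    }
  }
  console.log(`custom_json ops in block ${props.head_block_number}:`,
    `${total} total, ${splinterlands} from Splinterlands (sm_*)`);
}

countSplinterlandsOps();
```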

Conclusion

If I'm being honest here I don't think I've "schooled" @taskmaster4450 with this comment in the least bit. I wouldn't be surprised to find out that he learned nothing from what I've written here. When it comes to Hive we both know quite a bit.

I am also not trying to invalidate what @taskmaster4450 has written. I view his method of explanation (in general) as a gateway to the masses. My approach is littered with technicalities; even people who follow me and have been on Hive for years sometimes struggle with what I write. I'll talk about servers and nodes and APIs and code snippets and javascript, mySQL, Chrome extensions, node.js, blah blah blah blah blah. Bro, what are you talking about? Stop talking nonsense.

So what's more valuable? Being right or being understood? Does being technically correct matter when the end-message is the same and nobody understands what you're talking about?

To quote myself:

Never done this silly "open letter" thing before. Let's see if I get the gist of it.

"The gist of it..."

Exactly. Sometimes that's the most important thing to convey. The substance of these messages is the same: real estate on the main chain is super valuable, and the network isn't going to realize that until we run out, which could happen at any time.

On that note, have you heard that @blocktrades and @theycallmedan may be airdropping tokens based on Hive stake? Two possible airdrops and an RC shortage... hm... things could get pretty crazy around here if you ask me.

Posted Using LeoFinance Beta



Hive Scaling RC Burnout: Open Letter to @taskmaster4450 was published on and last updated on 09 Nov 2020.