
Hive is failing at failing: to scale gracefully over time.

[Three embedded videos on the scaling theme: 1 hour, 7 min, and 3 min]

These videos all share the same theme: the one that brought Graphene and DPoS into this world in the first place.

The scaling issue.

In 2017, this was all anyone could talk about.
$50 Bitcoin fees have that effect on people.

So the main idea here is that we shouldn't be fretting about scaling so much because everyone has been doing that since the beginning of the Internet. I still remember when everyone was saying 56k modems were the absolute limit and we'd never get faster Internet connections than that (5-10 KB per second). Today I can download hundreds of megabytes per second. If the Internet gets much faster it will no longer be the bottleneck and I'll have to get a faster computer just to keep up.


Transferring pictures online used to be an unheard-of (almost offensive) concept. Then it was audio, then video, then high-definition video, then streaming high-definition video.

[Image: So hot right now.]

As soon as more space becomes available for scaling, be it on the Internet, Bitcoin, or whatever else, those resources will immediately be consumed by the engineers and innovators who know what to do with them.

So how is Hive failing?

When's the last time you heard about this network having a scaling issue? When's the last time full nodes complained about being overburdened with requests or the last time we had to increase resource credit costs to limit how much users could post to the blockchain? I'll wait.

The last time I remember having a scaling issue was when I first got here in 2017. During high-traffic periods the witnesses were limiting users' bandwidth, and you couldn't transact on the blockchain unless you had enough powered-up stake to do so. This was before the resource credit system went into effect, so we haven't yet had a chance to test the limits of our scaling ability with the cool new system we (Steemit) invented.

Isn't that a good thing?

No...

Quoting myself:

those resources will immediately be consumed by the engineers and innovators who know what to do with them.

This is the problem: we DON'T know what to do with our resources. No one is running out of resource credits. Blocks are not filling up. Full nodes are not being overloaded with requests.

We don't know our limits because we aren't testing them.

I remember when the RC system went into effect. Do you? We all had negative resource credits, and no one could transact on the blockchain because the hardfork retroactively acted as if we had been using the system since the genesis block.

More alarming was how we solved that clusterfuck: we basically cut the cost of transacting on the blockchain by a factor of 10 so that users with fewer RCs would still be able to post. This leads me to believe that the RC system is totally broken (or at least badly miscalibrated) and we don't even know it.

Imagine implementing RC pools. This would allow all those whales to finally start spending their credits, whereas today those credits either sit unused or get spent claiming account creation tokens to be redeemed at a later date. Letting whales use their RCs for real would dump a massive load onto the system, and several consequences could follow (see the toy model after this list):

  1. The RC cost to transact could go sky-high because whales are actually spending them. There's a good chance we'd have to 10x the cost of everything to reverse what we did back in the day. This would give RCs value, and whales would be selling theirs for a profit.
  2. Sooner or later, blocks would start filling up and we'd have to discuss raising the block size limit. This is exactly why Bitcoin forked into Bitcoin Cash. Again, if blocks start filling up we have to raise RC costs, which cuts the little guys out of the equation unless they have access to an RC pool.
  3. Second-layer and off-chain solutions would start being developed to migrate data that doesn't need to be on chain in the first place. The more RCs are worth, the more financially incentivized we as a network become to remove data from the chain if it doesn't absolutely need to be there.
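
To make the first consequence concrete, here's a minimal toy model of how RC prices could respond once pooled credits actually get spent. This is not Hive's real pricing algorithm; every constant and curve here is invented for illustration. The only real number is the 65,536-byte block budget.

```python
# Toy model of RC repricing as blocks fill up (illustrative only --
# not Hive's actual algorithm; the constants are made up).

BLOCK_BUDGET = 65_536        # bytes of block space every 3 seconds
BASE_COST_PER_BYTE = 1.0     # RC per byte while the chain is idle

def rc_price(utilization: float) -> float:
    """Price multiplier that climbs sharply as blocks fill.

    utilization: fraction of block space consumed (0.0 - 1.0).
    """
    return BASE_COST_PER_BYTE / max(1.0 - utilization, 0.01)

def tx_cost(tx_bytes: int, utilization: float) -> float:
    """RC cost of one transaction at a given network utilization."""
    return tx_bytes * rc_price(utilization)

# Today: blocks nearly empty, a 300-byte transaction is almost free.
print(tx_cost(300, utilization=0.05))  # ~316 RC

# Whale RCs unleashed via pools, blocks 90% full:
print(tx_cost(300, utilization=0.90))  # 3000 RC -- roughly a 10x repricing
```

Under this kind of curve, the cheap transactions we enjoy today exist only because nobody is spending their credits; the moment pools put whale RCs into circulation, the price discovers how scarce block space actually is.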

None of these things are happening because no one is utilizing this chain to its full potential.

It worries me that we are going to hit the next mega-bullrun and be wholly unprepared, just like last time, along with the rest of the cryptosphere. We'll all hit that scaling wall, servers will start melting, and the space will be declared dead again as we trudge into the next bear market. Perhaps that's just the way of things.

https://peakd.com/hive-139531/@blocktrades/misconceptions-about-2nd-layer-apps-part-1

I find the timing of this article by @blocktrades particularly relevant. Essentially, he is describing dapps that exist directly on the blockchain. We need more of those if we are going to test our limits. We should all thank Splinterlands for migrating over here and posting operations on what seems like every block. Nice work!
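
For a sense of what "posting operations on every block" looks like, here is the general shape of a custom_json operation, the op type on-chain dapps like Splinterlands use to write their state transitions into Hive blocks. The id and payload below are hypothetical (loosely styled after Splinterlands); a real broadcast would be signed with a posting key and submitted through a Hive library such as beem or hive-js.

```python
import json

def make_custom_json(account: str, app_id: str, payload: dict) -> list:
    """Build an unsigned custom_json operation for later broadcast."""
    return [
        "custom_json",
        {
            "required_auths": [],                 # no active-key signers
            "required_posting_auths": [account],  # signed with posting key
            "id": app_id,                         # the dapp's namespace
            "json": json.dumps(payload),          # app-defined body
        },
    ]

# Hypothetical Splinterlands-style operation:
op = make_custom_json(
    account="alice",
    app_id="sm_submit_team",
    payload={"trx_id": "abc123", "team_hash": "deadbeef"},
)
print(json.dumps(op, indent=2))
```

Every one of these lands in a block, consumes RCs, and gets replicated by every node, which is exactly the kind of pressure that tests our limits.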

[Image: Masquerade]

We have a lot of "dapps" here that claim to be decentralized, but are they really? Booting up a centralized server that you control and then connecting it to Hive with a thin strand does not make it decentralized. Just because you're piggybacking off of Hive's account security and currency doesn't mean you've created any kind of trustless, permissionless, or borderless system.

Is this a bad thing? Not necessarily. It's probably the only reason this network hasn't melted down from lack of scaling yet. If everything were on chain... 65KB every 3 seconds is not enough space to run it all. That's a fact.
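
The back-of-the-envelope math, assuming the 65,536-byte maximum block size behind that "65KB" figure and one block every 3 seconds:

```python
# Hive's raw throughput ceiling, assuming a 65,536-byte block
# every 3 seconds.
BLOCK_BYTES = 65_536
BLOCK_INTERVAL_S = 3

blocks_per_day = 24 * 60 * 60 // BLOCK_INTERVAL_S   # 28,800 blocks
bytes_per_day = BLOCK_BYTES * blocks_per_day        # 1,887,436,800 bytes

print(f"{BLOCK_BYTES / BLOCK_INTERVAL_S / 1024:.1f} KiB/s")  # 21.3 KiB/s
print(f"{bytes_per_day / 1e9:.2f} GB/day")                   # 1.89 GB/day
```

About 21 KiB/s for the entire network: only a few times the 56k modem speeds mentioned at the top of this post.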

Hive is not good at scaling just because it scales better than Bitcoin and Ethereum.

The implication of distributed ledger technology is that it's an inefficient database whose data is copied and verified thousands of times across the globe to ensure consensus and trust in the network. Hive is no different.

Imagine we have 10k daily users (we do). That's fine. Now imagine 100k, 1M, and 10M. Did you know Myspace still has over 10M users? Yeah, we can't even scale as large as laughable Myspace. Think about how difficult it would be to scale up huge servers when those huge servers have to be copied 1000 times across the world by every other person running a node. Truly, we have to carefully pick and choose our level of decentralization to remain relevant in the space.
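
To put numbers on that, here's a rough capacity estimate. The per-operation size and activity rate are my own illustrative assumptions, not measured figures:

```python
# How many fully on-chain daily users fit in Hive's block space?
# Assumed (illustrative): ~300 bytes per operation, 10 ops/user/day.
BYTES_PER_DAY = 65_536 * 28_800   # ~1.89 GB of block space per day
OP_BYTES = 300
OPS_PER_USER_PER_DAY = 10

max_users = BYTES_PER_DAY // (OP_BYTES * OPS_PER_USER_PER_DAY)
print(f"{max_users:,} fully on-chain daily users")  # 629,145

# Myspace-scale (10M users) would need ~16x the block space,
# and every node would have to store and verify all of it.
needed = 10_000_000 * OP_BYTES * OPS_PER_USER_PER_DAY
print(f"{needed / BYTES_PER_DAY:.0f}x current capacity required")  # 16x
```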

Conclusion

It seems like a silly thing to complain about: failing to fail. However, some of the greatest lessons are taught in the wake of failure. Better to learn those lessons sooner than in the middle of a mega-bullrun when shit is really hitting the fan. There's not much I can do about it except continue trying to develop actual decentralized dapps with provable ownership and the inevitable inefficiency that comes with it. If anything, it would be nice if more people were aware of this "non-issue". In the end we'll likely just have to wait around until we have enough devs to really test the limits of this place.



Last updated on 29 Aug 2020.