Blockstack community thoughts on block size / Bitcoin XT debate

What are everyone’s views on the Bitcoin block size debate? Specifically, how does block size affect projects like Blockstore & Blockchain ID?

My general opinion on the matter is “data or STFU.” The motivation for this debate is to help Bitcoin scale up in both transaction rate and transaction volume. Will making the blocks bigger actually address this in a consequence-free manner, or will it just push the problem down the road, resulting in more blockchain bloat and fewer full nodes along the way? I don’t know; as far as I know, there’s no empirical data on how well an XT network performs compared to a Bitcoin network under similar conditions. Until I see data, I’m calling BS.

Part of the reason I’m skeptical is that distributed systems typically scale by keeping each node’s resource requirements sub-linear in the size of the system. For example, DHT nodes do not store every single piece of data, or even routes to every single node; they each store O(k/n) data and O(log n) routes for n nodes and k records. As another example, Internet routers do not store routes to every single publicly-routable host; they aggregate routes by prefix and send packets along the interface with the longest IP prefix match (leading to O(log n) expected routing hops for n hosts, and O(1) memory per router).
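To make that concrete, here is a toy consistent-hashing sketch of the O(k/n) storage property; all node and record names are made up, and this illustrates the general DHT technique rather than any particular implementation:

```python
import hashlib
from bisect import bisect_right

def ring_hash(value: str) -> int:
    # Map a string onto a 160-bit identifier ring, Chord-style.
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

nodes = sorted((f"node-{i}" for i in range(8)), key=ring_hash)   # n = 8 nodes
ring = [ring_hash(n) for n in nodes]
records = [f"record-{i}" for i in range(10_000)]                 # k = 10,000 records

def owner(key: str) -> str:
    # Each record lives on the first node clockwise of its hash on the ring.
    return nodes[bisect_right(ring, ring_hash(key)) % len(nodes)]

load = {n: 0 for n in nodes}
for rec in records:
    load[owner(rec)] += 1

# Each node holds on the order of k/n = 1,250 records, never all 10,000.
# (Real DHTs hash each node to many virtual points to smooth out the skew.)
print(load)
```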

This does not appear to be the case with Bitcoin XT. Bitcoin XT might be able to do better than Bitcoin by a constant factor, but the XT approach is like trying to scale a small computer network by buying a bigger switch and making everyone’s ARP table bigger. The system might bear a bit more load for a proportional resource investment (and probably with unintended consequences), but it won’t solve the underlying scalability limitation (feature?) that everyone still needs to mine the same blocks, no matter how big they get or how often they are added.

So, I need to see some extraordinary results from the XT proposal before I believe that it solves the scalability problem, especially if the proposed solution is to do better by a constant factor.

EDIT: clarity


To answer your other question, Bitcoin vs Bitcoin XT has no bearing on Blockstore and Blockchain ID. The only thing we need is the 40-byte OP_RETURN payload.
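For the curious, here’s a minimal sketch of what a 40-byte OP_RETURN data output looks like at the script level; the payload below is hypothetical, and Blockstore’s real encoding is defined by its own wire format:

```python
OP_RETURN = 0x6a  # Bitcoin script opcode for a provably unspendable data output

def op_return_script(payload: bytes) -> bytes:
    # Standard relay policy in 2015 capped OP_RETURN payloads at 40 bytes.
    if len(payload) > 40:
        raise ValueError("payload exceeds the 40-byte standardness limit")
    # Pushes of 1-75 bytes are encoded as a single length byte before the data.
    return bytes([OP_RETURN, len(payload)]) + payload

# Hypothetical 40-byte payload, zero-padded purely for illustration.
script = op_return_script(b"id:alice.example".ljust(40, b"\x00"))
print(script.hex(), len(script))  # 42 bytes: OP_RETURN + length byte + 40 data bytes
```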


Assuming blocks reach the current 1 MB limit, does block size affect the cost of registering and/or updating a name?

Because of Bitcoin XT I decided to actually spend some time learning about the block size debate. What I learned made me spend the past day and a half railing against it online.

TL;DR: Stay away from Bitcoin XT. It is very bad for Bitcoin.


Would another analogy be to say that increasing the block size is like trying to scale a small computer network, in which each host’s data usage is increasing and the number of hosts is growing, by swapping out an 8-port 100-megabit hub for a 24-port gigabit hub? (Recall that a hub, unlike a switch, relays the same data to every port, much like the whole network mines the same blocks.)

I’m of the opinion that network vulnerabilities/attacks are one of the least exploited vectors in Bitcoin currently. Increasing the block size is something that needs to be done very, very carefully. With very large blocks, suddenly it’s not just your hashing power that matters; your network bandwidth/latency starts to matter a lot as well. Unlike hashing power, a network link is much harder to upgrade, which puts certain miners/parties at a significant advantage.

I’m all for scalability planning, but that should imply lots of experimentation and technical discussion. The XT fork and the current proposal for a block size increase seem aggressive and untested to me.

That said, the XT fork brings up something even more interesting and important than the block size debate: the governance of Bitcoin development. The fact that XT posts were getting deleted by r/bitcoin moderators is simply unacceptable. There is also a clear need for a better structure/process for deciding how the Bitcoin protocol/software evolves.


“It would lead to a centralization of the network due to its exponentially increasing block size limit.”

Speculation. Also, the limit doubles every two years, which is consistent with trends in technological progress.

Those claims are false and have already been disproven.

There is no wrong or right way; forking is an established means of choosing “exit” over “voice” when participants feel that “voice” is no longer constructive.

Fallacious arguments from authority.

That’s one perspective; the other is that Core development has stalled as a result of analysis paralysis and the inability to come to full consensus on this issue. The issue has become more ideological than technical at this point - coming to a full consensus on any ideological issue within a diverse group is practically impossible.

As for whether BIP 101 / Bitcoin XT solves any scaling issues within Bitcoin: it does not. But neither will any of the actual scaling proposals work effectively without larger blocks. I outlined why this is the case in this post.


Not really true for network bandwidth or for memory; largely true for computing and storage. A larger block size depends more on network/memory than on disk/computing.

I agree with you on this. The debate turning political instead of technical is also a big concern.

People have done simulations of this, but it’s hard to see what will really happen until it’s tested in the wild.

I think it’s natural to conclude that the number of full nodes will decrease, but we might just have to live with that. The alternative to increasing the block size is to hook the blockchain up to off-chain systems (like the Lightning Network or sidechains), which would arguably lead to more centralization: fewer people would be directly on the Bitcoin blockchain, fees would be very high due to demand for scarce block space, and only transaction aggregators could afford such an expensive, low-bandwidth system, which makes it a clearing system only.

Yes, we’re in a bit of a predicament with the blockchain because everyone has to have a copy of the data :confused:

For data notarization (what we’re doing with Blockchain ID) we do have the ability to merkle-tree pack operations, but this leads to its own set of centralization concerns.
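To illustrate, here is a minimal merkle-root sketch using hypothetical operation names (not our actual wire format): thousands of operations reduce to one 32-byte root that fits in a single 40-byte OP_RETURN payload, at the cost of someone acting as the trusted batch aggregator.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash the leaves, then pair-and-hash upward, duplicating the last
    # node whenever a level has an odd number of entries.
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# 1,000 hypothetical name operations notarized by one 32-byte root, which
# fits in a single 40-byte OP_RETURN payload instead of 1,000 transactions.
ops = [f"register:user{i}.id".encode() for i in range(1000)]
print(merkle_root(ops).hex())
```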

I don’t think Gavin or Mike purport that the 8 MB XT modification is meant to solve the scalability problem once and for all, but rather that it is meant to at least play a part in the solution. Let’s be honest: we’re going to need other solutions in addition to any block size increase that we go with. We probably will need to use some off-chain systems like Lightning or sidechains. But we should at least do our best to keep the price of transactions down by increasing the supply of blockchain space.

Right now, if we assume that every block is filled to the 1 MB brim and break that down to a mean of 2,000 transactions per block at 500 bytes each, then at 144 blocks per day the blockchain can handle 288,000 transactions per day and about 105 million per year.

If 100% of transactions were used to register blockchain IDs, and each user required on average 4 transactions to be onboarded and 2 transactions per year to maintain their account, then we’d be able to maintain a maximum user base of about 50 million people.

Now, assuming we can constitute 25% of transactions, that leads us to a maximum of 12.5 million users. A block size increase of 8x would allow us to reach 100M, while 80x over a period of 10 years would allow us to hit 1B users.
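For reference, here is the same back-of-envelope math as a sketch; every parameter is just an assumption stated above:

```python
# Capacity estimate under the assumptions above: 1 MB blocks full of
# 500-byte transactions, one block every ~10 minutes.
BLOCK_SIZE = 1_000_000        # bytes
TX_SIZE = 500                 # bytes, assumed mean
BLOCKS_PER_DAY = 144          # 24 hours * 6 blocks/hour

tx_per_year = (BLOCK_SIZE // TX_SIZE) * BLOCKS_PER_DAY * 365  # ~105.1M

def max_users(tx_share, block_scale=1, tx_per_user_per_year=2):
    # Steady-state user base if `tx_share` of all transactions maintain IDs.
    return tx_per_year * block_scale * tx_share / tx_per_user_per_year

print(max_users(1.00))                   # ~52.6M: "about 50M" after onboarding costs
print(max_users(0.25))                   # ~13.1M: roughly the 12.5M figure above
print(max_users(0.25, block_scale=8))    # ~105M with 8x blocks
print(max_users(0.25, block_scale=80))   # ~1.05B with 80x blocks
```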


Hi Larry,

Would another analogy be to say that increasing the block size is like trying to scale a small computer network, in which each host’s data usage is increasing and the number of hosts is growing, by swapping out an 8-port 100-megabit hub for a 24-port gigabit hub? (Recall that a hub, unlike a switch, relays the same data to every port, much like the whole network mines the same blocks.)

Yeah, a hub would be a closer analogy. But in both cases, each host’s ARP table needs to grow to accommodate each other host (growing linearly with the number of hosts), and the network capacity needs to grow to accommodate O(n^2) traffic for n hosts in the form of ARP requests/replies. There’s a reason the Internet does not try to behave like a big virtual hub or a big virtual switch :smile:

But the point I was trying to make is that the XT proposal does not try to address this limitation at all. The factor that limits a node’s ability to mine blocks is network bandwidth: to mine block B, the node must not only have fetched all B-1 prior blocks, but must also gather enough candidate transactions for B before it has a chance of solving B’s crypto-puzzle. The transaction volume while B is being mined puts a lower bound on the amount of bandwidth the node must dedicate to the Bitcoin network at that time; if a node can’t meet this lower bound, it can’t mine B. Making B bigger only raises the upper bound on the transaction volume while B is mined. It does nothing to lower the lower bound, and it can have the unintended side-effect of allowing the transaction volume for B to grow beyond what existing nodes’ network links can handle.
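To put rough numbers on that lower bound, here is a toy calculation; it ignores relay overhead and fan-out to multiple peers, both of which multiply the real requirement:

```python
# Minimum sustained bandwidth needed just to keep up with block production,
# assuming one full block's worth of transactions arrives per interval.
BLOCK_INTERVAL_SECONDS = 600  # ~10 minutes

def bandwidth_floor_kb_per_s(block_size_mb):
    # bytes per block / seconds per block -> bytes/s -> KB/s
    return block_size_mb * 1_000_000 / BLOCK_INTERVAL_SECONDS / 1_000

for size_mb in (1, 8, 80):
    print(f"{size_mb:>3} MB blocks -> >= {bandwidth_floor_kb_per_s(size_mb):.1f} KB/s sustained")
# The raw floor looks small (1.7 / 13.3 / 133.3 KB/s), but every transaction
# is also relayed to and from many peers, so real usage is a large multiple.
```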

History has shown that increasing the capacity of a system by a linear factor (e.g. bigger block size, more network bandwidth) rarely has the long-term effect of making the system faster. This is how Wirth’s Law came to be :smile:. I have a sinking feeling that even if everyone switched to bigger blocksizes and upgraded their bandwidth to accommodate them, we would be back to square one in no time, since we’d just find some other use for any resulting extra capacity.


Yes; the cost of registering and updating would be pegged to the cost of a transaction on whatever blockchain the system uses. I was pointing out that the correctness of the naming and storage protocols is unaffected (provided we get 40 bytes per transaction to play with).

Agreed. Just read the post you linked to - https://medium.com/@lopp/de-centralized-block-chain-scaling-268dc5c3a7d0 - and it makes a lot of sense.

Agreed; bandwidth growth, I believe, is more like 17% per year. The important thing to note is that you can’t predict the future with an algorithm; however, a hard cap that is slightly higher than optimal is better than a hard cap that is far lower than market demand, because a hard cap can easily be decreased with a soft fork, or simply by miners voluntarily lowering their configurable soft cap.
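For a sense of scale, here is a toy compounding comparison of the two growth rates mentioned in this thread (a cap doubling every two years versus the ~17%/yr bandwidth figure):

```python
# Compare a cap that doubles every two years (~41.4%/yr) with ~17%/yr
# bandwidth growth, and see how fast the two curves diverge.
cap_growth = 2 ** 0.5        # per-year factor for doubling every 2 years
bandwidth_growth = 1.17      # per-year factor

for years in (2, 6, 10, 20):
    gap = (cap_growth / bandwidth_growth) ** years
    print(f"after {years:>2} years, the cap has outgrown bandwidth by {gap:.1f}x")
# After 20 years the gap is ~44x, which is why the ability to lower the cap
# later (via soft fork or miner soft limits) matters in this argument.
```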

What a block size increase means for Bitcoin:

  • potentially lower fees for individual transactions in the short term (but no guarantee of this)
  • potentially more transactions per block (also no guarantee; miners can always impose a soft limit)
  • potentially greater centralization of mining around better-connected, more competitive miners
  • potentially more difficulty running a full node due to increased resource requirements

I, for one, look forward to seeing more transactions fit into each block. This will be great for everyone using Bitcoin, regardless of use-case, including those registering blockchain IDs. I therefore support a reasonable block size increase, i.e. one that does not dramatically hurt node and miner decentralization. I personally prefer Pieter Wuille’s proposal, but would also support Jeff Garzik’s BIP 102 proposal (though Jeff’s is only a one-time increase, and I think the block size should continually increase with global network bandwidth capacity). Bitcoin XT I’m not so sold on, but I welcome the debate that their proposal has brought to the fore.

[quote=“lopp, post:8, topic:152”]“It would lead to a centralization of the network due to its exponentially increasing block size limit.”

Speculation.[/quote]

That is a fact, not speculation. To deny that is to be disingenuous, and it may be one of the reasons why posts in support of XT are being censored from /r/Bitcoin (not that I think they’re right in censoring them, but I understand why they feel like they have to when you repeatedly make bs statements like this).

The only way it is speculation is if you are assuming we are entering a world where everyone has precisely equal bandwidth.

Also, the limit doubles every two years, which is consistent with trends in technological progress.

Again, misleading. Doubling every two years is consistent only up to a point, and then it stops.

There is no wrong or right way

Yes there is. This is computer science. There are right and wrong answers.

Fallacious arguments from authority.

There’s nothing fallacious about their reasoning, and if you know anything about their background, you’d know that they’ve not only demonstrated a superior understanding of the subject, but that we wouldn’t be having this conversation if it weren’t for them.

The issue has become more ideological than technical at this point - coming to a full consensus on any ideological issue within a diverse group is practically impossible.

That is a completely separate issue from the fact that Bitcoin XT is harmful garbage.

Takeaway

  1. No one is against increasing the block size. So stop implying they are.
  2. People are against XT and other non-answers to the scaling question. [1] [2]

You seem to be assuming that only current node operators continue to operate nodes while those with the least bandwidth drop out. It seems to me that by supporting more users on-blockchain we’ll have more enterprises and enthusiasts who will join the system and run nodes.

Unfortunately it’s not pure computer science. This particular issue is highly ideological. Even recent posts on the development mailing list agree on that. http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010463.html


That’s a mixture of truth and misleading statements. It is precisely that kind of dialogue that trips so many people up and gets them in trouble.

The only way to deal with it is to take it apart piece by piece:

  • “You seem to be assuming that only current node operators continue to operate nodes”

No.

  • “while those with the least bandwidth drop out.”

Yes.

  • “we’ll have more enterprises”

The percentage of enterprises will likely increase relative to enthusiasts, because only large for-profit enterprises will be able to handle the bandwidth.

That does not mean we will have more enterprises.

  • “and enthusiasts who will join the system and run nodes.”

No. Enthusiasts will not be able to handle the bandwidth. Period.

Yes, this is ideological in the sense of whether you are for or against decentralization.

Once you choose between PayPal and Bitcoin, the politics stop and the computer science is all that remains relevant.

Looks like I missed that comment. Citation required. I provided mine.

My well-connected nodes with over 100 peers use < 10 KB/s downstream and < 150 KB/s upstream on average: https://statoshi.info/dashboard/db/bandwidth-usage

I could easily handle an order of magnitude increase in bandwidth requirements without breaking a sweat on my current residential connection. And next month my residential connection will increase its speeds by an order of magnitude while costing me an extra $10 per month. So I don’t think that the extreme claim that all enthusiasts will be priced out of running nodes holds true.

I’ve posed the question on the dev list in the past: “what is the consensus for an appropriate minimum specification for running a node?” But this doesn’t seem to be a topic that many people are interested in exploring, yet it is often used as an argument for keeping the block size limit at 1 MB.

Small block proponents may think that larger blocks will centralize the network by pricing out individuals from running nodes. Large block proponents may think that larger blocks will decentralize the ecosystem by increasing the number of participants and thus the number of nodes. I think that if demand continues to increase while block sizes remain static, it will centralize the system by forcing more users to transact off-chain through trusted parties. But all of those positions are speculative.

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010384.html

The claims are invalid; the code doesn’t run in that state.