Framework for Stacks Scalability

Recently, I’ve seen questions about Stacks scalability and potential optimizations to network capacity. The scalability properties and design tradeoffs of Stacks are unique, and it’s worth understanding them to see how the various pieces fit together.

Let’s jump in!

First, there are two categories of design decisions any blockchain network needs to consider:

  1. Hardware and cost assumptions for nodes: it makes a big difference whether a network is designed for average users, i.e., people with average network bandwidth and off-the-shelf computers (like laptops), or for data-center nodes.

  2. Consensus algorithms: different consensus algorithms have different tradeoffs. Whenever you gain something (e.g., instant finality), you gain it at the cost of something else (e.g., the ability to route around failures).


For hardware requirements and the cost of running a node, it’s really a decentralization question. You can optimize for (a) average users with laptops, (b) higher-cost hardware that users can still run at home, or (c) data centers only (no home nodes are possible).

An example of (c) is Dfinity, where you can’t be a miner unless you operate a data-center node with bandwidth connections that meet minimum criteria (which home nodes cannot meet). An example of (b) is Solana, where the hardware requirements are quite costly, and they’re already moving towards category (c).

Stacks is designed for category (a) i.e., any user can run both a Stacks miner and a Stacks full-node simply on a normal laptop at home.

This is a decentralization decision.

Stacks is designed to maximize decentralization (like Bitcoin). That does not mean that faster transactions or higher throughput cannot happen with Stacks, but they’ll likely come in the form of subnets (explained below) or app-chains (another scalability proposal).

The general model here is to scale in layers around Bitcoin. Stacks itself can be thought of as a smart contract layer for Bitcoin, and Stacks can then have subnets that make different decentralization/throughput tradeoffs than the Stacks main chain.


In terms of consensus algorithms, there are a few tradeoffs that impact throughput:

Closed membership vs. open membership: If the group of miners allowed to mine is closed, meaning that nodes cannot freely join or leave at will, then you can get much higher throughput. Examples of this type of consensus are Dfinity and EOS. You score very low on decentralization but can process blocks faster (because you already know who the miners are, and the system operates more like a federation of nodes, a well-studied area in distributed systems).

Stacks is an open-membership system (like Bitcoin). Anyone can become a miner at any Stacks block. For subnets you can relax this requirement a bit (e.g., miners can be selected in advance for a series of blocks) to get higher throughput. The important thing to understand is that, to maintain decentralization, it’s critical that the Stacks main chain remains open membership.

Instant finality vs. Nakamoto consensus: Some newer L1 PoS chains offer instant finality. The key thing to understand about instant finality is that you get it at the expense of the ability to route around failures (among other things), which in the end is a decentralization concern.

The Stacks main chain follows a Nakamoto-style consensus, meaning that, similar to Bitcoin, independent miners can fork around any catastrophic failure without requiring hard forks. Subnets can pick different tradeoffs, but it is critical that the Stacks main chain is able to automatically route around failures without manual intervention for hard forks.

Current Network Capacity:

With this general framework in mind, let’s look at the network capacity of the Stacks main chain.

Just from the choice to optimize for average nodes (like laptops), we know that the Stacks main chain is not a “datacenter chain”. Throughput for the Stacks main chain is roughly going to fall somewhere between Bitcoin and Ethereum. Ethereum often gets criticized for its reliance on Infura; most ETH 1.0 nodes run in the cloud. People can try this on their own: if you run geth on your home network, it’ll eat all your available bandwidth. The lesson from Ethereum is that you want explicit scalability solutions instead of putting so much load on the main chain that it can no longer be run by average home nodes.

Stacks main chain can theoretically do approx 1.67M simple transfer operations (for STX) in a day. These STX transfers are highly optimized and fairly simple from a compute cost perspective, so they can serve as an indication of current theoretical max throughput for simple operations (main chain).

For comparison, Ethereum today processes approx 1.2M transactions per day. However, these include contract calls, which are more complex than simple transfers.
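To put these daily figures in perspective, here’s a back-of-the-envelope conversion to per-second and per-block rates. The blocks-per-day figure is my own assumption (roughly one Stacks block per Bitcoin block, ~144/day); only the two daily totals come from the numbers above.

```python
# Back-of-the-envelope throughput numbers from the figures above.
# Assumption (mine, not from the post): Stacks produces roughly one block
# per Bitcoin block, i.e. ~144 blocks per day.

SECONDS_PER_DAY = 24 * 60 * 60          # 86,400
BLOCKS_PER_DAY = 144                    # ~one Stacks block per Bitcoin block

stacks_transfers_per_day = 1_670_000    # theoretical max simple STX transfers
eth_txs_per_day = 1_200_000             # Ethereum's approximate daily tx count

stacks_tps = stacks_transfers_per_day / SECONDS_PER_DAY
eth_tps = eth_txs_per_day / SECONDS_PER_DAY
transfers_per_block = stacks_transfers_per_day / BLOCKS_PER_DAY

print(f"Stacks: ~{stacks_tps:.1f} simple transfers/sec, "
      f"~{transfers_per_block:,.0f} per block")
print(f"Ethereum: ~{eth_tps:.1f} txs/sec (incl. contract calls)")
```

Per second, that works out to roughly 19 simple STX transfers vs. roughly 14 Ethereum transactions, with the caveat that the two numbers measure different kinds of work.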

To understand the real throughput of the Stacks network, you need to think in terms of capacity bottlenecks. There are two limits that you typically hit:

  1. Runtime costs i.e., compute costs.
  2. I/O limits i.e., reading and writing data.

Stacks developers analyzed current network traffic: approx 68% of recent traffic was bound by compute cost and 31% by I/O. The key thing here is to not think of the system’s capacity in terms of transaction counts, since a transaction’s size has little to do with the compute resources it uses. A contract call can easily eat a double-digit percentage of one of the block’s runtime dimensions (think of it like a 100-kilobyte Bitcoin transaction). The important point: if such transactions pay for those resources at a higher metered rate than anyone else, it makes sense that they get mined first.
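The idea that a block is “full” as soon as any one resource dimension is exhausted, regardless of how many transactions it holds, can be sketched as follows. The dimension names and limits here are made up for illustration; they are not the actual Stacks cost parameters.

```python
# Illustrative sketch of multidimensional block limits: a block fills up as
# soon as ANY one resource dimension is exhausted, so capacity is not a
# simple transaction count. Dimension names and numbers are invented for
# illustration; they are not the real Stacks cost parameters.

BLOCK_LIMITS = {"runtime": 100, "read_io": 100, "write_io": 100}

def fill_block(mempool):
    """Greedily add transactions until some dimension would overflow."""
    used = {dim: 0 for dim in BLOCK_LIMITS}
    block = []
    for tx in mempool:
        if all(used[d] + tx["cost"].get(d, 0) <= BLOCK_LIMITS[d]
               for d in BLOCK_LIMITS):
            for d in BLOCK_LIMITS:
                used[d] += tx["cost"].get(d, 0)
            block.append(tx["id"])
    return block, used

# One heavy contract call eats 40% of the runtime budget on its own,
# crowding out many cheap transfers even though I/O stays mostly empty.
mempool = [{"id": "heavy-call", "cost": {"runtime": 40, "read_io": 5}}] + [
    {"id": f"transfer-{i}", "cost": {"runtime": 2, "write_io": 1}}
    for i in range(50)
]
block, used = fill_block(mempool)
print(len(block), used)   # runtime hits its limit long before read/write I/O
```

In this toy run, only 31 of 51 transactions fit: the runtime dimension is exhausted while the I/O dimensions are mostly idle, which is exactly why “transactions per block” is a misleading capacity metric.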

Network Growth:

Since the launch of the Stacks mainnet in January 2021 by independent miners, the Stacks community has seen great growth: there’s dozens of startups going through the Stacks Accelerator, devs are tinkering with all parts of the technology, and new people join our community every day. While this growth is obviously a good thing, the additional traffic is also pressure testing parts of the underlying infrastructure that we all use and independent miners operate, and it is highlighting a number of areas in which improvements are needed.

There are three categories of improvements:

  1. Minor updates
  2. Consensus-breaking changes
  3. Scalability layers

Minor updates are non-consensus-breaking, meaning the rules of the system do not change, but software upgrades can still improve various things. Such upgrades are part of the normal lifecycle of any software. Stacks is open-source software: anyone can make minor improvements, and independent miners and full-node operators can upgrade their software. Any previous version of the software remains compatible with the rules of the system and continues to work on the network.

Consensus-breaking changes can only be deployed by miners following the SIP process. During consensus-breaking changes, new rules can be introduced for the system or previous rules can be modified. Deploying consensus-breaking changes is a well-understood concept in blockchains and Stacks roughly follows the Bitcoin philosophy i.e., deploying consensus-breaking changes is very hard and a high threshold of miners need to support such major upgrades.

Scalability layers are “add-ons” that don’t impact the Stacks main chain. They should be thought of as optional additional features that anyone can deploy and use, and they typically require no input from miners. Both the app-chain and subnet proposals are scalability layers, meaning they don’t touch consensus and are optional add-ons that anyone can deploy and use as they see fit.

Potential Improvements in the Coming Months:

For any scalability improvements, everyone needs to understand that Stacks is a fully decentralized ecosystem. Miners decide what upgrade (minor or major) they want to deploy or not. Hiro for example is not a miner and does not even know who the miners on the network are. So the open-source community will need to self-organize, following the processes laid out in SIPs, to help improve things that they care more about.

My lens on potential improvements is to divide them into the three categories laid out above:

  1. In the coming weeks, only minor upgrades (non-consensus breaking) are possible, which can be quite effective nonetheless. For example, new cost estimates can go live through a cost vote (that is allowed by current consensus rules).

  2. In the 3-month timeline, Stacks 2.1 release could help with a bunch of these.

  3. In the 6-month timeline, subnets or app chains could provide scalability; these are the more sustainable long-term scalability options anyway.

The Stacks ecosystem is fully open source, and anyone can help with any improvement they care more about. We’re operating in a fully decentralized landscape and it’s important to highlight ways in which this ecosystem is different from others and can be more resilient long-term through decentralization, similar to Bitcoin.

With that context in mind, let’s look at potential things in the short-term:

Things that can help:

  1. Smart contract runtime and compute costs are currently treated as if they will cost significantly more than they actually do. This significantly limits capacity for the Stacks chain but is fixable. There can be a cost vote, as outlined in SIP-006, to implement potential improvements before Stacks 2.1. The cost-vote process would require participation by Stacks holders.

  2. From what I’ve seen, the coming Stacks 2.1 release could help with a ton of improvements. Discussion is linked here.

  3. Subnets or app-chains could be implemented in the next ~6 months, allowing more optionality and features for end-users and developers alike. I’ll share more details on my thoughts about subnets in the coming days, but here’s a quick summary of the idea to start. As mentioned, the Stacks main chain is optimized for decentralization and independent verification. Anyone can independently verify that it’s the correct version, and anyone can run a node. This is an intentional design decision, as you want average laptop nodes to be able to connect, although a tradeoff is that your effective bandwidth on the main chain is going to be relatively small. A healthy fee market will emerge for the main chain (which also means higher Bitcoin rewards for Stackers!). Block space on the main chain will always be scarce and expensive (like Bitcoin), but you can scale around it.

Subnets can be thought of as extensions of the core Stacks main chain. A subnet can score lower on decentralization, but in exchange it can score very high on transaction speed with low costs. Combine that with the main chain, and you get the best of both worlds.

As we think through and implement solutions, it’s also important to keep in mind that the Stacks ecosystem follows an open decentralized process for implementing gradual changes (like Bitcoin) vs. overnight changes dictated by a central authority.

Developers launching their apps and contracts in the coming weeks can:

  1. Use better benchmarking tools to get a sense of costs on the network, and set expectations (e.g., better default gas fees) with their users.
  2. Monitor the cost-voting process to see if launching after better cost estimates go live makes more sense. Cost voting does not need to wait for Stacks 2.1.
  3. Follow the subnets proposal and upcoming development to start thinking about how their app can evolve in the 6 months+ window with solutions like subnets or app chains.

Thanks for taking the time to read and I look forward to what you all have to add to this!

– Muneeb | Stacks founder


Thanks, Muneeb.

I posted an update at Mempool congestion on Stacks: observations and next steps from Hiro - #10 by diwaker, cross-posting relevant bits here for context:

First, quick summary of the work items mentioned earlier in this thread:

  • For miners: a release was tagged 6 days ago. A release build is under review; I’m optimistic that we’ll have a release available shortly. Worth repeating, though, that it’s ultimately up to the miners to upgrade their software.
  • RBF support in Hiro Wallet / Connect etc: Covered in ample detail by Mark earlier in this thread
  • For exchanges: Or anyone else using the send-many contract + CLI, v1.3.0 of the CLI now supports a fee multiplier. It already supported specifying a nonce, and together this enables one to RBF their existing send-many transactions.
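The fee-multiplier + nonce combination mentioned above is what makes replace-by-fee (RBF) possible: a replacement transaction reuses the same nonce as the stuck one but pays a higher fee. The sketch below illustrates that mechanic only; the function names and fields are hypothetical, not the actual CLI internals.

```python
# Hypothetical sketch of replace-by-fee (RBF) via a fee multiplier: the
# replacement transaction reuses the SAME nonce as the stuck transaction but
# pays a scaled-up fee, so miners prefer it over the original. Names and
# fields here are illustrative; this is not the actual stacks CLI code.

def rbf_fee(original_fee_ustx: int, multiplier: float) -> int:
    """Fee for the replacement tx: original fee scaled up, floor-truncated."""
    bumped = int(original_fee_ustx * multiplier)
    # A replacement must pay strictly more than the original to win.
    return max(bumped, original_fee_ustx + 1)

def build_replacement(stuck_tx: dict, multiplier: float) -> dict:
    """Same nonce and payload as the stuck tx, but with a bumped fee."""
    return {
        "nonce": stuck_tx["nonce"],     # reusing the nonce replaces, not queues
        "payload": stuck_tx["payload"],
        "fee": rbf_fee(stuck_tx["fee"], multiplier),
    }

stuck = {"nonce": 7, "payload": "send-many call (illustrative)", "fee": 1_000}
replacement = build_replacement(stuck, multiplier=2.0)
print(replacement)   # same nonce 7, fee bumped to 2000
```

The key design point is the nonce: a transaction with a fresh nonce would queue behind the stuck one, while reusing the nonce lets the higher-fee copy supersede it.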

Obviously, performance and reliability at the blockchain layer directly affect developer satisfaction, so I want to take a moment to talk about what Hiro will be focusing on. We can think of possible improvements / changes / new capabilities on the Stacks blockchain in the short (4-6 weeks), medium (1-2 quarters), and long term (6-12 months).

In the short-term, Hiro is focusing our efforts on three workstreams:

  • Facilitate a healthy fee market: As I mentioned earlier, the lack of a robust and dynamic fee market on Stacks makes the chain less resilient. An expensive but popular contract call can make the entire network crawl. We have already started implementing our proposal for improving cost estimation and mempool processing. While this won’t fundamentally change the underlying blockchain capacity, it will allow for more natural throttling and traffic control to emerge via transaction fees.
  • Profile, Benchmark, Optimize: We’d like to dramatically improve the visibility into the current performance of the Stacks blockchain – exactly where time is being spent (e.g. block assembly, block propagation, MARF I/O etc). We’re also going to run some rigorous benchmarks to understand the limits of the current implementation under different workloads (e.g. how many calls to a specific contract can be packaged in a block). This profiling and benchmarking will shed light on the biggest bottlenecks in the current codebase, which we can then work on improving. We’re also working on integrating some of this benchmarking within Clarinet, so smart-contract developers have more visibility and insight into the expected costs and performance of their Clarity contracts.
  • Explore feasibility of Clarity cost-voting as a non-consensus-breaking solution: Changing block limits and runtime costs would normally be a consensus-breaking change (necessitating a hard fork). Clarity supports in-situ upgrades, which allow changing runtime costs through an on-chain vote. Hiro is currently validating this on the testnet and exploring the general feasibility of this approach.
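The “healthy fee market” workstream above boils down to ranking transactions by fee paid per unit of resources consumed, rather than by raw fee, so an expensive contract call has to pay proportionally more to jump the queue. Here’s a minimal sketch of that ordering; the numbers and field names are invented for illustration and are not the actual mempool implementation.

```python
# Illustrative sketch of fee-rate ordering in a mempool: transactions are
# ranked by fee per unit of (scarce) block resources they consume, not by
# absolute fee. Numbers and field names are made up for illustration; this
# is not the actual Stacks mempool code.

def fee_rate(tx: dict) -> float:
    """Fee paid per unit of block resources the transaction consumes."""
    return tx["fee"] / tx["resource_units"]

mempool = [
    {"id": "big-contract-call", "fee": 500, "resource_units": 40},
    {"id": "simple-transfer",   "fee": 50,  "resource_units": 1},
    {"id": "cheap-call",        "fee": 120, "resource_units": 10},
]

# Highest fee rate first: the simple transfer (50 per unit) outranks the
# big contract call (12.5 per unit) despite its much smaller absolute fee.
ordered = sorted(mempool, key=fee_rate, reverse=True)
print([tx["id"] for tx in ordered])
```

Under this ordering, an expensive-but-popular contract call can no longer crowd out everything else for free: it gets mined first only if it actually outbids the competition per unit of block space it consumes.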

There’s also amazing ongoing discussion around topics like nonces, block capacity, performance, microblocks etc on the community Discord server. Join us there!


I have some general scalability questions that maybe Hiro or enlightened community members could help answer. I apologize if this is not the right communication channel; it seemed @muneeb would prefer longer questions on the forum as opposed to Discord…

My understanding is that decentralization is a spectrum and the result of sometimes subjective design decisions. One of the key value propositions I took away from the Stacks whitepaper is that, since the history of Stacks transactions is ultimately hashed into a Bitcoin transaction, Stacks can scale independently as a network while delegating decentralized settlement to Bitcoin. In what ways is this unique design characteristic being leveraged to scale Stacks? It seems that Stacks has chosen to pursue a multitude of decentralized design decisions, but in doing so inherits all of the scalability problems therein (and I would argue scalability is more important than decentralization for a smart contract platform when it comes to attracting network growth). Should Bitcoin settlement allow Stacks to scale in a more centralized manner, since we can trust the decentralization and immutability of Bitcoin’s proof of work? My understanding is that one of the biggest issues with centralized chain solutions is that they run the risk of cheap chain reorganizations, but if Stacks settles on Bitcoin, then chain history/reorgs are not an issue? What exactly is the benefit of Stacks’ “Bitcoin settlement” if we still have to pursue all kinds of decentralized approaches that harm our scalability? I know I’m perhaps getting multiple concepts confused and wires crossed here, so I would appreciate input and clarification. Thank you! 🙂


You’re hitting on a good topic. Bitcoin does not have full smart contracts; Stacks does. The Stacks chain/layer is designed to optimize for decentralization itself (like Bitcoin). Building smart contracts for Bitcoin in a more centralized fashion would defeat the purpose, e.g., it could just be a federation of high-powered nodes (like Liquid), but then it’s not an open, permissionless system.

Subnets solve for what you are bringing up. Once you have the Bitcoin base layer (decentralized) and the Stacks smart contract layer (decentralized), you can allow different subnets to pick their own tradeoffs, e.g., a subnet can require all miners to be high-powered data-center nodes (vs. just average laptops), or a subnet can do checkpointing on Bitcoin (vs. having a Nakamoto-style consensus).

In summary, I think what you’re describing makes more sense (at least to me) at the subnet level and not at the Stacks main chain level: you want the smart contract layer itself to be decentralized, independently verifiable, and able to route around failures (like Bitcoin can).


Just throwing this out there: this process lets you vote on how expensive (or cheap!) it is to do contract-call? operations, which are among the most expensive Clarity operations right now.