Stacks block and tenure dimensions: Expectations and discussion

It was really nice to see the Stacks 3.0 upgrade go live earlier today!

I wanted to open up a discussion around congestion in Stacks: how tenures factor into it, and how the Nakamoto upgrade (Stacks 3.0) and the fast blocks advertised with it have shaped user expectations.
Let’s dive into a few key points. I believe the core team has already given this much more thought and a lot of optimizations are already planned. I would love to see more specific information shared with the public, and I am hoping this thread can help kick that off.

Current Block Dimensions and User Expectations

As some of you have noticed, congestion can occur during periods of high demand, and transactions with low fees can remain pending for a long time. We’ve confirmed that blocks can fill up the available budget quickly within a tenure, which then requires waiting for the next Bitcoin block. Users expected that with the faster block arrivals in Stacks 3.0 (every 5-10 seconds), each of these blocks would also have the same maximum size as in Stacks 2.1. However, the tenure dimensions remain similar to previous versions despite the faster arrivals; a tenure typically changes with each new Bitcoin block.

This consistency in block dimensions is intentional. Although block arrival times have changed, we’re still working within a space where we must balance blockchain size growth with ensuring enough room for protocols and high-priority transactions.

Block Dimension Flexibility and Finding the Sweet Spot

The block dimensions in Stacks are somewhat flexible, and the design philosophy has always been to find a “sweet spot”—a balance between reasonable size growth and sufficient transaction capacity. Right now, the dimensions allow protocols to operate without overloading the blockchain’s size. Here’s a breakdown of block dimensions from Stacks 2.1, which serve as the basis for Stacks 3.0:

Dimension              Block limit      Read-only limit
Runtime                5,000,000,000    1,000,000,000
Read count             15,000           30
Read length (bytes)    100,000,000      100,000
Write count            15,000           0
Write length (bytes)   15,000,000       0

If a block hits the limit on any one of these dimensions, it’s considered full. That means in times of high demand, only smaller transactions (such as STX transfers) may fit into the remaining space for a tenure.
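To make the “full on any dimension” rule concrete, here is a minimal Python sketch (purely illustrative; the real accounting lives in stacks-core, and the per-transaction costs below are made up) of a multi-dimensional budget check: a transaction only fits if it fits on every dimension at once.

```python
# Illustrative sketch only: the real accounting lives in stacks-core.
# The limit values are the Stacks 2.1 block limits from the table above.
BLOCK_LIMIT = {
    "runtime": 5_000_000_000,
    "read_count": 15_000,
    "read_length": 100_000_000,
    "write_count": 15_000,
    "write_length": 15_000_000,
}

def fits(remaining, tx_cost):
    """A transaction fits only if it fits on every dimension at once."""
    return all(tx_cost[dim] <= remaining[dim] for dim in remaining)

def consume(remaining, tx_cost):
    """Deduct a mined transaction's cost from every dimension of the budget."""
    for dim in remaining:
        remaining[dim] -= tx_cost[dim]

remaining = dict(BLOCK_LIMIT)

# A hypothetical read-heavy contract call that exhausts read_count on its own.
read_heavy_call = {"runtime": 1_000_000, "read_count": 15_000,
                   "read_length": 5_000_000, "write_count": 10, "write_length": 1_000}
consume(remaining, read_heavy_call)

# A tiny transaction with a single read no longer fits, even though plenty of
# runtime and write budget is left: one exhausted dimension fills the block.
tiny_transfer = {"runtime": 10_000, "read_count": 1, "read_length": 100,
                 "write_count": 1, "write_length": 64}
print(fits(remaining, tiny_transfer))  # False
```

This is why one exhausted dimension (reads, in the data shared further down this thread) can shut out many otherwise cheap transactions for the rest of the tenure.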

Room for Expansion and Technical Limits

This brings up a crucial question: how much room do we have to increase block size if congestion worsens, and do we want to?
Technically, the block dimensions are changeable. However, altering them would need careful consideration to keep Stacks accessible and manageable for those running nodes or mining. There’s also the question of whether changing block dimensions would be a consensus-breaking change, which is something to consider if adjustments are proposed.

When I spoke with Jesse recently, he estimated that transaction volume would need to be around 10 times the current level before the existing dimensions become a bottleneck. That said, a well-functioning blockchain does need a functioning fee market, and block space will always be limited by design. The idea is to allow high-priority transactions to find room, creating a natural market in times of high demand.
It also seems miners are not yet rationing space to optimize their profits, as suggested here: test: testnet with heavy mempool activity · Issue #4538 · stacks-network/stacks-core · GitHub

Next Steps: Tenure Extensions and Block Size Considerations

A planned, non-consensus-breaking change is extending tenure duration when Bitcoin blocks are delayed by more than 10 minutes. This change, which has already been agreed upon to follow the Nakamoto Upgrade, would potentially offer more space when it’s needed most. This kind of “reset” to block space could address some of the waiting issues without increasing block dimensions directly, and I’d like to highlight the importance of prioritizing this now.
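To illustrate the idea (this is not the actual stacks-core logic; the threshold and the timing source are placeholders), the trigger condition for such a tenure extension could look roughly like this:

```python
import time

# Illustrative sketch of the "extend the tenure when Bitcoin is slow" idea.
# The real mechanism is coordinated between miners and signers in stacks-core;
# the 10-minute threshold here is just the number mentioned in this thread.
TENURE_EXTEND_THRESHOLD_SECS = 10 * 60

def should_extend_tenure(tenure_start_time: float, now: float | None = None) -> bool:
    """Return True once the current Bitcoin block has taken longer than the threshold."""
    now = time.time() if now is None else now
    return (now - tenure_start_time) > TENURE_EXTEND_THRESHOLD_SECS

# Example: a tenure whose Bitcoin block started 25 minutes ago and still has no
# successor would qualify for a fresh execution budget.
print(should_extend_tenure(time.time() - 25 * 60))  # True
```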

Let’s open up the discussion:
1. What else of this is already part of the planned optimization roadmap?
2. What flexibility is there when it comes to increasing the block dimensions or tenure dimensions?
3. What technical or economic factors should we consider to maintain a healthy fee market?
4. How do we balance enough block space for protocols building on Stacks with keeping the blockchain accessible?

  • Should the expectations of teams building protocols (e.g. ALEX) for user growth and transactions per user factor in somehow if block dimensions are changed?

5. What are the alternatives for scaling without increasing block dimensions?

  • Alternatively, should we stimulate the use of subnets (Bitcoin L3) rather than changing block dimensions on Stacks (Bitcoin L2)?
  • The Clarity WASM upgrade: how will it influence existing protocols? Does it require new contract deployments, or will existing protocols benefit automatically?
  • What can protocols do to decrease their need for block space?


6 Likes

Voting on tenure length is another possible optimization; I saw Friedger mention that. I always assumed a tenure would be 10 minutes, and I think protocols also rely on that for timing certain actions, such as the length of staking cycles on ALEX. Perhaps there are better options.

Stacks Foundation tweet about Upcoming Optimizations:
https://x.com/stacksorg/status/1851304384715788297?s=61&t=B7TLNfyjJSwJ1gK4mgPe7w

2 Likes

Another relevant thread on Github: Block budget usage in Nakamoto · Issue #5398 · stacks-network/stacks-core · GitHub

Some of the topics in it:

  • Improved mempool walking
  • Make signers enforce pacing the budget usage
  • Updating Cost Limits
  • Changing mining heuristics to pace out the block budget, without signer restrictions
1 Like

Been thinking about blockspace constraints in Stacks. Spreading TXs across the 10-min tenure seems like a workaround, but what’s actually preventing continuous processing, especially for DeFi?

Is there something fundamental about these limitations beyond the Bitcoin anchor? Or are we working with inherited design choices that could be revisited?

Curious what’s really at stake here.

The limits ensure that the blockchain remains decentralised and nodes can be operated with reasonable effort. Currently, it costs maybe $250/month to run a Stacks node plus the API. What cost would be acceptable in 2050?

With Clarity WASM, we can revisit the costs of contract calls.

With Stacks 3.0, we have all the tools to increase throughput and extend the tenure budget more often. However, I think that this is consensus-breaking even though no code is changed. Miners and signers (who are backed by stackers) need to find consensus about the maximum tenure length.

We can also build better tools to estimate fees and to visualise where a transaction is in the mempool and why.

Furthermore, smaller transactions work better than larger ones. Minting 1 or 2 Nakapack NFTs can be confirmed later in the tenure when there is still some budget left.

Improving tools for miners, as mentioned above, should also be on the list. If we can show miners the fees that could have been earned…

I don’t think tenure extensions impact any protocols, because tenures are usually not used for timing. Protocols could use the Bitcoin block height or the Stacks block height, as in Stacks 2.

6 Likes

:thread: Let’s break down the recent Stacks 3.0 upgrade and what it means for the ecosystem (1/13)

The Nakamoto release (Stacks 3.0) just went live with faster blocks, but there’s a lot more happening under the hood. Here’s what you need to know :point_down:

  1. The Speed Change
    • Blocks now arrive every 5-10 seconds (vs. previous ~10 minutes)
    • Faster block times = more frequent transaction processing
    • BUT total capacity per tenure (Bitcoin block) remains similar

  2. Think of it like a parking garage:
    • Gates open more frequently
    • But total parking spaces haven’t increased
    • Once full, you wait for next Bitcoin block
    • This creates an interesting dynamic for transaction fees

  3. Current Block Limits:
    • Runtime caps
    • Read/Write operation limits
    • Data size restrictions
    These limits keep the blockchain manageable and decentralized. Running a node costs ~$250/month - keeping this affordable is crucial.

  4. The team is exploring several optimization paths:
    • Tenure extensions when Bitcoin blocks are delayed
    • Improved miner tools for space management
    • Better fee market mechanics
    • Subnet scaling solutions (Bitcoin L3)

  5. The WASM Upgrade :fire:
    This is a game-changer coming to Stacks:
    • Makes contracts run more efficiently
    • Reduces computational costs
    • Enables more languages
    • Better developer experience

  6. Think of WASM like installing a new, efficient engine:
    • Same functionality
    • Less resource usage
    • Better performance
    • But requires some retooling for existing apps

  7. The Balancing Act:
    Teams are carefully weighing:
    • Transaction capacity vs. node costs
    • Speed vs. security
    • Growth vs. sustainability
    These aren’t easy trade-offs!

  8. What’s Next?
    • Tenure extension implementations
    • WASM integration
    • Enhanced miner tooling
    • Improved fee market dynamics

  9. For Developers:
    • Consider subnet solutions for scaling
    • Watch for WASM upgrade opportunities
    • Optimize contract efficiency
    • Plan for potential contract redeployments

  10. For Users:
    • Faster blocks are live
    • Fee market is evolving
    • More optimizations coming
    • Better tools for fee estimation ahead

  11. The Big Picture:
    Stacks is evolving while maintaining its core promise:
    • Bitcoin’s security
    • Scalable smart contracts
    • Sustainable growth
    • Decentralized accessibility

  12. Want to get involved?
    • Run a node
    • Join development discussions
    • Test new features
    • Provide feedback on Github

  13. Follow for more updates as Stacks continues to evolve! This is just the beginning of a more scalable, efficient, and developer-friendly Bitcoin L2.

End :thread:

Remember to follow and retweet if you found this helpful! #Stacks #Bitcoin #Web3 #BlockchainDev

2 Likes

There seems to be room for improvement in the current situation without changing the cost of operating nodes. I’ve shared this on the GitHub issue, but I plotted some early data on block dimension usage, and it looks like reads are by far the biggest bottleneck. The visualization is here: Stacks Space Usage / vini.btc | Observable, and the code used to gather the data is here: GitHub - vini-btc/stacks-quick-block-space-usage.
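For anyone who wants to reproduce this kind of analysis quickly, a rough Python sketch of the approach could look like the following. It assumes the Hiro extended API exposes per-block execution-cost fields (execution_cost_read_count and friends); please check the current API docs for the exact endpoint, field names, and pagination before trusting the numbers.

```python
import requests

# Rough sketch: tally how much of each budget dimension recent blocks used.
# Endpoint and field names are assumptions based on the public Hiro API;
# verify them against the current API docs before relying on the output.
API = "https://api.hiro.so/extended/v1/block?limit=30"
LIMITS = {
    "execution_cost_runtime": 5_000_000_000,
    "execution_cost_read_count": 15_000,
    "execution_cost_read_length": 100_000_000,
    "execution_cost_write_count": 15_000,
    "execution_cost_write_length": 15_000_000,
}

blocks = requests.get(API, timeout=30).json()["results"]
for dim, limit in LIMITS.items():
    usage = [b.get(dim, 0) / limit for b in blocks]
    print(f"{dim:35s} avg {sum(usage) / len(usage):6.1%}  max {max(usage):6.1%}")

# Note: in Nakamoto the budget applies per tenure, so for an exact picture the
# per-block numbers should be summed per tenure (e.g. grouped by burn block).
```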

I also found an issue in which Aaron hints that the current budget for reads could be too pessimistic if the benchmarks are correct: Chainstate DB performance · stacks-network/stacks-core · Discussion #3777 · GitHub. If this is true, and I’m not missing anything, then we should be able to increase the read budget without compromising on the hardware requirements. This would not only help get more transactions in sooner but would also further incentivise mining (more fees without significantly increasing running costs).

But I’m mostly speculating. Curious to hear from core as soon as possible.

2 Likes

Why is keeping it affordable to run a node critical? Solana nodes are not cheap to run and they seem to be doing fine.

2 Likes

Bitcoin block production time is uneven, not a constant 10 minutes: sometimes it is an hour per block, sometimes a second. But the total tenure block size of Stacks is limited to 2 MB, so to capture the maximum transaction fees, the miner who wins a tenure tends to fill the first several blocks with transactions to protect against a rapid next Bitcoin block. That is how I understand congestion to happen.

So the simplest strategy to solve this issue would be to increase the total tenure block size to something large, such as 1 GB, and to set a small size limit (200-500 KB) on the fast Stacks micro blocks. This could address the throughput issue and the uneven Stacks block size issue at the same time. I think it may work.

2 Likes

I am very much in favor of suggestions for a bigger tenure block size, because right now, even after the Nakamoto upgrade, the TPS of Stacks (86818: 86 tx, 86819: 81 tx) is way lower than the TPS of Bitcoin (868618: 5991 tx, 868619: 4878 tx).
How can we persuade people that the Stacks network is an excellent and qualified BTC L2?

Hi @friedger, tenure extensions will impact any protocol using block-height prior to Stacks 3.0, because it now equates to tenure-height under Stacks 3.0. For example, the ALEX staking/farming cycle will be affected.

My understanding is that protocols like ALEX are not affected. A tenure extension is only about what can happen between two Bitcoin blocks.

1 Like

Right, a tenure extension will NOT increment the tenure height. Only a tenure change will do that, and that will always be paced by Bitcoin blocks.

3 Likes

I’d like to push back on the $250/month number that is being mentioned here. Even that number sounds very high to me from a decentralization standpoint. If you are just running a node for yourself, you can still do so on an old computer or a Raspberry Pi at home and have $0 in monthly fees.

I’m guessing the $250/month is coming from running a scalable service that relies on your node?

3 Likes

I agree, $250/month is very, very expensive for a lot of people, including some prominent and long-time supporters.

People should be able to run nodes on computers they currently own, with the ISPs they currently use. Having signers decide tenure budgets through tenure-extend transactions on-the-fly gets us out of a block-size war. The median tenure’s compute budget can grow as the median user’s computers and ISPs get better.

2 Likes

There seems to be a lot of discussion here but without focusing on the key area.

Increasing network usage.

Nakamoto hasn’t brought increased network adoption. Why?

I’d argue it can be summarized by:

  • Expensive transactions
  • Lack of common tools and entry rails
  • Speed

Stacks is significantly more expensive than other networks. Add to that the fact that you can’t use MetaMask or other common bridges/stablecoins to get into Stacks, and there’s a major barrier to entry. Couple that with a lack of speed, and users aren’t coming into Stacks.

So the discussion about running a Stacks node for $250 seems to be missing the point. If those users are so important, create a light client for them.

The two biggest quality of life improvements are:

  • Increase the Read Count limit. This is the biggest contributor to blocks filling up, as seen in the chart below. This is kinda crazy: you’re preventing blocks from accepting more transactions because of reads.
  • Build a Metamask Snap

[image: chart of block dimension usage, with read count as the main contributor]

If you really want to do this correctly, then the community should align on static cost analysis: Static cost analysis · Issue #5360 · stacks-network/stacks-core · GitHub

How trivial is it to increase read counts? If it’s just something to be voted on, I’d argue we do it immediately and then get a working group going on a MetaMask Snap so we can get all the EVM users onboard.

1 Like

A lot of the discussion has shifted to this issue on GitHub instead. I will do my best to write an abstract.

It addresses several more avenues for improving the current situation.

Shortlist of solutions

Updating Cost Functions and Hardfork
Muneeb-ali and jcnelson suggest updating the cost functions, which would require a hardfork. They propose bundling this change with an upcoming emissions-related hardfork to minimize disruptions.
Increased Frequency for Tenure Extensions
Muneeb-ali and obycode propose allowing more frequent tenure extensions to alleviate budget constraints. This would let nodes reset budgets faster and use resources more flexibly.
Flexible Budget Resets
Owenstrevor discusses the idea of resetting compute budgets more flexibly. However, the flexibility should be limited to prevent abuse, with resets tied to specific conditions or intervals.
Gradual Budget Decay Mechanism
Jude Nelson suggests a decay mechanism for unused block budgets, allowing them to decrease gradually over time. This would avoid abrupt resets and prevent hoarding of resources.
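To give a feel for the decay idea (purely illustrative; the issue does not specify a formula, and the decay factor below is a placeholder), the unused part of a budget could be carried forward but shrink a little with every Stacks block instead of being reset abruptly:

```python
# Purely illustrative: one possible shape for a gradual budget decay, not the
# mechanism specified in the issue. Unused budget carried across blocks shrinks
# by a fixed factor rather than being reset or hoarded for a late burst.
DECAY_PER_BLOCK = 0.9  # placeholder: keep 90% of the carried-over budget each block

def carried_budget(unused: float, blocks_elapsed: int) -> float:
    """Unused budget still available after `blocks_elapsed` Stacks blocks."""
    return unused * (DECAY_PER_BLOCK ** blocks_elapsed)

# Example: 6,000 unused read-count units decay to ~2,092 after 10 blocks,
# so saving budget for a late burst becomes progressively less attractive.
print(round(carried_budget(6_000, 10)))
```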

Key takeaways
Priority Alignment
There is consensus among key contributors (Muneeb Ali, Jude Nelson, Brice) to update cost functions and align these changes with the planned hardfork. This approach reduces the need for multiple disruptions.
Incremental Improvements
Softer solutions like increasing tenure extensions and implementing a decay mechanism can be done incrementally. These changes are seen as less risky and can be adjusted over time.
Balancing Flexibility and Stability
Flexible budget resets have potential benefits but must be controlled to prevent abuse. Owenstrevor emphasizes the need for limits to ensure stability.

And I thought this post was very insightful about how Nakamoto differs from Stacks 2.x and other blockchains because it decouples the notion of resource consumption from the notion of blocks.
https://github.com/stacks-network/stacks-core/issues/5398#issuecomment-2463725827

That means signers could even enforce transaction expiration rules. I will explain why that is useful in my next post.

Fee Estimation and Underutilized Fees

Improving the fee estimator can significantly help in getting important transactions included in blocks more efficiently. Currently, even when there is no congestion, the fees suggested by the API remain unnecessarily high—a point some users have noticed. Substantially lowering the fee—by 10x or even 100x—may still get your transactions included within Nakamoto times of 5-30 seconds, especially for small transactions like transferring STX or a BNSv2 name.
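As a quick way to sanity-check this yourself, here is a rough sketch that compares the node’s suggested transfer fee rate with what is actually sitting in the mempool. The endpoint paths and field names are assumptions based on the public Hiro / stacks-node APIs, so verify them against the docs before relying on this.

```python
import requests

# Rough sketch: compare the node's suggested STX-transfer fee rate with the
# fees of transactions currently in the mempool. Endpoints and fields are
# assumptions based on the public Hiro / stacks-node APIs; check the docs.
BASE = "https://api.hiro.so"

# /v2/fees/transfer is assumed to return a single integer fee rate (microSTX per byte).
suggested_rate = int(requests.get(f"{BASE}/v2/fees/transfer", timeout=30).json())

mempool = requests.get(
    f"{BASE}/extended/v1/tx/mempool", params={"limit": 50}, timeout=30
).json()["results"]
fees = sorted(int(tx["fee_rate"]) for tx in mempool)  # per-tx fee in microSTX

print(f"suggested transfer fee rate: {suggested_rate} microSTX/byte")
if fees:
    print(f"mempool fees (microSTX): min={fees[0]}, median={fees[len(fees) // 2]}")

# If the mempool is nearly empty, a fee 10-100x below the suggested one may
# still confirm within Nakamoto times, as described above.
```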

How Transaction Expiration Time Can Help Battle Congestion

The current default expiration time for all transactions is 256 tenures (Bitcoin blocks), which is roughly two days (256 × ~10 minutes ≈ 43 hours). This is how long a transaction remains in the mempool after being broadcast.

If I want to send a transaction as cheaply as possible, this long expiration time is helpful. I can send it with the lowest acceptable fee and hope that during a period of less congestion within those two days, it will be included in a block.

Transactions Critical to Be Included Within 5 Seconds to 1 Minute

However, when my transaction is time-sensitive—for example, a swap aiming to exploit a temporary imbalance between two pools (an arbitrage opportunity)—I have no interest in the transaction if it’s not picked up within 10 seconds or, at most, one tenure. After that, another trader or bot will likely have seized the opportunity. Any fees paid after this point are wasted because the transaction will fail due to price changes.

To ensure that 5-second confirmation times are possible, miners need to reserve some space for urgent transactions. Having transactions that expire quickly incentivizes miners to act promptly. If miners know they can’t mine a transaction after a short expiration time (like 10 minutes or even 30 seconds), they are more inclined to include high-fee transactions quickly, rather than postponing them over the next two days when they may fail due to price fluctuations.
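A minimal sketch of that incentive, assuming a hypothetical per-transaction expires_at field (which does not exist in the current wire format; it is exactly the proposal being discussed), might look like this: expired transactions are skipped outright, so short deadlines push miners to take high-fee, time-sensitive transactions right away instead of deferring them.

```python
from dataclasses import dataclass

# Minimal sketch of fee-ordered selection with a hypothetical per-tx expiry.
# The `expires_at` field does not exist in the current Stacks transaction
# format; it is the idea being proposed in this thread.
@dataclass
class MempoolTx:
    txid: str
    fee: int           # microSTX
    expires_at: float  # unix timestamp after which the tx is worthless

def select_for_block(mempool: list[MempoolTx], now: float, max_txs: int) -> list[MempoolTx]:
    """Take the highest-fee, still-valid transactions; skip the expired ones."""
    live = [tx for tx in mempool if tx.expires_at > now]
    return sorted(live, key=lambda tx: tx.fee, reverse=True)[:max_txs]

# Example: the arbitrage swap with a 10-second deadline gets picked immediately,
# while an already-expired transaction is never mined (and never pays a wasted fee).
now = 1_000_000.0
mempool = [
    MempoolTx("swap-arb", fee=50_000, expires_at=now + 10),
    MempoolTx("stale-swap", fee=40_000, expires_at=now - 5),
    MempoolTx("cheap-transfer", fee=200, expires_at=now + 172_800),
]
print([tx.txid for tx in select_for_block(mempool, now, max_txs=2)])
# ['swap-arb', 'cheap-transfer']
```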

Additionally, if nodes can drop transactions faster, there will be less clutter in the mempool, improving miner performance and fee estimations. While long expiration times have their place, the one-size-fits-all approach of 256 blocks may not be optimal for Stacks.

Nakamoto and expiration times

A couple of comments from Friedger (Nov 7th), whom I briefly spoke to about expiration times for transactions:

  1. The expiration time for a transaction could be part of the mempool logic, now that we have timestamps on Stacks blocks.
  2. Could we use attachments for that?
    Could it be done without a hard fork?
  3. With signers, everything is possible.
  4. Signers can reject blocks that contain a tx that was marked as expired.

Without requiring a hard fork, nodes would still retain transactions that are expired from the signer’s perspective (e.g., after 30 seconds) but not yet expired according to the node’s default of two days. While this isn’t ideal, the ease of experimenting without a hard fork could lead to valuable short-term improvements.

Enforcing a 5-second expiration time based on Stacks block timestamps may not be practical due to clock precision limitations. More feasible minimal expiration times might range from 30 seconds to 10 minutes, or align with a single tenure.
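To make the signer-enforced variant concrete, here is a sketch under the same assumption of a hypothetical expires_at marker the signer can see for each transaction: a signer simply refuses to sign any proposed block whose timestamp is already past a contained transaction’s deadline.

```python
# Sketch of a signer-side rule, assuming a hypothetical `expires_at` marker per
# transaction that the signer can see. Nothing like this is enforced today.
def signer_accepts_block(block_timestamp: int, tx_expirations: list[int | None]) -> bool:
    """Reject the block if it contains any transaction already past its deadline."""
    return all(exp is None or block_timestamp <= exp for exp in tx_expirations)

# A block timestamped at t=1200 that includes a tx which expired at t=1180 is
# rejected; transactions without a deadline (None) are unaffected.
print(signer_accepts_block(1200, [None, 1300, 1180]))  # False
print(signer_accepts_block(1200, [None, 1300]))        # True
```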

I like the idea of self-expiring transactions. I’m not sure about implementing it entirely as a signer-enforced rule, though. We would need some change to the transaction structure in order to add this timeout, so changes would need to be made across a variety of places, and it seems to me that this would likely require a hard fork.

Would there be some opportunity for DoS attacks if this was implemented? For example, currently, an account can submit at most 25 transactions to the chain before they start to get rejected from the mempool for nonce-chaining. Any of those 25 transactions can be replaced with a new transaction, but that requires increasing the fee, so there is an ever-increasing cost to the attacker. If the attacker could instead send transactions with low fees and a low expiration time, then they could get many more transactions accepted into the mempool without ever paying more in fees.

1 Like

The two options that users have now that are alternatives to this expiring transaction idea are:

  1. Add a block height or block time check in the contract to exit if it is too late
  2. RBF the transaction with another transaction when you no longer want it to execute

Option 1 requires you to still pay the fee for the transaction, since it was executed. Option 2 requires you to increase your fee.

2 Likes