Stacks block and tenure dimensions: Expectations and discussion

The limits ensure that the blockchain remains decentralised and that nodes can be operated with reasonable effort. Currently, it costs maybe $250/month to run a Stacks node plus the API. What cost would be acceptable in 2050?

With Clarity WASM, we can revisit the costs of contract calls.

With Stacks 3.0, we have all the tools to increase throughput and extend the tenure budget more often. However, I think this is consensus-breaking even though no code is changed. Miners and signers (who are backed by Stackers) need to find consensus about the maximum tenure length.

We can also build better tools to estimate fees and to visualise where a transaction sits in the mempool and why.

Furthermore, smaller transactions work better than larger ones. Minting one or two Nakapack NFTs can still be confirmed later in the tenure when there is some budget left.

Improving tools for miners, as mentioned above, should also be on the list. If we can show miners the fees that could have been earned…

I don’t think tenure extensions impact any protocols, because tenures are usually not used for timing. Protocols can use the Bitcoin block height or the Stacks block height, as in Stacks 2.

7 Likes

:thread: Let’s break down the recent Stacks 3.0 upgrade and what it means for the ecosystem (1/13)

The Nakamoto release (Stacks 3.0) just went live with faster blocks, but there’s a lot more happening under the hood. Here’s what you need to know :point_down:

  1. The Speed Change
    • Blocks now arrive every 5-10 seconds (vs. previous ~10 minutes)
    • Faster block times = more frequent transaction processing
    • BUT total capacity per tenure (Bitcoin block) remains similar

  2. Think of it like a parking garage:
    • Gates open more frequently
    • But total parking spaces haven’t increased
    • Once full, you wait for next Bitcoin block
    • This creates an interesting dynamic for transaction fees

  3. Current Block Limits:
    • Runtime caps
    • Read/Write operation limits
    • Data size restrictions
    These limits keep the blockchain manageable and decentralized. Running a node costs ~$250/month - keeping this affordable is crucial.

  4. The team is exploring several optimization paths:
    • Tenure extensions when Bitcoin blocks are delayed
    • Improved miner tools for space management
    • Better fee market mechanics
    • Subnet scaling solutions (Bitcoin L3)

  5. The WASM Upgrade :fire:
    This is a game-changer coming to Stacks:
    • Makes contracts run more efficiently
    • Reduces computational costs
    • Enables more languages
    • Better developer experience

  6. Think of WASM like installing a new, efficient engine:
    • Same functionality
    • Less resource usage
    • Better performance
    • But requires some retooling for existing apps

  7. The Balancing Act:
    Teams are carefully weighing:
    • Transaction capacity vs. node costs
    • Speed vs. security
    • Growth vs. sustainability
    These aren’t easy trade-offs!

  8. What’s Next?
    • Tenure extension implementations
    • WASM integration
    • Enhanced miner tooling
    • Improved fee market dynamics

  9. For Developers:
    • Consider subnet solutions for scaling
    • Watch for WASM upgrade opportunities
    • Optimize contract efficiency
    • Plan for potential contract redeployments

  10. For Users:
    • Faster blocks are live
    • Fee market is evolving
    • More optimizations coming
    • Better tools for fee estimation ahead

  11. The Big Picture:
    Stacks is evolving while maintaining its core promise:
    • Bitcoin’s security
    • Scalable smart contracts
    • Sustainable growth
    • Decentralized accessibility

  12. Want to get involved?
    • Run a node
    • Join development discussions
    • Test new features
    • Provide feedback on Github

  13. Follow for more updates as Stacks continues to evolve! This is just the beginning of a more scalable, efficient, and developer-friendly Bitcoin L2.

End :thread:

Remember to follow and retweet if you found this helpful! #Stacks #Bitcoin #Web3 #BlockchainDev

3 Likes

There seems to be room for improvement in the current situation without changing the cost of operating nodes. I’ve shared on the GitHub issue, but I plotted some early data on block dimensions usage, and it looks like reads are by far the biggest bottleneck. The visualization is here: Stacks Space Usage / vini.btc | Observable, and the code used to gather data is here: GitHub - vini-btc/stacks-quick-block-space-usage.

I also found this issue in which Aaron hints that the current budget for reads could be too pessimistic if the benchmarks are correct: Chainstate DB performance · stacks-network/stacks-core · Discussion #3777 · GitHub. Suppose this is true, and I’m not missing anything. In that case, we should be able to increase the read budget without compromising on hardware requirements. This would help not only get more transactions in sooner but also further incentivise mining (more fee revenue without significantly increasing running costs).

But I’m mostly speculating. Curious to hear from the core team as soon as possible.

2 Likes

Why is keeping it affordable to run a node critical? Solana nodes are not cheap to run and they seem to be doing fine.

2 Likes

Bitcoin block production time is uneven, not a constant 10 minutes: sometimes it’s an hour per block, sometimes a second. But the total tenure block size of Stacks is limited to 2 MB, so to capture the maximum transaction fees, the miner who wins a tenure tends to fill the first several blocks with transactions, to guard against a rapid next Bitcoin block. That is how I understand congestion happening.

So the simplest strategy to solve this issue is to increase the total tenure block size to something big, such as 1 GB, and set a small size limit (200-500 KB) on the fast Stacks blocks. This could address the throughput issue and the uneven Stacks block size issue at the same time. I think it may work.
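The front-loading dynamic described above can be sketched in a few lines of Python. The 2 MB tenure budget and the roughly 500 KB per-block cap are the numbers from this post; everything else is illustrative, not the actual miner implementation.

```python
# Toy model of tenure budget consumption. The 2 MB tenure-wide limit and
# the optional per-block cap come from the post above; this is a sketch,
# not real miner logic.
TENURE_BUDGET_BYTES = 2 * 1024 * 1024  # assumed 2 MB total per tenure

def fill_tenure(demand_bytes, per_block_cap=None):
    """Return the bytes packed into each successive fast block until either
    the pending demand or the tenure budget is exhausted."""
    remaining = TENURE_BUDGET_BYTES
    blocks = []
    while demand_bytes > 0 and remaining > 0:
        cap = remaining if per_block_cap is None else min(per_block_cap, remaining)
        used = min(demand_bytes, cap)
        blocks.append(used)
        demand_bytes -= used
        remaining -= used
    return blocks

demand = 3 * 1024 * 1024  # 3 MB of pending transactions vs a 2 MB budget

# Without a per-block cap, a miner racing the next Bitcoin block stuffs
# everything into the very first fast block:
print(fill_tenure(demand))

# With a ~500 KB per-block cap as proposed above, the same budget is
# spread across the tenure:
print(fill_tenure(demand, per_block_cap=512 * 1024))
```

Under these assumptions the uncapped case emits one maximal block and then nothing, which is exactly the congestion pattern described above.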

2 Likes

I am very much in favor of the suggestion of a bigger tenure block size, because right now, even after the Nakamoto upgrade, the TPS of Stacks (blocks 86818/86 tx, 86819/81 tx) is far lower than the TPS of Bitcoin (blocks 868618/5,991 tx, 868619/4,878 tx).
How can we persuade people that the Stacks network is an excellent and qualified BTC L2?

Hi @friedger, tenure extensions will impact any protocol that used block-height prior to Stacks 3.0, because it now equates to tenure-height under Stacks 3.0. For example, the ALEX staking/farming cycle will be affected.

My understanding is that protocols like ALEX are not affected. A tenure extension only concerns what can happen between two Bitcoin blocks.

2 Likes

Right, a tenure extension will NOT increment the tenure height. Only a tenure change will do that, and that will always be paced by Bitcoin blocks.

3 Likes

I’d like to push back on the $250/month number that is being mentioned here. Even that number sounds very high to me for decentralization concerns. If you are just running a node for yourself, you can still do so on an old computer or a Raspberry Pi at home and have $0 monthly fees.

I’m guessing the $250/month is coming from running a scalable service that relies on your node?

3 Likes

I agree, $250/month is very, very expensive for a lot of people, including some prominent and long-time supporters.

People should be able to run nodes on computers they currently own, with the ISPs they currently use. Having signers decide tenure budgets through tenure-extend transactions on-the-fly gets us out of a block-size war. The median tenure’s compute budget can grow as the median user’s computers and ISPs get better.

3 Likes

There seems to be a lot of discussion here but without focusing on the key area.

Increasing network usage.

Nakamoto hasn’t brought increased network adoption. Why?

I’d argue it can be summarized by:

  • Expensive transactions
  • Lack of common tools and entry rails
  • Speed

Stacks is significantly more expensive than other networks. On top of that, you can’t use MetaMask or other common bridges/stablecoins to get into Stacks, so there’s a major barrier to entry. Couple that with a lack of speed, and users aren’t coming to Stacks.

So the discussion about running a Stacks node for $250 seems to be missing the point. If those users are so important, create a light client for them.

The two biggest quality of life improvements are:

  • Increase the read count limit. This is the biggest contributor to blocks filling up, as seen in the chart below. It’s kind of crazy that reads are what prevent blocks from accepting more transactions.
  • Build a MetaMask Snap

[chart: block space usage by dimension]
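The read-count bottleneck works because block limits are multi-dimensional: a block is full as soon as any one dimension hits its cap. A minimal sketch of that idea, with made-up cap values (not the real Stacks limits):

```python
# Illustrative sketch of multi-dimensional block limits: a block is full as
# soon as ANY dimension hits its cap, which is why reads can be the
# bottleneck even when the byte and runtime budgets have plenty of room.
# These cap values are invented for illustration, not the real Stacks limits.
CAPS = {"runtime": 5_000_000_000, "read_count": 15_000, "size_bytes": 2_000_000}

def binding_dimension(tx_cost, caps=CAPS):
    """Which dimension exhausts first if we pack identical transactions?"""
    return min(caps, key=lambda d: caps[d] / tx_cost[d])

# A hypothetical contract call that touches many data map entries:
tx = {"runtime": 2_000_000, "read_count": 30, "size_bytes": 400}
print(binding_dimension(tx))  # read_count fills first under these numbers
```

With these (assumed) numbers the budget allows 2,500 such transactions by runtime and 5,000 by size, but only 500 by read count, so raising the read limit alone would let more transactions in.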

If we really want to do this correctly, the community should align on the static cost analysis: Static cost analysis · Issue #5360 · stacks-network/stacks-core · GitHub

How trivial is it to increase read counts? If it’s just something to be voted on, I’d argue we do it immediately, and then get a working group going on a MetaMask Snap so we can onboard all the EVM users.

2 Likes

A lot of the discussion has shifted to this issue on GitHub instead. I will do my best to write an abstract.

It addresses more avenues for improving the current situation.

Shortlist of solutions

Updating Cost Functions and a Hardfork
Muneeb Ali and Jude Nelson suggest updating the cost functions, which would require a hardfork. They propose bundling this change with an upcoming emissions-related hardfork to minimize disruption.
Increased Frequency for Tenure Extensions
Muneeb Ali and obycode propose allowing more frequent tenure extensions to alleviate budget constraints. This would let nodes reset budgets faster and use resources more flexibly.
Flexible Budget Resets
Owenstrevor discusses the idea of resetting compute budgets more flexibly. However, the flexibility should be limited to prevent abuse, with resets tied to specific conditions or intervals.
Gradual Budget Decay Mechanism
Jude Nelson suggests a decay mechanism for unused block budgets, allowing them to decrease gradually over time. This would avoid abrupt resets and prevent hoarding of resources.

Key takeaways
Priority Alignment
There is consensus among key contributors (Muneeb Ali, Jude Nelson, Brice) to update cost functions and align these changes with the planned hardfork. This approach reduces the need for multiple disruptions.
Incremental Improvements
Softer solutions like increasing tenure extensions and implementing a decay mechanism can be done incrementally. These changes are seen as less risky and can be adjusted over time.
Balancing Flexibility and Stability
Flexible budget resets have potential benefits but must be controlled to prevent abuse. Owenstrevor emphasizes the need for limits to ensure stability.

And I thought this post was very insightful about how Nakamoto differs from Stacks 2.x and other blockchains because it decouples the notion of resource consumption from the notion of blocks.
https://github.com/stacks-network/stacks-core/issues/5398#issuecomment-2463725827

That means signers could even enforce transaction expiration rules. I will explain why that is useful in my next post.

Fee Estimation and Underutilized Fees

Improving the fee estimator can significantly help in getting important transactions included in blocks more efficiently. Currently, even when there is no congestion, the fees suggested by the API remain unnecessarily high—a point some users have noticed. Substantially lowering the fee—by 10x or even 100x—may still get your transactions included within Nakamoto times of 5-30 seconds, especially for small transactions like transferring STX or a BNSv2 name.
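For illustration, here is one generic way a fee estimator can avoid suggesting unnecessarily high fees: derive suggestions from percentiles of what recently confirmed transactions actually paid. This is a sketch of the general technique, not the actual Stacks API estimator, and the 180 µSTX figure is just an assumed minimum fee.

```python
# Percentile-based fee suggestion sketch. Generic technique only; not the
# real Stacks API logic. Fee values are assumed, in micro-STX.

def suggest_fees(recent_fees_ustx):
    """Return (low, medium, high) fee suggestions from the 25th/50th/90th
    percentiles of recently confirmed fees."""
    fees = sorted(recent_fees_ustx)
    def pct(p):
        # nearest-rank percentile
        idx = min(len(fees) - 1, int(p / 100 * len(fees)))
        return fees[idx]
    return pct(25), pct(50), pct(90)

# An uncongested period where most transactions paid the assumed minimum:
low, med, high = suggest_fees([180] * 90 + [3000] * 10)
print(low, med, high)
```

Under these assumptions the low and medium suggestions fall to the minimum fee when there is no congestion, matching the observation above that fees 10x-100x lower can still confirm quickly.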

How Transaction Expiration Time Can Help Battle Congestion

The current default expiration time for all transactions is 256 tenures (Bitcoin blocks), which is about two days. This duration is how long a transaction remains in the mempool after being broadcast.

If I want to send a transaction as cheaply as possible, this long expiration time is helpful. I can send it with the lowest acceptable fee and hope that during a period of less congestion within those two days, it will be included in a block.

Transactions Critical to Be Included Within 5 Seconds to 1 Minute

However, when my transaction is time-sensitive—for example, a swap aiming to exploit a temporary imbalance between two pools (an arbitrage opportunity)—I have no interest in the transaction if it’s not picked up within 10 seconds or, at most, one tenure. After that, another trader or bot will likely have seized the opportunity. Any fees paid after this point are wasted because the transaction will fail due to price changes.

To ensure that 5-second confirmation times are possible, miners need to reserve some space for urgent transactions. Having transactions that expire quickly incentivizes miners to act promptly. If miners know they can’t mine a transaction after a short expiration time (like 10 minutes or even 30 seconds), they are more inclined to include high-fee transactions quickly, rather than postponing them over the next two days when they may fail due to price fluctuations.

Additionally, if nodes can drop transactions faster, there will be less clutter in the mempool, improving miner performance and fee estimations. While long expiration times have their place, the one-size-fits-all approach of 256 blocks may not be optimal for Stacks.
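As a sketch, per-transaction expiration could look like the following. The `expires_after` field is hypothetical and does not exist in the current transaction format; the 256-tenure default is the one mentioned above.

```python
# Sketch of per-transaction expiration as discussed above. `expires_after`
# is a hypothetical per-tx field, not an existing protocol feature.
DEFAULT_EXPIRY_TENURES = 256  # ~2 days at ~10 minutes per Bitcoin block

def is_expired(broadcast_tenure, current_tenure, expires_after=None):
    """A mempool would drop the transaction once this returns True."""
    window = DEFAULT_EXPIRY_TENURES if expires_after is None else expires_after
    return current_tenure - broadcast_tenure >= window

# Low-fee transfer: happy to wait out the full default window.
print(is_expired(broadcast_tenure=1000, current_tenure=1100))  # False
# Arbitrage swap marked to expire after a single tenure:
print(is_expired(1000, 1001, expires_after=1))                 # True
```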

Nakamoto and expiration times

A couple of comments from Friedger (Nov 7th), whom I briefly spoke to about expiration times for transactions:

  1. The expiration time for a transaction could be part of the mempool, now that we have timestamps on Stacks blocks.
  2. Could we use attachments for that? Could it be done without a hard fork?
  3. With signers, everything is possible.
  4. Signers can reject blocks that contain a tx that was marked as expired.

Without requiring a hard fork, nodes would still retain transactions that are expired from the signer’s perspective (e.g., after 30 seconds) but not yet expired according to the node’s default of two days. While this isn’t ideal, the ease of experimenting without a hard fork could lead to valuable short-term improvements.

Enforcing a 5-second expiration time based on Stacks block timestamps may not be practical due to clock precision limitations. More feasible minimal expiration times might range from 30 seconds to 10 minutes, or align with a single tenure.

I like the idea of self-expiring transactions. I’m not sure about the idea of implementing it completely as a signer-enforced rule though. We would need some change in the transaction structure in order to add this timeout, so changes would definitely need to be made across a variety of places and it seems to me that it would likely require a hard fork due to this change.

Would there be some opportunity for DoS attacks if this was implemented? For example, currently, an account can submit at most 25 transactions to the chain before they start to get rejected from the mempool for nonce-chaining. Any of those 25 transactions can be replaced with a new transaction, but that requires increasing the fee, so there is an ever-increasing cost to the attacker. If the attacker could instead send transactions with low fees and a low expiration time, then they could get many more transactions accepted into the mempool without ever paying more in fees.
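The cost asymmetry behind this DoS concern can be illustrated with a toy calculation. The minimum fee and the 25% bump per replacement are invented assumptions; the only thing taken from the post is that each RBF replacement must pay a higher fee than the one it replaces.

```python
# Toy comparison of attacker costs. MIN_FEE and the 25% bump per replacement
# are illustrative assumptions, not protocol rules.
MIN_FEE = 180          # assumed minimum mempool fee (micro-STX)
RBF_BUMP = 0.25        # assumed required fee increase per replacement

def rbf_committed_fee(replacements):
    """Fee the attacker must attach to the latest RBF replacement:
    it grows geometrically with each round."""
    fee = MIN_FEE
    for _ in range(replacements):
        fee *= 1 + RBF_BUMP
    return fee

def expiring_tx_fee(replacements):
    """With short self-expiry, each follow-up tx after expiry can pay the
    minimum fee again, so the per-round cost never escalates."""
    return MIN_FEE

print(round(rbf_committed_fee(10)), expiring_tx_fee(10))
```

The point of the comparison: under RBF the attacker's committed fee keeps climbing, while with cheap self-expiring transactions the per-round cost stays flat, which is exactly the abuse surface raised above.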

1 Like

The two options that users have now that are alternatives to this expiring transaction idea are:

  1. Add a block height or block time check in the contract to exit if it is too late
  2. RBF the transaction with another transaction when you no longer want it to execute

Option 1 still requires you to pay the fee for the transaction, since it was executed. Option 2 requires you to increase your fee.

2 Likes
Option 1 would also require you to have a use for a second transaction within the same time window.
Option 2 doesn’t contribute to incentivizing miners to get specific transactions into blocks quickly. When you RBF (replace-by-fee) a transaction, the default 256-tenure expiration “resets” because it is a new transaction, so you could argue it does the opposite (you give the miner even more time to process the transaction).

My points on why $250/month nodes (or higher-spec hardware), shorter tenure-extend times, and bigger tenure budgets are reasonable, essential, and urgent:

1. The current Stacks node requirement is too low, which is why the effective TPS of Stacks (measured by daily transactions, including transfers and function calls) is so small:

Stacks signer nodes can run on an old computer with 256 MB of RAM, while the ETH validator requirement is at least 32 GB of RAM and 100 Mbps of bandwidth, and the Solana validator requirement is 512 GB of RAM and 1-10 Gbps of bandwidth, which costs more than $3,000/month.

Thus Stacks TPS < 0.5, while ETH TPS is 12-15 and Solana TPS is 1,200-2,000.
Solana has more than 100M transactions per day and ETH has 1M per day, while Stacks has fewer than 10k.

https://x.com/bitrabbit_btc/status/1884523194897625152

2. Because of PoX, Stacks can never be as decentralised as ETH, or even as Solana: Stacks has 5 miners and 43 signers, while Solana has 1,400 validators and ETH has more than 12,000 physical validators.

Since Stacks inherits Bitcoin’s finality and security, in the trilemma we have high security but sacrifice some decentralisation; thus we ought to improve scalability and performance as much as possible.

3. In 2050, after six more Bitcoin halvings, the Bitcoin block reward will have declined to about 0.049 BTC. To maintain the current proportional security, Bitcoin transaction fees would have to exceed 3 BTC/block or more.

We estimate the BTC price in 2050 at $1M (a fairly conservative assumption), which means a Stacks miner’s single PoX transaction fee on each Bitcoin block will exceed $500. That $500 fee doesn’t include the BTC amount bid to Stacks signers, so every single Stacks miner will pay more than 500 × 144 × 30 = $2.16M in fees every month in 2050.

By contrast, $250/month for hardware is negligible.
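The arithmetic in point 3 can be checked in a few lines. Every constant here is this post's assumption (six more halvings from today's 3.125 BTC subsidy, a $1M BTC price, fees replacing the lost subsidy, roughly 6,000 transactions per Bitcoin block), not a protocol parameter:

```python
# Checking the figures above. All inputs are assumptions from the post.
current_subsidy = 3.125                       # BTC/block after the 2024 halving
subsidy_2050 = current_subsidy / 2**6         # six more halvings -> ~0.049 BTC
fees_needed = current_subsidy - subsidy_2050  # ~3 BTC/block keeps security spend level

btc_price = 1_000_000    # assumed USD per BTC in 2050
txs_per_block = 6_000    # rough Bitcoin block capacity

fee_per_tx = fees_needed / txs_per_block * btc_price  # cost of one on-chain tx
monthly_cost = fee_per_tx * 144 * 30  # one PoX block-commit per Bitcoin block

print(round(subsidy_2050, 3), round(fee_per_tx), round(monthly_cost))
```

With these inputs the per-commit fee comes out slightly above $500 and the monthly total slightly above the quoted $2.16M, so the post's figures hold as a lower bound.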

4. The limited network capacity has caused many negative effects: all the applications on Stacks are dragged down by low network capacity; Stacks cannot attract new users who are used to high-speed chains like Solana and Sui, and old users keep leaving. Without high throughput, DeFi activity based on sBTC cannot thrive. Without high network capacity, every good trend, like the memecoin wave and even BTCfi on sBTC, has been broken off halfway. Inactive on-chain activity and a low market cap have also resulted in the emission plan changing and the sBTC supply-increase plan being carried forward. It’s a backward flywheel.

5. Once we achieve high throughput by increasing the tenure budget and hardware requirements, we can enter a forward flywheel cycle:

• performance improves →
• on-chain activity blooms and more active users come over from Solana and ETH →
• STX market cap rises →
• the number of Stacks miners and signers increases →
• sBTC supply increases →
• BTCfi on Stacks based on sBTC thrives →
• STX market cap rises → next cycle

6. There are multiple teams exploring different Bitcoin L2 solutions; some new L2s may copy PoX and sBTC but implement the same expensive-hardware, high-block-reward strategy as Solana to reach high throughput. This is a potential threat to Stacks in the future. If that happens, Stacks may lose its leading position.

ETH’s failure has warned us that user experience should always be the first priority. For most chains, decentralisation is not as important as fast speed and cheap fees, because smart contracts have proven to function as literal casinos; besides, Bitcoin is the real money and currency, and all other chains are casinos.

1 Like

About TPS, we still don’t really know the true capacity of Nakamoto with tenure extensions… the numbers only say that there is currently very low activity… I’m sure with these settings we can reach a higher TPS, but first we need users to interact with the chain…
About the hardware requirements, I agree we could increase them a bit. Right now you can run a node on a Raspberry Pi, and my Pi has better specs than 256 MB of RAM… with current hardware we could easily raise the minimum to 4 GB. A basic VPS server here in Europe with 4 GB of RAM, a 120 GB disk, and a 1 Gbps connection costs less than €10/month and would be affordable for many people… 256 MB of RAM is a ’90s setting… But my final question is: how many users are running a node? If you have a startup or a company, you can definitely afford the cost of better hardware.

@jude @brice not sure if this is the right thread for this but…

Regarding TPS/speed/scaling, another possible modification is moving some common token functionality into the base blockchain rather than handling it through smart-contract calls. Reading about the so-called (marketed) “smart tokens” made me think of this and wonder: why this unusual implementation? And what are its pros and cons?

In Section 5.1, the coreum whitepaper states:

“Smart tokens are natively issued tokens on the Coreum chain that are wrapped around smart contracts. They are highly customizable and are designed to be lightweight and flexible. These tokens exist on the chain’s storage and memory, hence, interacting with them does not require calling smart contract functions.”

The key phrase here is “exist on the chain’s storage and memory”. This suggests that smart tokens are not just tokens that can be manipulated by smart contracts; rather, they are tokens that are inherently linked to the chain’s native storage and memory.

In other words, smart tokens are not just external assets controlled by a smart contract, but an integral part of the chain’s native architecture. This means that interacting with smart tokens does not require calling external smart contract functions; it can be done directly through the chain’s native APIs.

Customizable: Smart Tokens can be customized with various attributes and features, such as minting, burning, freezing, and whitelisting/blacklisting.

Smart Contract Integration: Smart Tokens can be integrated with smart contracts, allowing for more complex logic and behavior.