Chain State Pruning and `at-block` Proposed Change


Hey everyone!

I’m Alex Huth, a product lead at Stacks Labs.

We are looking for opportunities to drastically improve the performance of the chain and to make running nodes and signers more efficient and affordable, so we can be ready for massive throughput.

One topic we are investigating is chain state growth and size. Today, node and signer operators are feeling the pain of a chain state that is not only large but growing at an accelerating rate. Storage costs and hardware requirements keep climbing, and fewer people can afford to run nodes. Like many other blockchains, we have landed on a proposal to let users operate pruned nodes; however, that is not possible in Stacks today.

The big offender preventing this is the at-block function. It is a Clarity function that lets contracts evaluate read-only expressions against historical state, and it can currently look back through the entire chain history. You pass a block hash and an expression, and the expression executes as if you were at that block. It has existed since Clarity 1. As a byproduct, nodes must retain the full historical state: every block's MARF state has to be kept, because any contract could reach back to any arbitrary point. This is fundamentally incompatible with pruning.


Known Use Cases and Alternative Patterns

Here are some of the known historical reasons for wanting at-block and some proposed mitigations or alternative patterns:

Voting / Governance Snapshots

The governance contract wanted to know a user's token balance at the block the proposal was created, so it used at-block to check the balance.

Alternative: Record and maintain an explicit snapshot map. Voters prove their balance by referencing the correct checkpoint. This is essentially how OpenZeppelin's ERC20Votes works; it doesn't use historical state reads either, it uses explicit checkpoints.
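For illustration, the checkpoint pattern can be sketched in Python (the on-chain version would be a Clarity map; the class and method names here are made up):

```python
import bisect

class CheckpointedBalances:
    """Balances with explicit per-block checkpoints (ERC20Votes-style sketch)."""

    def __init__(self):
        # principal -> sorted list of (block_height, balance) checkpoints
        self.checkpoints = {}

    def write_checkpoint(self, who, block_height, new_balance):
        # Called on every transfer; append-only, so later lookups never
        # need historical chain state.
        self.checkpoints.setdefault(who, []).append((block_height, new_balance))

    def balance_at(self, who, block_height):
        # Find the latest checkpoint at or before `block_height`.
        points = self.checkpoints.get(who, [])
        i = bisect.bisect_right(points, (block_height, float("inf")))
        return points[i - 1][1] if i > 0 else 0

balances = CheckpointedBalances()
balances.write_checkpoint("alice", 100, 500)
balances.write_checkpoint("alice", 120, 750)
# A proposal created at block 110 sees the block-100 checkpoint:
assert balances.balance_at("alice", 110) == 500
```

The contract only ever reads its own map at the latest state; the "historical" answer is baked into the checkpoints as they are written.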

Staking / Reward Accrual

The contract would use at-block to check whether a user was staked at some past block for reward calculations.

Alternative: A Synthetix-style staking rewards model. You never look backwards - you always accumulate forward. Every state-changing action updates the accumulator, and the math works out identically to checking historical state at every block.
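A rough Python sketch of the forward-only accumulator (the reward rate and names are illustrative, and a real contract would use integer math rather than floats):

```python
class StakingRewards:
    """Forward-only reward accumulator (Synthetix-style sketch).

    Instead of asking "was this user staked at block X?", every
    state-changing action first folds elapsed rewards into a global
    reward-per-token accumulator, then settles the acting user.
    """

    def __init__(self, reward_per_block):
        self.reward_per_block = reward_per_block
        self.reward_per_token = 0.0   # global accumulator
        self.last_update_block = 0
        self.total_staked = 0
        self.staked = {}              # user -> amount staked
        self.paid = {}                # user -> accumulator value already credited
        self.earned = {}              # user -> accrued rewards

    def _update(self, user, block):
        # Advance the global accumulator to `block`, then settle `user`.
        if self.total_staked > 0:
            elapsed = block - self.last_update_block
            self.reward_per_token += elapsed * self.reward_per_block / self.total_staked
        self.last_update_block = block
        owed = self.staked.get(user, 0) * (self.reward_per_token - self.paid.get(user, 0.0))
        self.earned[user] = self.earned.get(user, 0.0) + owed
        self.paid[user] = self.reward_per_token

    def stake(self, user, amount, block):
        self._update(user, block)
        self.staked[user] = self.staked.get(user, 0) + amount
        self.total_staked += amount

    def rewards(self, user, block):
        self._update(user, block)
        return self.earned[user]

pool = StakingRewards(reward_per_block=10)
pool.stake("alice", 100, block=0)
# Alice is the only staker for 50 blocks, so she earns the full 500:
assert pool.rewards("alice", block=50) == 500
```

No call ever reads a past block; the accumulator carries the history forward.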

Oracle / Price History

The contract needs to look back at recent price data.

Alternative: Store a fixed-size rolling window (ring buffer). You get bounded lookback with constant storage.
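A minimal sketch of the ring-buffer idea (the window size of 144 blocks is just an example):

```python
class PriceRing:
    """Fixed-size rolling price window: bounded lookback, constant storage."""

    def __init__(self, size):
        self.size = size
        self.slots = [None] * size   # overwritten in place; never grows

    def record(self, block_height, price):
        # Each new price overwrites the oldest slot.
        self.slots[block_height % self.size] = (block_height, price)

    def price_at(self, block_height, current_height):
        # Only the last `size` blocks are answerable.
        if current_height - block_height >= self.size:
            return None  # outside the window: that slot has been overwritten
        entry = self.slots[block_height % self.size]
        return entry[1] if entry and entry[0] == block_height else None

ring = PriceRing(size=144)
for h in range(300):
    ring.record(h, 100 + h)
assert ring.price_at(250, current_height=299) == 350
assert ring.price_at(100, current_height=299) is None  # aged out of the window
```

Storage stays at exactly `size` entries forever, which is the same bounded-lookback trade-off the pruning proposal makes at the node level.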

Contract Forks and Migration

There are many reasons during a migration or a port where you want to read state from an old contract version.

Alternative: Instead of using at-block to read the old contract’s state at the fork block, do an explicit one-time migration. You read the current state of the old contract - that’s fine because it’s the latest state, not historical, so no at-block is needed. You just have to do the migration before the old contract’s state changes in ways that would invalidate the read.


The insight across all of these: every real use case can be decomposed into one of two categories:

  1. “I need a snapshot of a value at a point in time” - solved by explicit checkpoints.
  2. “I need to compute something over a range of historical values” - solved by accumulators or ring buffers.

While at-block is more elegant and requires less contract-side bookkeeping, it pushes the cost onto every single node operator forever. These alternatives push the cost onto the contracts that actually need the data, which is arguably where it should live.


What We Found

We did a substantial amount of analysis on contracts that executed the at-block function in the past year: only eight looked back more than 6 cycles*, and the vast majority of those were parts of migrations, not something that needs to be maintained in the future. A handful of contracts that use it, or intend to use it, were looking back only a short period of time (1–2 cycles).


The Proposal

We propose limiting at-block to a lookback of six cycles.

Calls targeting blocks older than six cycles would return a structured error; exactly how this is handled is still up for discussion. This would be part of a new Clarity version, and therefore part of a hard fork. As a byproduct, state older than the window could then be safely pruned by nodes. We would still leave the option to run archive nodes with full history, so the API can run against them.
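To make the proposed behavior concrete, here is a rough sketch of a lookback guard. The cycle length, error names, and exact semantics are assumptions for illustration; none of this is actual stacks-node code, and the post notes the error handling is still under discussion.

```python
# Illustrative constants: a Stacks reward cycle spans 2100 burnchain blocks,
# but how the six-cycle window is measured is an assumption here.
CYCLE_LENGTH = 2100
MAX_LOOKBACK_CYCLES = 6

def check_at_block(target_height, current_height):
    """Return (ok, error_tag): reject targets outside the six-cycle window."""
    lookback = current_height - target_height
    if lookback < 0:
        return (False, "err-future-block")
    if lookback > MAX_LOOKBACK_CYCLES * CYCLE_LENGTH:
        # Structured error instead of reading pruned state.
        return (False, "err-block-pruned")
    return (True, None)

# A target 13,000 blocks back exceeds the ~12,600-block window:
assert check_at_block(87_000, 100_000) == (False, "err-block-pruned")
assert check_at_block(99_000, 100_000) == (True, None)
```

Anything inside the window evaluates as today; anything outside it fails fast, which is what lets nodes drop the older MARF state.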

While we think this has massive benefits, there will be some ramifications. Any deployed contracts using at-block with deep lookback will be affected. We want to make sure that this is not a trap - that tokens can’t be stranded in smart contracts if an at-block-using contract is written incorrectly - and catch any other edge cases we haven’t thought of.


What We Want to Hear From You

  • Are you using or intending to use the at-block functionality?
  • Does the six-cycle window work for your use cases? If not, why?
  • Are there use cases we’re missing that need a deeper lookback?
  • Are there any preferences you want to share?

Thank you for your time reading this, your support, and your feedback. We really appreciate it. 2026 is an exciting year. We’ve got lots of fun stuff to show you.

For more insight into what we’re thinking about, check out Stacks Labs CEO Alex Miller’s 2026 Lookahead.

* EDIT: This originally said there were only eight contracts that used at-block; this was incorrect. Hundreds of deployed contracts have at-block. This was our measure of how many had executed the at-block code path looking back more than 6 cycles in the past year.

6 Likes

wow i’m surprised it was this few. i’ve always thought at-block was a nifty feature of Clarity but seems like in actuality not really used often.

1 Like

WHOOPS! Typo. I fixed it - only eight have looked back past 6 tenures in the last year; there are hundreds that use the function

1 Like

A perhaps more drastic alternative that we've discussed would be to remove the at-block expression altogether. If we can identify alternatives for all of the use cases, this would solve the chainstate size problem and also greatly simplify the stacks-node code. Support for at-block has been a source of several bugs over the years and complicates many different areas of the code. Other popular chains do not support this kind of thing, and it doesn't seem to be a necessary feature. Thoughts?

1 Like

For additional discussion/context from the Feb 27 SIP call (Core devs: Brice & Francesco joined the call), check the recap: Weekly SIP Call #164 – Call Recap | Fri, 27 Feb 2026

2 Likes

Note that there is a SIP proposal for this change, comments are welcome!

2 Likes

Thanks for the update Jw! Will be sharing this SIP on today’s SIP call

Hi, I’m going through the SIP text and had a question out of curiosity (purely for educational purposes).

  1. What exactly is causing the chain to grow so quickly at the current pace (~2.7GB per day as quoted)?

  2. What types of data make up most of that growth?

Was the Nakamoto Upgrade the main turning point that increased daily storage growth due to ~5-second blocks? Or were there other major events or upgrades that significantly influenced it?

If Nakamoto was the key factor, what was the approximate daily storage growth rate before it?

Just trying to better understand how the network works, not raising this as something that needs to be solved.

1 Like

What’s making the chain grow so fast is mostly not the raw blocks themselves, but the historical chain state that Stacks keeps in the MARF (it accounts for ~95% of the total space).

A simple way to think about the MARF is that every time a new block arrives and is processed, the node creates and appends (it doesn’t modify the old one!) a new version of the chain’s state.

That new version stores:

  • the things that changed in that block
      • for example, Alice sent Bob some STX, so their balances changed
      • any contract state changed by Clarity execution
  • and pointers to everything else that did not change, so the node can still reconstruct the full state at that exact block

So over time, the node is not just storing the latest balances or the latest contract state. It is storing a long history of state versions across blocks.

That is useful, because it lets the system answer historical queries like “what did the chain look like at block X?” (e.g. what was the balance of Alice at block X), but it is also what takes up most of the disk.

The other important part is that growth depends on both how many blocks we produce and how much activity those blocks contain. More blocks means more state versions. More transactions and more Clarity execution means more state changes inside those versions. In practice, most of the space comes from Clarity-related state/history.

And yes, Nakamoto is the main reason the growth rate increased so much. Before Nakamoto, Stacks effectively produced one block per Bitcoin block, so roughly one every 10 minutes. Since Nakamoto, blocks come every few seconds. So compared to before Nakamoto, the chain is producing MARF tries much more frequently now than it was then.
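The append-only versioning described above can be modeled with a toy Python structure. The real MARF is a Merkelized index over tries and works nothing like a Python dict; this only shows why history accumulates and why lookups like at-block depend on keeping it.

```python
class VersionedState:
    """Toy append-only state history: each block stores only its changes
    plus a pointer to the previous version. Old versions are never
    modified, so the history grows with every block processed."""

    def __init__(self):
        self.versions = []   # one entry per processed block

    def append_block(self, changes):
        parent = len(self.versions) - 1 if self.versions else None
        self.versions.append({"parent": parent, "changes": dict(changes)})

    def get_at(self, key, block):
        # Historical query: walk back through versions, like at-block.
        v = block
        while v is not None:
            entry = self.versions[v]
            if key in entry["changes"]:
                return entry["changes"][key]
            v = entry["parent"]
        return None

state = VersionedState()
state.append_block({"alice": 1000, "bob": 0})    # block 0
state.append_block({"alice": 900, "bob": 100})   # block 1: a transfer
state.append_block({"carol": 50})                # block 2: unrelated change
assert state.get_at("alice", 1) == 900   # Alice's balance as of block 1
assert state.get_at("alice", 2) == 900   # found via the parent pointer
```

Pruning in this model means discarding versions older than some cutoff, after which `get_at` can no longer answer queries that would walk past it.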

The data I have from Feb 20 lines up with that:

  • last 30 days: +82 GB → ~2.73 GB/day
  • last year: +686 GB → ~1.88 GB/day
  • one year prior the node was only using about 321 GB

The reason the last 30 days show a higher daily growth rate than the one-year average is both that there is more activity and that, since the initial Nakamoto release, we have improved block production; we now average around one block every 5 seconds or less.
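As a quick sanity check, the quoted figures are internally consistent:

```python
# Growth figures quoted from the Feb 20 snapshot:
last_30_days_gb, last_year_gb = 82, 686

daily_recent = last_30_days_gb / 30
daily_year_avg = last_year_gb / 365

assert round(daily_recent, 2) == 2.73    # ~2.73 GB/day over the last 30 days
assert round(daily_year_avg, 2) == 1.88  # ~1.88 GB/day averaged over the year

# If the recent rate holds flat, a year adds roughly another terabyte:
assert round(daily_recent * 365) == 998
```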

Hope it helps!

2 Likes

Curious if there are measurements on the expected storage reduction after pruning, and whether benchmarking has been done on a smaller MARF improving contract read/write performance.

1 Like

Does pruning old state (removing deep at-block access) increase the risk of a one-block liquidity theft, say, minting and bridging out millions in stablecoins before the chain halts?

Shouldn’t Signers and/or Miners be required to have full nodes?

Will this be reviewed by the Governance CAB too?