Hey everyone,
We just released Blockstack Core 0.18.0.9 as a hotfix release. If you run a Blockstack Core node on a laptop or find yourself synchronizing often, you should upgrade to this release.
What happened?
You may have noticed some very large zone files getting announced recently that are close to the maximum allowed size of 40K. The Blockstack Core nodes around the world replicate these zone files to one another via the Atlas network, in batches of up to 100 at a time. At the same time, as an anti-DDoS measure, the Atlas protocol does not allow messages beyond a certain size. There was a bug where this maximum message size was less than the maximum possible size of a message containing 100 full zone files.
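The arithmetic behind the bug can be sketched in a few lines. The constant names and the cap value below are illustrative, not Blockstack Core's actual identifiers:

```python
# Hypothetical constants illustrating the bug (names and the cap value
# are ours, not Blockstack Core's actual identifiers).
MAX_ZONEFILE_SIZE = 40 * 1024      # 40K maximum zone file size
ZONEFILES_PER_BATCH = 100          # Atlas replicates up to 100 at a time

# A full batch of maximum-size zone files, before any message overhead:
max_batch_payload = MAX_ZONEFILE_SIZE * ZONEFILES_PER_BATCH  # 4,096,000 bytes

# The bug: the anti-DDoS message cap was smaller than a full batch, so a
# node requesting 100 full-sized zone files received a reply it rejected.
BUGGY_MESSAGE_CAP = 1024 * 1024    # illustrative value only

assert max_batch_payload > BUGGY_MESSAGE_CAP  # the request can never succeed
```

In other words, any cap below roughly 4 MB makes a worst-case batch of 100 zone files unfetchable.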
If you run a node on your laptop (like I do), you may trigger the bug because your Blockstack Core node tries to fetch too much data from Atlas at once while it is synchronizing. This went unnoticed in our tests and in our public node fleet because those nodes usually stay in sync with one another. However, nodes on laptops (or devices with unreliable connectivity) can fall far enough behind that their initial request for missing zone files returns more data than the message cap allows. The request fails, the node gets stuck, and it remains unable to fetch zone files.
We urge node operators to upgrade to 0.18.0.9 as soon as possible. We have pushed it to PyPI, so it can be installed with pip install --upgrade blockstack.
How are we preventing this from happening again?
I have expanded our test framework to specifically stress-test the maximum Atlas message length, ensuring that a Blockstack Core node can fetch 100 full-sized zone files correctly. These tests will run with the rest of our integration tests whenever we prepare a new release.
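A minimal sketch of that kind of stress test is below. The helpers (make_zonefile, serialize_batch) and the overhead allowance are our own stand-ins, not the real Atlas wire format or test harness:

```python
# Sketch of a stress test for the worst-case Atlas batch. All names and
# values here are hypothetical stand-ins, not Blockstack Core internals.
MAX_ZONEFILE_SIZE = 40 * 1024      # 40K maximum zone file size
BATCH_SIZE = 100                   # zone files replicated per batch
MESSAGE_OVERHEAD = 4096            # assumed allowance for message framing
MAX_MESSAGE_SIZE = MAX_ZONEFILE_SIZE * BATCH_SIZE + MESSAGE_OVERHEAD

def make_zonefile(size: int) -> bytes:
    """Build a dummy zone file padded out to the requested size."""
    return b"$ORIGIN test.id\n".ljust(size, b"x")

def serialize_batch(zonefiles: list) -> bytes:
    """Naive batch serialization standing in for the Atlas wire format."""
    return b"".join(zonefiles)

def test_full_batch_fits():
    # A full batch of maximum-size zone files must fit under the cap.
    batch = [make_zonefile(MAX_ZONEFILE_SIZE) for _ in range(BATCH_SIZE)]
    message = serialize_batch(batch)
    assert len(message) <= MAX_MESSAGE_SIZE
```

The point of the test is simply to pin the invariant: the message cap must stay at least as large as a worst-case batch, so a regression in either constant fails the suite immediately.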