Default Gaia Hub is triggering 429 error responses very easily

In the last couple of days, the default Gaia hub has suddenly started responding with 429 errors very frequently. This is rendering our app unusable for some users, with the same code that’s been working for months.

Two simultaneous POST putFile requests are enough to trigger 429s. Some go through, but most fail. This is the pattern we’re seeing: https://imgur.com/7oHAdgL
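For reference, this is roughly the kind of code that triggers it for us; just a minimal sketch using blockstack.js’s UserSession.putFile, with placeholder file names and payloads rather than our actual app code:

```typescript
import { UserSession } from 'blockstack';

// Minimal reproduction sketch (file names and payloads are placeholders,
// not our real app data): two small writes fired at the same time.
async function reproduce(session: UserSession): Promise<void> {
  const writes = ['doc-1.json', 'doc-2.json'].map((path) =>
    session
      .putFile(path, JSON.stringify({ updatedAt: Date.now() }))
      .then((url) => console.log(`${path} stored at ${url}`))
      .catch((err) => {
        // With the current hub behaviour, one or both of these frequently
        // fail with a 429 Too Many Requests response.
        console.error(`${path} failed:`, err);
      })
  );
  await Promise.all(writes);
}
```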

Was there some change in the default Gaia Hub config? Is this some recent behaviour?

This is the likely culprit

@jwiley also mentioned this privately:

If the total used space of a hub exceeds 500 MB in total (calculated 4x per day), it is subject to the throttling that was outlined in the forum a while back.

I was aware of the 10GB max size and the throttling in those situations. But this is a new behaviour we’re experiencing: it’s happening on accounts with low usage, with very small uploads, and somewhat randomly (some POSTs go through).

That 500MB throttle seems to be new information as well.

Plus, neither the 10GB nor the 500MB limit is advertised in any way in the API. How can we handle these cases for users? At the first random 429 response we get, should we tell them to slow down? Or to migrate to a new hub, even though there are no tools for that yet?
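Right now the best we can do on the client is guess: treat any 429 as a throttle signal and back off before retrying. Something like this sketch, where isRateLimitError is a hypothetical helper, since I’m not aware of a documented way blockstack.js exposes the response status:

```typescript
import { UserSession } from 'blockstack';

// Hypothetical helper: blockstack.js surfaces failed writes as thrown
// errors, so the best we can do is sniff the message for a 429 status.
function isRateLimitError(err: unknown): boolean {
  return err instanceof Error && err.message.includes('429');
}

// Wrap a single putFile in exponential backoff for when the hub throttles us.
async function putFileWithBackoff(
  session: UserSession,
  path: string,
  content: string,
  maxAttempts = 5
): Promise<string> {
  let delayMs = 500;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await session.putFile(path, content);
    } catch (err) {
      if (!isRateLimitError(err) || attempt === maxAttempts) {
        throw err; // not a throttle, or we've run out of retries
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2; // back off harder after each 429
    }
  }
  throw new Error('unreachable');
}
```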

I totally understand the reasons behind the limit, but it needs to be predictable, so we can communicate properly with our users and set expectations.


I don’t see any mention of a 500MB limit in the June post, and this has only recently started happening to me as well, even though I’ve actually been deleting data.

What recent changes have been made?

@dant I was going off memory, I’ll check the number when I can get to a computer.

Regardless, I think this is the reason for the 429s, but I’ll check as soon as I’m able to.

The code is set to ignore any hub until it reaches 512MB; once it hits that limit, the rate limiting kicks in using logic similar to this: https://raw.githubusercontent.com/blockstack/atlas/master/rate-limiting/rate-limit.lua
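Roughly, that kind of throttle has the shape of a per-hub fixed-window counter; here’s a simplified sketch of the general idea (not the actual hub code or that Lua script):

```typescript
// Sketch of a generic fixed-window rate limiter, to illustrate the general
// idea only. This is NOT the hub implementation or the linked Lua script.
interface WindowState {
  windowStart: number;
  count: number;
}

class FixedWindowLimiter {
  private windows = new Map<string, WindowState>();

  constructor(
    private readonly maxRequests: number,
    private readonly windowMs: number
  ) {}

  // Returns true if the request is allowed, false if it should get a 429.
  allow(bucketAddress: string, now = Date.now()): boolean {
    const state = this.windows.get(bucketAddress);
    if (!state || now - state.windowStart >= this.windowMs) {
      // New window for this hub: reset the counter.
      this.windows.set(bucketAddress, { windowStart: now, count: 1 });
      return true;
    }
    if (state.count >= this.maxRequests) {
      return false; // over the limit for this window -> 429
    }
    state.count += 1;
    return true;
  }
}
```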

@jwiley this 512MB limit is definitely new, or it was only enabled recently.

We’ve tested with hubs bigger than 512MB in the past, and at most we experienced some delays. The only time we got 429 responses before was when firing too many requests in parallel, never under normal usage. Now any 2 simultaneous requests trigger 429 errors.

If we had been warned about this change beforehand, we could have prepared for it. Now we basically have a product used by thousands of people with no way to set expectations about what can trigger issues. And we’ll have to deal with this while on holidays…

And still these limits and throttles are not advertised in any way in the API. We can’t anticipate them or set expectations, only deal with them afterwards.

I went and deleted everything from my hub and still see the errors.

Yes, this could be the case, since we’re not keeping the records up to date when a write/delete happens. We’re currently calculating the hub sizes on a schedule of 4x per day.
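To illustrate why deleting files doesn’t help right away: the throttle decision reads a cached total that only gets refreshed on that schedule. A simplified sketch (not the actual hub implementation):

```typescript
// Simplified illustration (not the real hub code): the throttle check reads
// a cached total that is only refreshed on a schedule (4x per day), so
// deletes have no effect until the next recalculation runs.
const SIZE_LIMIT_BYTES = 10 * 1024 ** 3; // throttle threshold (e.g. 10GB)

let cachedHubSizeBytes = 0;

// Runs 4x per day on a schedule.
async function recalculateHubSize(
  listAllFileSizes: () => Promise<number[]>
): Promise<void> {
  const sizes = await listAllFileSizes();
  cachedHubSizeBytes = sizes.reduce((total, size) => total + size, 0);
}

// Consulted on every write; deletes do not touch cachedHubSizeBytes.
function shouldThrottle(): boolean {
  return cachedHubSizeBytes > SIZE_LIMIT_BYTES;
}
```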

If you want to share your hub with me privately on Slack (jwiley), I can look into it further to see if something else is going on.

That said, a fix was just deployed to align with the original forum post, so throttling only kicks in once the 10GB threshold is exceeded.

Yep. I’ve seen this behavior with one of my Blockstack IDs. I think it’s ID dependent. I’ve removed all files from an Envelop account and the problem still persists. But on other accounts, with many files, it works correctly. Strange behavior.