Rate limiter has changed?

Just an update: getting about 80% failures now with Picard using the public API.

Tested with a few mirror servers and 0% failures.

I hesitate to add this. I use MusicBrainz only for data when ripping with abcde. Lately I’ve seen a huge uptick in the number of 503 results. Since it takes several minutes to rip a disc, I know that the 503 results are not a result of my own activity. Further, because of the number of 503 errors I was getting and because of the time it took to restart a rip, I hacked the abcde code so that a 503 result is re-tried immediately. Since then I have had only one disc go through the 30 re-tries I allow before failure. On the one hand, yay for me; on the other, by trying to limit access you may actually increase it.

JR


That’s always a danger when creating chokepoints in any kind of traffic (vehicles, people, networks, …). But as stated:

the main focus of rate limiting are not taggers, but organisations sucking down data.

(Maybe some kind of proof-of-work will have to be rolled into the next generation of the web services…?)
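To make that concrete, the usual example is a hashcash-style scheme: the server hands out a challenge, the client has to burn a little CPU finding a nonce, and the server can verify the answer with a single hash before serving the request. A rough Python sketch, purely speculative (nothing like this exists in the MusicBrainz web service today, and the challenge string and difficulty are arbitrary):

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int = 16) -> int:
    """Client side: find a nonce so that sha256(challenge + nonce)
    has `difficulty` leading zero bits -- the classic hashcash idea."""
    target = 1 << (256 - difficulty)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int = 16) -> bool:
    """Server side: one hash to check that the client did the work."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

# A tagger doing one lookup barely notices the cost; a scraper doing
# millions of lookups pays for every single one.
nonce = solve_pow("mbid:some-release-id")
assert verify_pow("mbid:some-release-id", nonce)
```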


I hope by “re-tried immediately” you mean “re-tried after at least 1 second delay” - otherwise, you’re abusing the system yourself.
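For anyone tempted to do the same in their own scripts, the well-behaved version of that hack looks roughly like this (a minimal Python sketch, not the actual abcde shell patch; the User-Agent string and backoff values are placeholders to adapt):

```python
import time
import requests  # assumes the requests library is installed

MB_RELEASE_URL = "https://musicbrainz.org/ws/2/release/"
USER_AGENT = "abcde-ripper/1.0 ( someone@example.org )"  # placeholder contact

def fetch_release(mbid: str, max_retries: int = 30):
    """Fetch a release from the MusicBrainz web service, retrying on 503
    but always waiting at least one second between attempts."""
    headers = {"User-Agent": USER_AGENT}
    for attempt in range(max_retries):
        resp = requests.get(MB_RELEASE_URL + mbid,
                            params={"inc": "recordings", "fmt": "json"},
                            headers=headers, timeout=30)
        if resp.status_code != 503:
            resp.raise_for_status()
            return resp.json()
        # 503 means the rate limiter kicked in: back off before trying again,
        # never faster than the documented 1 request per second.
        time.sleep(max(1.0, attempt * 0.5))
    raise RuntimeError(f"gave up after {max_retries} 503 responses")
```

The important bit is the sleep: immediate retries just add to the very load the rate limiter is trying to shed.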


So things have been really good since last year, but the failure rate loading releases into Picard has gotten pretty high again over the last couple of weeks. I thought there was “retry on failure” functionality built into Picard. Has the failure rate just gotten higher than it can handle, or is something else going on that is bringing back this old problem?

As you found out, the feature is not yet included: https://tickets.metabrainz.org/browse/PICARD-807