Adding rate limiting to the AcousticBrainz API

Hi all,
Over the last few years the load on the AcousticBrainz servers has steadily been increasing, and unfortunately we’ve got to the point where it’s affecting the stability of the site for all users. As a result of this we’ve been doing some work to reduce the load on the servers.
This week we released some optimisations making lookup requests and bulk requests faster. We’ve seen a 50% reduction in response time to API queries and a clear reduction in load on our database server.

In the coming weeks we’re also going to implement rate limiting on the AcousticBrainz API. This will be similar to the ListenBrainz rate limiting, where we will allow a set number of queries per IP address per time window. Responses will include information in the headers about the number of requests remaining and the size of the request window.

Our initial plan is to allow 10 queries every 10 seconds per IP address (an average of 1 query per second). We have performed an analysis of our current traffic and almost all clients that access our API are under this threshold. Note that you can still perform bursty lookups (e.g. 5 queries in quick succession) as long as you stay under the overall limit. Remember that our bulk lookup queries allow you to request data for up to 25 items at once, with the same rate limit constraints.
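As an illustration of batching, here is a rough Python sketch of splitting a list of recording MBIDs into bulk queries of 25. The endpoint URL and the `recording_ids` parameter format are placeholders based on our current docs, so please check the API documentation for the exact shape:

```python
def chunk(items, size=25):
    """Split a list into batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def bulk_urls(mbids, base="https://acousticbrainz.org/api/v1/low-level"):
    """Build one bulk-lookup URL per batch of up to 25 MBIDs.

    The parameter name and separator here are assumptions; consult the
    API documentation for the exact format.
    """
    return [base + "?recording_ids=" + ";".join(batch) for batch in chunk(mbids)]
```

Fetching 250 recordings this way takes 10 requests, which fits exactly inside one 10-second window.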

If you have software that accesses the AcousticBrainz API, please ensure that it can understand HTTP 429 responses and the X-RateLimit-Remaining response header. If possible, consider using bulk lookup queries to minimise the number of queries that you make to us.
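For example, a client might handle 429 responses along these lines. This is only a sketch using the Python standard library; the Retry-After fallback is a common HTTP convention, not something our API guarantees, so the exponential backoff covers the case where it is absent:

```python
import time
import urllib.error
import urllib.request

def retry_delay(status_code, headers, attempt):
    """Return seconds to wait before retrying, or None if no retry is needed."""
    if status_code != 429:
        return None
    # Retry-After is a standard HTTP header; whether this API sends it is
    # an assumption, so fall back to exponential backoff if it is missing.
    return float(headers.get("Retry-After", 2 ** attempt))

def fetch(url, max_retries=5):
    """GET a URL, backing off whenever the server answers HTTP 429."""
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            delay = retry_delay(err.code, err.headers, attempt)
            if delay is None:
                raise  # some other HTTP error; don't retry
            time.sleep(delay)
    raise RuntimeError("rate limited too many times: " + url)
```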

We’re also going to prioritise getting regular data dumps out in the coming months for people who want to store AB data themselves without making API lookups.

We will update this thread when we finish releasing rate limiting. Please let us know if you have any questions about this feature.


I would suggest that the fairest thing to do would be to make the full dump available before introducing rate limiting; that would give users a way to preserve their service.

At the moment our priority is ensuring that the server doesn’t fall over from excessive load. We would prefer that a small number of queries get limited rather than the entire service becoming unavailable because of an application or server failure.

In practical terms, we’re not immediately releasing this rate limiting. We are giving a few weeks which we hope will be enough to allow users of the API to make any modifications to their software. We believe that 25 queries per second per IP address is more than fair, and the API is not going away, so there will continue to be constant service. As far as I understand, your main software applications are client apps that users run on their own computers, so there should already be a pretty good distribution of queries across different IP addresses. Our analysis of our logs already shows that this limit will only affect a few clients who are disproportionately accessing the API.

As I’ve explicitly said twice in the last month, dumps are on our roadmap and we’re hoping to get them out as soon as all of the volunteers on this project can coordinate time together to finish development and testing.


I understand that you need to do something; in fact, I did say back in September 2016 that dumps should be fixed to prevent overloading of the API. I will certainly introduce rate limiting to the applications. However, it’s one thing for me to make a code fix and quite another to make a release available and get everyone to update to it. If you have to do it so soon, so be it, but I don’t think a couple of weeks’ notice is really adequate.

I am also confused about what the limit is: in the first post you say the limit is 10 queries per 10 seconds (i.e. 1 query per second), but in this last post you say 25 queries per second. Which is it?

Implementing fixed-rate throttling is what I intend to do for now; understanding variable rate limiting is considerably more complicated.
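Something along these lines is what I have in mind: a minimal client-side sliding-window throttle, not tied to any particular API, that never lets more than `limit` calls happen in any `window`-second span:

```python
import time

class FixedRateThrottle:
    """Client-side throttle: allow at most `limit` calls per `window` seconds."""

    def __init__(self, limit=10, window=10.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.calls = []  # timestamps of calls made within the current window

    def wait_time(self, now=None):
        """Seconds to wait before the next call is allowed (0.0 if allowed now)."""
        now = self.clock() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) < self.limit:
            return 0.0
        # The oldest call in the window determines when a slot frees up.
        return self.calls[0] + self.window - now

    def acquire(self):
        """Block until a call is allowed, then record it."""
        delay = self.wait_time()
        if delay > 0:
            time.sleep(delay)
        self.calls.append(self.clock())
```

Calling `throttle.acquire()` before every request should keep a client inside the announced 10-queries-per-10-seconds limit while still permitting short bursts.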

This is what you will want to do eventually, as the plan is for all MetaBrainz services to switch to this model internally as well, so the number of allowed requests can dynamically be decreased during stressed periods and increased during calm periods. It’s of course perfectly fine to start with implementing the static method right now—just saying that at some point you will want to switch to the variable one in any case. :slight_smile:
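As a rough sketch of what header-driven pacing could look like: only X-RateLimit-Remaining is confirmed in the announcement above, and the reset header name here is modelled on ListenBrainz, so treat both as assumptions to verify against the actual responses:

```python
def pause_after(headers):
    """Seconds to pause after a response so the client adapts to whatever
    limit the server currently advertises, instead of hardcoding one.

    Header names other than X-RateLimit-Remaining are assumptions modelled
    on the ListenBrainz rate-limiting scheme.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset_in = float(headers.get("X-RateLimit-Reset-In", 0))
    if remaining > 0:
        # Spread the remaining requests evenly over the rest of the window.
        return reset_in / remaining
    return reset_in  # window exhausted: wait for it to reset
```

Because the pause is computed from each response, the client automatically slows down when the server shrinks the window during stressed periods and speeds back up when it relaxes.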


Sorry, I mixed queries and recordings in this statement a little unclearly.
We’ll accept 10 queries every 10 seconds, and a query can include up to 25 recordings. This means an average of 1 query per second, resulting in up to 25 recordings per second.