Could you point out which part of @rob’s post gave you the impression that nothing will be done, so that he can improve the wording?
I agree I was a little harsh and I apologize, but I had just spent over 2 hours trying to check my tags with Picard (something that used to take 5-10 minutes), so I was a little frustrated.
But then Jesus2009 came up with a perfect solution thanks to me expressing my frustration so I’m glad I did.
I did not code the solution, I do not really know Perl or Python.
But I can give it a try someday.
I have created a ticket (PICARD-807).
So, is Picard usable for anyone? With over 50% of lookups failing on my last attempt to tag my library, I just gave up. I’m not interested in retrying releases over and over and over again. I’m so frustrated I’m about to give up on MusicBrainz. It no longer works for me, so why should I contribute and edit?
There is an open ticket to look into the problem. https://tickets.musicbrainz.org/browse/PICARD-807
Anyone have any workarounds, etc, that would let me use Picard for tagging again?
It’s not just Picard, I’m getting a lot of failures as well. I’m glad MB is becoming a well-known and well-used tool, but I’d like to be able to use it too… anyone running an open mirror we can use?
Have you read all this? State of things, Q2 2016
I hope an interim solution can be found, but I would rather be patient and wait for a real fix than cause the real fix to be delayed by a lot of effort going into a band-aid solution.
I had not read that, and it is really what I was looking for, some insight into what is going on with the whole system. Sounds like much patience is in order.
Yep, I’ve been bringing this up in IRC a few times recently. It’s hard not to come across as ungrateful when bringing up problems with a free API server, so I wrote this little blog post with my thoughts:
I’ve switched to http://musicbrainz-mirror.eu:5000/ for now on my personal sites, but for Kodi we can’t do that as the user base is so huge (in fact, I’d say we are probably responsible for the overload). Our user base is growing exponentially, and anyone scraping music will hit the MB web service thousands of times. We tried running a mirror, but without any Linux server admins it just got out of sync after a while and we took it down. I see there is a new VM, which motivates me to look at this again.
It would be great if someone ran a paid mirror or something that open source projects could use at a free or discounted rate. Having it all centralized on a single server run by the MetaBrainz Foundation does not seem like the best design to me. The server I linked to above runs two load-balanced machines due to demand. Leveraging the cloud for this is also a good idea.
Super interesting, thanks @zag2me!
(even for a non-tech guy like me)
One of the things that might be causing a lot of load for the server is the ‘changes to your library’ email notification/subscription/link. It almost never loads for me, and seems to be running a crazy number of queries (assuming that’s what’s happening when it’s trying to load).
Perhaps it would be worth shutting down a few complicated operations for a short while if it means things are going to be usable again in the interim.
@zag2me: MetaBrainz is in the process of improving those things, and many of the recommendations in the blog post will be implemented.
Here are our plans:
- improve the way we currently rate limit; we are experimenting with a solution based on OpenResty at the moment, and this may be in place very soon
- increase bandwidth and use faster hardware; this is the NewHost move we’re initiating, and it should be done within the next 3 months
- use a much faster database system, with better hardware (SSDs) and read-only slaves
- split the web service and the websites; this is part of the move to NewHost
- web service version 3 is planned, with API keys and JSON built in; it will come later
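For the curious, the rate-limiting idea in the first bullet is usually implemented as a token bucket. Here is a toy sketch in Python just to illustrate the concept; the class, names, and numbers are all made up for this example, and the real work is planned in OpenResty, not Python:

```python
import time


class TokenBucket:
    """Toy token-bucket rate limiter: allow `rate` requests per second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request may proceed
        return False      # caller should receive a 503 / Retry-After


# Hypothetical policy: 1 request per second, bursts of up to 5
bucket = TokenBucket(rate=1, capacity=5)
```

A well-behaved tagger that stays under the rate never notices the limiter; a scraper hammering the bucket drains it and starts seeing rejections instead of slowing everyone else down.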
About the web service: we are using all the bandwidth we can afford for now, and a good part of it is wasted by web service abusers; the new rate limiter software will greatly improve the situation.
About Picard, I’m lacking the time to work on it, so if some Python folks are around… feel free to pick up a few issues in JIRA and push a few PRs.
Sadly I don’t think the latest changes have helped much…
I loaded 70 albums into Picard and 17 of them failed. That’s roughly a 24% failure rate.
While that may fix the problem in Picard, it’s just going to add to the API calls.
If the problem is bandwidth, why not leverage CDN providers like Cloudflare? They have made bandwidth a non-issue for most big sites these days. I’m on the free tier and it saves me about 3 TB a month.
The API calls it will add should be negligible compared with the hammering we’re taking from external sources unrelated to Picard, so I don’t think that should be too problematic (not saying Cloudflare wouldn’t help; I have no idea about that stuff, but thankfully I’m not in charge of understanding it either).
Just an update: I’m getting about 80% failures now with Picard using the public API.
Tested with a few mirror servers and 0% failures.
I hesitate to add this. I use Musicbrainz only for data for use when ripping with abcde. Lately I’ve seen a huge uptick in the number of 503 results. Since it takes several minutes to rip a disc, I know that the 503 results are not a result of my activity. Further because of the number of 503 errors I was getting and because of the time it took to restart the rip, I hacked the abcde code so that a 503 result is re-tried immediately. Since then I have only had one disc go through the 30 re-tries I allow before failure. On the one hand, yea for me; on the other, by trying to limit access you may actually increase it.
That’s always a danger when creating chokepoints in any kind of traffic (vehicles, people, networks, …). But as stated
the main focus of rate limiting are not taggers, but organisations sucking down data.
(Maybe some kind of proof-of-work will have to be rolled into the next generation of the web services…?)
I hope by “re-tried immediately” you mean “re-tried after at least a 1 second delay” - otherwise, you’re abusing the system yourself.
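The polite version of that retry loop might look something like the sketch below. It’s written in Python rather than abcde’s shell for clarity; the function name and defaults are made up for illustration, with the 1-second minimum delay taken from the rate-limit rule above and the 30-attempt cap from the earlier post:

```python
import time
import urllib.error
import urllib.request


def fetch_with_retry(url, max_retries=30, delay=1.0):
    """Fetch `url`, retrying on HTTP 503 with at least `delay` seconds
    between attempts, so we stay at or under 1 request per second."""
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if e.code != 503:
                raise            # only retry on "service unavailable"
            time.sleep(delay)    # back off before the next attempt
    raise RuntimeError(f"gave up after {max_retries} retries: {url}")
```

Sleeping between retries instead of hammering the server immediately means a single stuck disc costs you at most `max_retries * delay` seconds, rather than contributing to the very overload that caused the 503 in the first place.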
So things have been really good since late last year, but the failure rate loading releases into Picard has gotten pretty high again over the last couple of weeks. I thought there was “retry on failure” functionality built into Picard; has the failure rate just gotten higher than it can handle, or is something else going on that is bringing back this old problem?