Very frustrating


I decided to tidy up my library of 6,000 or so FLAC files. I spent the best part of 12 hours using Picard to sort them out. When I came to save all my changes, disaster: Picard crashed with a spinning beachball. I left it for some time to see if it would come back to life, but no, absolutely nothing. I restarted Picard hoping it would still have kept my unsaved changes, but no. All my hard work is gone. I really can’t be bothered to go through all that again.

Should I have been saving small numbers of files as I went along instead?

Thanks for any help.

Picard 1.4.2

OS 10.13.3
Intel Core i7 4 GHz
AMD Radeon R9 M395 2 GB


Yes, totally! Actually, that’s true for any software: don’t work for too long without saving.
Sorry to hear about this disaster. There is work in progress to warn people who try to load very large numbers of files.


Hi - Thanks. Well yes, that would have completely prevented me from losing all my work. I really wasn’t expecting a complete crash and had no idea Picard would have trouble coping with 5,000 files. I can’t be bothered to go through all that again, so my library will remain as it was pre-Picard. Oh well, you live and learn. I’ll use something a little more robust next time.

1 Like

That sucks! So sorry that happened to you! Did the programme actually close, though, or was it just unresponsive to input? When Picard is saving loads and loads of files, it can become unresponsive (with beachball). On Linux systems it sometimes even returns a “This program has become unresponsive” message, but that is because its resources are almost entirely committed to writing files. In either case, this is something that might be handled better.

Yes it certainly does suck!

Picard didn’t close. It had the spinning beachball for ages, and when I checked in Activity Monitor it said Picard had become unresponsive. It seems to have real problems with large numbers of files, which seems bizarre given that you’re naturally going to be dealing with lots of files in the first place. I’m very hesitant to use it again, which is a shame as I quite liked it.

It doesn’t seem right to me that 5000 files should cause a problem. 50,000 maybe, but Picard ought not to lock up at just 5000. A ticket that says

Fairly often we get messages from users like “I loaded 50k files on Picard and it’s going very slowly”. We should have some sort of alert for these when someone tries to load more than 5k or 10k files in one go

seems pretty vague to me and does not address the underlying problem. The cause of the lock-ups/hangs has not been explained. Frequently, it is not a crash, but just a (longish) temporary non-responsiveness, which may be reduced by increasing the priority of the process.
Picard is a great program (and free) but the random nature of this problem is its most irritating feature and puts off a good many new users who might otherwise become useful contributors to the MusicBrainz community.
That said, one should ask why load even 5,000 tracks in one go? If they are only scantily tagged and the hope is that Picard will fix them, then that is a bit optimistic. Even assuming they are all in the MusicBrainz database, some human review will be required afterwards, so some smaller chunking would make the task easier. If they already have MBIDs and it is just a retagging exercise, however, then Picard ought not to complain.
I suspect that this is not a simple fix and am reluctant to raise a new ticket given the current backlog. Also, I am aware that v2.0 is still in development - might it be possible that this will operate a little more robustly?

It had the spinning beachball for ages and when I checked in Activity Monitor it said Picard had become unresponsive.

It is highly likely that Picard was still working. It can take a long time to amend that many files. The interface lockups are an unresolved problem, but despite the OS reports of unresponsiveness, the program is working. I get the same thing from Calibre when performing large batch operations. As @yvanzo said, it’s wise to save as you go.


The approach we take with Songkong is to save changes at an album level as it goes along; maybe Picard could take a similar approach?
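In pseudocode terms, the album-level approach amounts to committing one album's worth of changes at a time, so that a crash mid-run loses at most the current album rather than the whole session. A minimal Python sketch of the idea (the track list and the `save_batch` helper are purely illustrative, not Picard or Songkong internals):

```python
from itertools import groupby

# Hypothetical track list: (path, album) pairs, e.g. as read from tags.
tracks = [
    ("01.flac", "Album A"), ("02.flac", "Album A"),
    ("01.flac", "Album B"), ("02.flac", "Album B"),
]

def save_batch(paths):
    """Placeholder: a real tagger would flush each file's metadata to disk here."""
    return list(paths)

saved = []
# Sort by album, then commit one album's worth of changes at a time,
# so a crash mid-run loses at most the current album.
for album, group in groupby(sorted(tracks, key=lambda t: t[1]), key=lambda t: t[1]):
    saved.extend(save_batch(path for path, _ in group))

print(len(saved))  # → 4: all four tracks saved, album by album
```

The key design point is that each `save_batch` call is a durable checkpoint; an unsaved-changes queue never grows beyond one album.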

How frustrating!

I echo the other people’s comments: Picard is very good, but my experience is that with Picard, as with many other pieces of software, it’s always wise to save work yourself a few times an hour, or do your work in small enough batches that you limit how much work you can lose between saves.

If you would like to contribute to Picard’s development, you could perhaps copy this post into a comment on a relevant Picard ticket. Then the developers working on that problem will be able to consider your experience.

I encourage you to go for a walk, enjoy the day, and then sometime come back to your music library and try again. But this time, take it 20 files at a time or so, and save each batch before starting the next.
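The batch-of-20 workflow can be sketched in a few lines of Python (the library contents and batch size here are illustrative; the point is simply that no more than one batch of work is ever unsaved):

```python
def chunks(items, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical library of 95 file paths.
library = [f"track_{n:03}.flac" for n in range(95)]

# Tag and save one batch of ~20 files before starting the next.
batches = list(chunks(library, 20))
print(len(batches))      # → 5 batches: four of 20 and one of 15
print(len(batches[-1]))  # → 15
```

With this pattern, a crash costs at most one batch of edits instead of a 12-hour session.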

Good luck!

1 Like

That seems the right approach to me - coupled with a button the user can hit which will pause/stop it when the next album is done.

1 Like