Poor disk throughput with Picard 2.10 when saving



Windows 11 Professional, Picard 2.10 portable with default settings, 15k MP3 files without tags, everything up to date.

This issue was already noticeable in Picard 2.9.

It applies to saving tags in the final phase of a Picard session.

When you select all files in the right pane and click Save, the estimated remaining time climbs to over 12 minutes.

Disk read/write throughput is about 60 MB/s each.

If I then click anywhere in the right panel, disk throughput quickly increases and the remaining time quickly decreases.

Throughput is then about 300 MB/s read and 300 MB/s write.

I think the problem can also be reproduced with a 5k-file sample.

I have no idea what the cause is, but I do wonder whether it is a result of Picard attempting to summarise the metadata for 67,670 tracks, which is CPU-intensive…

@outsidecontext ?


Technical details for the drive C: hardware:

“512 GB SSD, PM991a, M.2 2242, PCIe 3.0 x4 NVMe”

@InvisibleMan78 I know the parameters of my disk. :wink:

Please add an issue at tickets.metabrainz.org. This forum post will soon be buried under other forum activity.

Likely yes

As (literally) a world expert on Python’s multi-threading performance issues, I can say that this is a likely cause IF the disk slowdown only happens during the metadata summarization.

In essence, Python’s GIL implementation uses “competition” between threads to decide which thread gets the GIL next, and this does not handle CPU-heavy threads well. I/O threads, which do a small amount of CPU work and then wait for I/O again, should ideally get priority, with the CPU-heavy thread soaking up whatever CPU is left over. Instead, CPU-heavy threads get more than their fair share of CPU, and I/O threads end up waiting, which reduces I/O throughput substantially. I have run benchmarks proving that this is a real issue, but despite this the Python leadership have not been interested in fixing this decade-old problem.
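The effect described above can be demonstrated in a minimal, self-contained sketch (this is not Picard code): an "I/O-like" thread that sleeps briefly and then needs the GIL back takes noticeably longer to finish when a pure-Python CPU-bound thread is spinning alongside it.

```python
import threading
import time

def cpu_bound(stop):
    # Pure-Python busy loop: holds the GIL for up to the interpreter's
    # switch interval (sys.getswitchinterval(), 5 ms by default) between
    # scheduling points.
    x = 0
    while not stop.is_set():
        x += 1

def timed_io_like_task(n=50):
    # Simulate an I/O thread: many short waits. After each sleep the
    # thread must re-acquire the GIL, competing with any CPU-bound thread.
    start = time.perf_counter()
    for _ in range(n):
        time.sleep(0.001)
    return time.perf_counter() - start

# Baseline: no GIL competition.
solo = timed_io_like_task()

# Contended: a CPU-bound thread is spinning at the same time.
stop = threading.Event()
t = threading.Thread(target=cpu_bound, args=(stop,))
t.start()
contended = timed_io_like_task()
stop.set()
t.join()

print(f"solo:      {solo:.3f}s")
print(f"contended: {contended:.3f}s")
```

On CPython the contended run is typically several times slower than the baseline, which is the same shape as the slowdown reported here: the disk is idle not because it is busy, but because the thread feeding it cannot get scheduled.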


@Sophist my disk can do more.

CrystalDiskMark results:


CrystalDiskMark 8.0.4 x64 (C) 2007-2021 hiyohiyo
Crystal Dew World: https://crystalmark.info/

  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes

[Read]
SEQ 1MiB (Q= 8, T= 1): 2694.313 MB/s [ 2569.5 IOPS] < 3106.72 us>
SEQ 1MiB (Q= 1, T= 1): 1843.561 MB/s [ 1758.2 IOPS] < 568.52 us>
RND 4KiB (Q= 32, T= 1): 620.069 MB/s [ 151384.0 IOPS] < 204.64 us>
RND 4KiB (Q= 1, T= 1): 61.593 MB/s [ 15037.4 IOPS] < 66.40 us>

[Write]
SEQ 1MiB (Q= 8, T= 1): 1861.993 MB/s [ 1775.7 IOPS] < 4494.07 us>
SEQ 1MiB (Q= 1, T= 1): 1878.815 MB/s [ 1791.8 IOPS] < 557.76 us>
RND 4KiB (Q= 32, T= 1): 447.556 MB/s [ 109266.6 IOPS] < 285.22 us>
RND 4KiB (Q= 1, T= 1): 186.507 MB/s [ 45533.9 IOPS] < 21.87 us>

Profile: Default
Test: 1 GiB (x5) [C: 34% (161/476GiB)]
Mode: [Admin]
Time: Measure 5 sec / Interval 5 sec
Date: 2024/01/01 20:31:26
OS: Windows 11 Professional [10.0 Build 22631] (x64)

Even better, with custom settings instead of the defaults.



The ~60 MB/s read throughput observed in Picard is suspiciously close to the random 4 KiB (Q=1) read result, and note its IOPS:

RND 4KiB (Q= 1, T= 1): 61.593 MB/s [ 15037.4 IOPS] < 66.40 us>
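As a sanity check, that MB/s figure is exactly what the IOPS number implies: at queue depth 1 the drive completes one 4 KiB request at a time, so the throughput ceiling is IOPS × block size. A quick arithmetic sketch (using the numbers quoted above):

```python
# Throughput implied by the RND 4KiB (Q=1) read IOPS from CrystalDiskMark.
iops = 15037.4              # completed 4 KiB requests per second
block = 4096                # bytes per request (4 KiB)
mb_per_s = iops * block / 1_000_000   # CrystalDiskMark defines MB as 10^6 bytes

print(f"{mb_per_s:.1f} MB/s")  # ≈ 61.6 MB/s
```

If Picard's save path issues one small read/write at a time from a single thread, ~60 MB/s is the expected ceiling regardless of how fast the drive's sequential numbers are.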

Maybe this will help you.

Nope. Picard is a complex multi-threaded app and performance tuning is an art, not helped by Python GIL issues.

It needs to perform reasonably across a range of user environments: from slow processors with slow spinning disks to fast processors with fast SSDs, including remote / network file systems (Windows file shares, SAMBA, etc.), and running on Windows, Linux and macOS, all from a single code base. We cannot tune for one environment at the expense of others; instead we need to achieve reasonably good performance, relative to raw capability, across all of these variants.
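For what it's worth, CPython exposes only one coarse knob for the GIL scheduling behaviour discussed earlier: the interpreter's thread switch interval. A sketch of how an application could experiment with it (nothing in this thread says Picard actually does this):

```python
import sys

# The switch interval bounds how long a CPU-bound thread may hold the
# GIL before other threads are allowed to request it (default: 5 ms).
print(sys.getswitchinterval())

# Lowering it makes CPU-heavy threads yield the GIL more often, which can
# reduce I/O-thread starvation, at the cost of more switching overhead.
sys.setswitchinterval(0.001)  # 1 ms
```

This does not fix the underlying fairness problem, since which thread wins the GIL next is still a race rather than a priority decision, but it shortens the worst-case wait for I/O threads.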
