I have no idea what the cause of this is, but I do wonder whether it is a result of Picard attempting to summarise the metadata for 67,670 tracks, which is CPU-intensive…
As (literally) a world expert on Python’s multi-threading performance issues, I can say that this is a likely cause IF the disk slowdown only happens during the metadata summarization. In essence, Python’s GIL implementation uses “competition” between threads to decide which thread gets the GIL next, and this does not handle CPU-heavy threads well. Ideally, I/O threads - which do a small amount of CPU work and then wait for I/O again - should get priority, with the CPU-heavy thread soaking up whatever CPU is left over. Instead, the CPU-heavy threads get more than their fair share of CPU, the I/O threads end up waiting, and I/O throughput drops substantially. I have run benchmarks demonstrating that this is a real issue, but despite this the Python leadership have not been interested in fixing this decade-old problem.
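The effect described above can be sketched in a few lines of plain Python (this is an illustration of the general GIL behaviour, not Picard’s code). A pure-Python CPU-bound thread only yields the GIL after another thread has waited a full switch interval (5 ms by default), so a thread that has just finished an I/O wait overshoots its wake-up time while it queues for the GIL:

```python
import threading
import time

def cpu_hog(stop_event):
    # Pure-Python busy loop: it only yields the GIL once another thread
    # has been waiting for a full switch interval (5 ms by default).
    while not stop_event.is_set():
        sum(range(10_000))

def avg_wakeup_overshoot(samples=30, nap=0.001):
    # Simulate an I/O-ish thread: sleep (GIL released), then measure how
    # far past the requested nap we actually resume - the overshoot
    # includes the time spent waiting to reacquire the GIL.
    total = 0.0
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(nap)
        total += (time.perf_counter() - t0) - nap
    return total / samples

baseline = avg_wakeup_overshoot()      # no competing CPU work

stop = threading.Event()
hog = threading.Thread(target=cpu_hog, args=(stop,), daemon=True)
hog.start()
contended = avg_wakeup_overshoot()     # competing with the CPU hog
stop.set()
hog.join()

print(f"avg wake-up overshoot alone:        {baseline * 1e3:.2f} ms")
print(f"avg wake-up overshoot with CPU hog: {contended * 1e3:.2f} ms")
```

On a typical CPython build the contended figure is dominated by the 5 ms switch interval, which is exactly the “I/O threads end up waiting” pattern: every disk read in the I/O thread pays an extra GIL-reacquisition tax while the CPU-heavy thread runs. The interval itself can be inspected or changed with `sys.getswitchinterval()` / `sys.setswitchinterval()`, but that only trades latency against context-switch overhead; it does not give I/O threads priority.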
Nope. Picard is a complex multi-threaded app and performance tuning is an art, not helped by Python GIL issues.
It needs to perform reasonably across a range of user environments, from slow processors with slow spinning disks to fast processors with fast SSDs, including remote / network-accessed file systems (Windows file shares, Samba etc.), and running on Windows, Linux and macOS - all from the same single code base. We cannot tune for one environment at the expense of others - instead we need to try to achieve reasonably good performance, relative to raw capability, across all of these variants.