I’ve been using MusicBrainz Picard for a while now to clean up and tag my music library, and I really appreciate how well it handles large collections. Recently, I’ve started learning more about machine learning and how it can be applied to music metadata, like predicting genres or moods, or even suggesting better tags based on audio features.
Has anyone here ever exported Picard-tagged libraries to use in ML projects? I’m curious if the metadata it collects could help train models, especially when combined with acoustic data.
While exploring this, I came across an MLOps course that covers the full machine learning pipeline, including deployment and monitoring, which got me thinking about how to automate and scale something like this with music data.
If anyone’s experimented with using Picard data in ML workflows or knows of related projects, I’d love to hear more. Also open to ideas on how to structure metadata for training purposes.
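To make that concrete, here’s the kind of export I had in mind: a rough sketch using the mutagen library to dump Picard-written tags into a flat CSV for experiments. I’m assuming mutagen’s “easy” tag keys (genre, mood, musicbrainz_trackid) line up with what Picard writes, which I haven’t verified across every format:

```python
# Rough sketch: walk a Picard-tagged library and export tags to CSV.
# Assumes the mutagen library; the tag keys below are mutagen's "easy"
# names and may not cover every format Picard writes.
import csv
from pathlib import Path

from mutagen import File as MutagenFile

FIELDS = ["path", "recording_mbid", "artist", "title", "genre", "mood"]

def export_tags(music_dir: str, out_csv: str) -> None:
    rows = []
    for path in Path(music_dir).expanduser().rglob("*"):
        if not path.is_file() or path.suffix.lower() not in {".flac", ".mp3", ".ogg", ".m4a"}:
            continue
        audio = MutagenFile(path, easy=True)  # easy=True -> plain text keys
        if audio is None or not audio.tags:
            continue
        tags = audio.tags
        rows.append({
            "path": str(path),
            "recording_mbid": "; ".join(tags.get("musicbrainz_trackid", [])),
            "artist": "; ".join(tags.get("artist", [])),
            "title": "; ".join(tags.get("title", [])),
            "genre": "; ".join(tags.get("genre", [])),
            "mood": "; ".join(tags.get("mood", [])),
        })
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

export_tags("~/Music", "picard_tags.csv")
```

Keeping the recording MBIDs seems worthwhile, since they’d let the rows be joined against acoustic data from other sources later.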
The only way I could see it being used in a useful way would be to feed the model tracks that are already tagged with the correct genre, mood, etc., so it could “listen” to the recordings and learn which sounds go with which genres and moods. Then it could leverage that to automatically tag recordings that are missing the genre and mood tags? But I ain’t no expert!!!
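Something like that could be sketched with off-the-shelf tools. Here’s a rough, untested outline, assuming librosa for the “listening” part, scikit-learn for the learning part, and a CSV like the one above; none of it is anything Picard or MusicBrainz actually provides:

```python
# Rough outline of the "listen and learn" idea: compute audio features
# for tracks that already have a genre tag, train a classifier, then
# apply it to untagged tracks. The CSV layout, mean MFCCs as features,
# and the random forest are all assumptions for illustration.
import csv

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def listen(path: str) -> np.ndarray:
    # Mean MFCCs over the first minute: a crude summary of how a track sounds.
    y, sr = librosa.load(path, duration=60.0, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

X, labels = [], []
with open("picard_tags.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row["genre"]:  # only learn from tracks that already have a genre
            X.append(listen(row["path"]))
            labels.append(row["genre"])

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), labels, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Suggesting a genre for an untagged recording:
# model.predict([listen("some_untagged_file.flac")])
```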
This sounds like (at least generally) what the now-defunct AcousticBrainz project was doing before it got dropped due to consistently meh results (especially outside of popular music and genres). I don’t think AcousticBrainz used machine learning tho…