I’m new to AcousticBrainz; I discovered it through MusicBrainz and the beets plugin for submitting AcousticBrainz data. I’m trying to read recording pages, and I have a few questions.

First, what is the YouTube link? Where does it come from? Sometimes it is accurate, like here: http://acousticbrainz.org/3d1a7c1c-7445-443a-8afc-31a382bcd88a?n=4 but sometimes it is completely off (like here: http://acousticbrainz.org/5810528d-95ea-4976-b761-c8e4ef4a728f ).

My other question is about the ‘voice’ parameter: how should I understand it? It seems pretty random in the few examples I looked at: this http://acousticbrainz.org/3d1a7c1c-7445-443a-8afc-31a382bcd88a?n=4 is a choral piece with orchestra (the overture of an oratorio), but it says it is instrumental with what I would read, at first glance, as a pretty high probability. How should I interpret this?
The YouTube links are just a rough guess. All we do is give YouTube the name of the artist and of the track and take the first result that its search returns, so we don’t really have any control over the response.
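As a rough illustration of what that guess looks like (this is a sketch, not AcousticBrainz’s actual code; it just builds the same kind of search query that the site would hand to YouTube):

```python
from urllib.parse import urlencode

def youtube_search_url(artist, title):
    """Build a YouTube search URL for an artist/track pair.

    Illustrative sketch of the 'search and take the first hit' idea
    described above; not the code AcousticBrainz itself uses.
    """
    query = urlencode({"search_query": f"{artist} {title}"})
    return f"https://www.youtube.com/results?{query}"

print(youtube_search_url("Johann Sebastian Bach", "Herz und Mund und Tat und Leben"))
```

Since only the first search result is kept, anything from a different recording of the same work to an unrelated video with a similar title can come back, which is why the links are sometimes completely off.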
In the future we’d like to use a YouTube link attached to the recording on MusicBrainz, if one exists.
There is some more information about the voice/instrumental classifier on our website: https://acousticbrainz.org/datasets/accuracy#voice_instrumental
The way that this kind of system works is that we provide a lot of examples of recordings with voice and a lot of examples of recordings without voice. However, I’ve just checked this dataset and it’s mostly popular music, so I’m not surprised that it gets confused by orchestral/choir music. A great addition to AcousticBrainz would be a classifier that can more accurately detect the presence or absence of a choir. I’ll add this to the list of things that we’d like to add!
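If you want to inspect the classifier’s output yourself, each recording’s high-level document is available from the API at https://acousticbrainz.org/api/v1/&lt;mbid&gt;/high-level. A small sketch of pulling the voice/instrumental verdict out of such a document; the trimmed payload below only shows the assumed shape of the relevant fields, not a real response:

```python
# Sketch: reading the voice/instrumental verdict from an AcousticBrainz
# high-level document. To fetch a real one:
#   https://acousticbrainz.org/api/v1/<recording-mbid>/high-level
# The sample below is a trimmed, illustrative payload.
sample = {
    "highlevel": {
        "voice_instrumental": {
            "value": "instrumental",
            "probability": 0.93,
            "all": {"instrumental": 0.93, "voice": 0.07},
        }
    }
}

def voice_verdict(doc):
    """Return (label, probability) for the voice/instrumental classifier."""
    vi = doc["highlevel"]["voice_instrumental"]
    return vi["value"], vi["probability"]

label, prob = voice_verdict(sample)
print(label, prob)  # instrumental 0.93
```

The probability is the classifier’s confidence in the winning label, which is why a choral overture can show up as “instrumental” with a high number attached.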
Thanks for your response! I wonder: if someday you choose to add more examples of music with voice, would it then be necessary to scan all recordings again, or are you already gathering enough information that you can reprocess it later? Otherwise, from what I understand, it wouldn’t help much to add any classical music. Could the submitted data be used? Many recordings have works linked to them (especially in the classical case) that could be classified rather easily and then used to train the system.
The way that the AcousticBrainz data works is that we are able to create new classification systems without distributing new software to re-scan the music.
This is implemented in our work on datasets. You can make your own dataset at https://acousticbrainz.org/datasets/create, giving examples of recording MBIDs with and without vocals. This could be just orchestral music, or just music from a specific era (e.g. only renaissance, classical, romantic, etc).
With this system you can test the accuracy of the dataset to see how well it distinguishes between each category. Use the “Evaluate” button after you’ve saved the dataset.
Do you have some ideas of example recordings that we could use to build such a dataset? If you don’t want to select MBIDs one at a time then we can automatically generate these lists (e.g. by looking at relationship information in MusicBrainz)
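For example, the list generation could start from MusicBrainz work lookups: a work fetched with `inc=recording-rels` lists its performance relationships, each pointing at a recording MBID. A hedged sketch (the real fetch is left commented out; the sample payload only shows the assumed shape of the response, with a placeholder MBID):

```python
import json
from urllib.request import urlopen  # only needed for the real fetch below

def recording_mbids(work_json):
    """Collect recording MBIDs from a work lookup that includes recording-rels."""
    return [
        rel["recording"]["id"]
        for rel in work_json.get("relations", [])
        if rel.get("target-type") == "recording"
    ]

# Real fetch (MusicBrainz asks clients to stay under one request per second):
# url = ("https://musicbrainz.org/ws/2/work/"
#        "07472580-468d-3de2-bb69-8f5917c2e731"
#        "?inc=recording-rels&fmt=json")
# work_json = json.load(urlopen(url))

# Trimmed, illustrative payload with a placeholder MBID:
sample = {
    "relations": [
        {"target-type": "recording",
         "recording": {"id": "00000000-0000-0000-0000-000000000000"}},
    ]
}
print(recording_mbids(sample))
```

Labelling the works by hand once (vocal/instrumental) and then expanding each work into all of its linked recordings would generate large MBID lists with little per-recording effort.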
Actually, what does ‘instrumental/voice’ mean? How would, for example, the overture of a Bach cantata like https://musicbrainz.org/work/07472580-468d-3de2-bb69-8f5917c2e731 fit into this, with its orchestral prelude followed by a choir accompanied by the orchestra?
You’re asking all the right questions!
A limitation of our current approach is that the system generates data representing the overall average of the entire recording. This means that we typically have enough data to determine the general timbre (genre) of a recording, or whether voice is present or absent in the recording as a whole, but not enough to split a recording up and say that some parts have voice and some don’t. We accept this limitation in order to save on the amount of data that we generate, and we’re working on some ideas to be able to do this in more detail in the future.
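To see why whole-recording averages blur this, here is a toy illustration (the numbers are invented): a per-second “voice present” signal for a track whose first half is instrumental and second half is choral averages out to an ambiguous middle value.

```python
# Toy illustration of the averaging limitation described above.
# Pretend we have a per-second voice-presence estimate for a 4-minute
# track: first half purely instrumental, second half choral.
frames = [0.0] * 120 + [1.0] * 120   # 0.0 = no voice, 1.0 = voice

average = sum(frames) / len(frames)
print(average)  # 0.5 -- the time structure is gone

# A classifier that only sees this single averaged value cannot tell
# "half instrumental, half choral" apart from "faint voice throughout".
```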
So does it make sense to create a dataset with exactly this kind of music (which makes up a huge chunk of classical music), or will the system not be able to process it anyway, so that I should wait until you switch to an approach better suited to classical music?
It absolutely makes sense to create these datasets! We don’t need to have the technology ready in order to start working on these. If you want to help us then we’d appreciate any efforts that you could make towards making new datasets. As we start working on new technology we can use these to help learn new things.
Do you have any specific type of task that you’d like to help with (only instrumental or vocal orchestral works? works that have vocal and instrumental passages interspersed? …)?
I don’t know; what do you need? What do you consider vocal? Should there be only a voice, without instruments? It genuinely seems to me that works with vocal and instrumental passages interspersed would be harder to categorize: should something count as vocal as long as there is some singing at any point? What about gender/timbre for vocal works? Does it correspond to female/male voice? How do we deal with choirs? Countertenors? Are Chopin waltzes considered danceable, when they were not written for dancing but are waltzes nonetheless (as far as I know)? What about works that modulate, or recordings using period instruments (with an A at 415 Hz or whatever else)?
I have certain listening habits that might be reflected in the recordings I choose. Do you have a guide/guidelines for making a dataset as balanced as possible?
BTW, how do I log in? The ‘log in with MusicBrainz’ button redirects to an error page that says “Mismatched redirect URI”.
I have another question: what exactly does the AcousticBrainz client do? Does it compute spectrograms like AcoustID? Is there an explanation somewhere, similar to https://oxygene.sk/2011/01/how-does-chromaprint-work/, of what happens with the audio data?
There’s a description here of what is generated by the AcousticBrainz client: https://essentia.upf.edu/documentation/streaming_extractor_music.html
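If you want to poke at it yourself, the program described on that page can also be run by hand on a single file. A minimal sketch, assuming the `streaming_extractor_music` binary (the Essentia extractor that the AcousticBrainz client wraps) is installed and on your PATH:

```python
import subprocess
import json

# Sketch: invoking Essentia's feature extractor by hand. It reads an
# audio file and writes the analysis results as a JSON document.
cmd = ["streaming_extractor_music", "track.flac", "track.json"]
print(" ".join(cmd))

# Uncomment to actually run it and load the resulting feature document:
# subprocess.run(cmd, check=True)
# with open("track.json") as f:
#     features = json.load(f)
# The document contains low-level spectral, tonal, and rhythm
# descriptors rather than a raw spectrogram.
```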
Is this too technical for you? It might require a bit of specialist knowledge. It’s true that we’ve never tried to write this kind of documentation for a more general audience! If you’re interested in learning a bit more about it, we can add it to our never-ending list of tasks to complete.
I’m going to work with some of the other MetaBrainz developers to try and fix the redirect URI bug that you’re seeing. I don’t see the problem myself, so I’m not 100% sure what the issue is. Can you please confirm the following things:
- First, log out of MusicBrainz.
- What is the full URL on AcousticBrainz when you click on Sign in (the page with the big orange “Sign in with MusicBrainz” button)? It should be https://acousticbrainz.org/login/?next=%2F
- The next page that you get to should be a MusicBrainz login page. What’s the URL here? It should be something like https://musicbrainz.org/oauth2/authorize?scope=profile&state=xxxxxxxx&redirect_uri=https%3A%2F%2Facousticbrainz.org%2Flogin%2Fmusicbrainz%2Fpost&response_type=code&client_id=jsnvNZnWlqIMansSkJQGWg (I deleted the value of ‘state’)
- After logging in, a screenshot or copy of the text shown on the MusicBrainz page when it asks whether you want to give AcousticBrainz permission to use your MusicBrainz account.
If you’re unsure how to provide this information then don’t worry, I don’t want to ask you to do anything that you’re unfamiliar with.
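If you’re comfortable with Python, one quick way to sanity-check the authorize URL from the steps above is to decode its query parameters and confirm which `redirect_uri` it carries (the `state` parameter is omitted here, as in the post):

```python
from urllib.parse import urlparse, parse_qs

# The authorize URL from the steps above, with 'state' omitted.
url = ("https://musicbrainz.org/oauth2/authorize?scope=profile"
       "&redirect_uri=https%3A%2F%2Facousticbrainz.org%2Flogin%2Fmusicbrainz%2Fpost"
       "&response_type=code&client_id=jsnvNZnWlqIMansSkJQGWg")

params = parse_qs(urlparse(url).query)
print(params["redirect_uri"][0])
# https://acousticbrainz.org/login/musicbrainz/post
```

A “Mismatched redirect URI” error generally means the `redirect_uri` in this URL doesn’t match what is registered for the OAuth client, so seeing the decoded value helps narrow down where things go wrong.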
Regarding the main question of this thread, about how to actually build these datasets… I’ll get back to you about this. Thanks!
Logging in works now! I don’t know what happened, but it works.
My housemate complains that beets’ absubmit plugin uses too much bandwidth; could that be a problem (I have ADSL Wi-Fi at home)? Also, submitting my collection (45,000 recordings) takes a very long time (a few weeks if it runs permanently); is that normal?
Good to hear about logging in. It’s something that people have mentioned previously, so I’m not sure if there’s some kind of error that periodically happens on our end. We’ll keep looking to see if it happens again.
A generated data file for a single recording is about 100 KB in size, so 45,000 recordings could add up to around 4.5 GB of uploads. We appreciate your contribution to AB, but please don’t let it cause problems with your housemates! Depending on your upload speed this may take some time, so as long as you’re not annoying your housemates too much, you can just leave it running for as long as it takes.
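The back-of-the-envelope math, assuming roughly 100 KB per submitted data file:

```python
recordings = 45_000
kb_per_file = 100                       # approximate size of one data file

total_gb = recordings * kb_per_file / 1_000_000
print(f"about {total_gb:.1f} GB uploaded in total")
```

On a slow ADSL uplink that total is noticeable, but it is a one-off cost per collection rather than ongoing traffic.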
How long does it take to generate the data for a file? On my PC it seems to take a couple of seconds per file, which scales up quickly. Is it because you don’t allow more than one query per second, as MusicBrainz does?
That’s right: it takes 5-10 seconds for a ~3-4 minute song. This is normal, and is related to the number of calculations that we perform on the audio. It is unrelated to any submission rate limits (we don’t have any).
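That per-file cost alone accounts for much of the multi-week estimate. Taking the 5-10 second range above across a 45,000-recording collection:

```python
recordings = 45_000
seconds_per_file = (5, 10)              # the per-file range quoted above

for s in seconds_per_file:
    days = recordings * s / 86_400      # 86,400 seconds in a day
    print(f"{s} s/file -> about {days:.1f} days of pure analysis time")
```

That is several days of uninterrupted single-process analysis before counting upload time or idle periods, so a few weeks of wall-clock time is not surprising.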
If you’re on a Linux (or other Unix-based) machine, you can try using `parallel` to allow multiple `abzsubmit` instances to run in, well, parallel. I have used `find ./ -name '*.flac' -print0 | parallel --null --eta abzsubmit` in the past, but you may want to adjust the `find` parameters to pick up whatever media files you’re scanning.