Accuracy / explanation for AcousticBrainz entry

Could someone please explain the meaning of this entry in AcousticBrainz:
https://acousticbrainz.org/fe11a69a-61ea-4c0f-9eef-d48f6dda4ca4
If you listen to and watch the video, I don’t understand how this song gets
ISMIR04 Rhythm: "ChaChaCha"
or
Dortmund model: "electronic"
or
Party: "party"
or
Sad: “not sad”

Any ideas how these values are calculated?

Yes, for a lot of my music this data is as far off as in this example. From my limited understanding as a user without any deeper audio analysis knowledge, the algorithms used to get the data were trained on a limited data set. That’s especially clear with the genres, where different models are applied to deduce the genre; see also the blog entry at https://blog.musicbrainz.org/2014/11/21/what-do-650000-files-look-like-anyway/ and the comments there.
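One thing worth knowing is that each high-level value is the output of a classifier that also reports a probability, so you can at least see how confident (or not) the model was about labels like "ChaChaCha". Here is a minimal sketch using the public AcousticBrainz API; it assumes the usual `highlevel`/`value`/`probability` layout of the JSON document and that the `requests` library is installed:

```python
import requests

# MBID of the recording from this thread
MBID = "fe11a69a-61ea-4c0f-9eef-d48f6dda4ca4"

# AcousticBrainz serves the computed high-level data via its API
resp = requests.get(f"https://acousticbrainz.org/api/v1/{MBID}/high-level")
resp.raise_for_status()
highlevel = resp.json()["highlevel"]

# Each classifier reports its winning label plus a probability,
# which shows how (un)certain the model actually was.
for name, result in sorted(highlevel.items()):
    print(f"{name:30s} {result['value']:20s} p={result['probability']:.2f}")
```

Labels that come with probabilities close to 0.5 are little better than a coin flip, which may explain some of the odd results you are seeing.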

As I understand it, one of the goals of AcousticBrainz is to apply the algorithms and models to a larger data set, allowing researchers and other interested parties to analyse the results and improve upon them.

The documentation of the Essentia toolkit also provides some background: http://essentia.upf.edu/documentation/
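If you want to poke at the underlying analysis yourself, Essentia ships Python bindings. A rough sketch (assuming the Essentia Python package is installed; `audio.mp3` is just a placeholder path) that computes one of the rhythm features this kind of model builds on:

```python
import essentia.standard as es

# Load a local audio file as a mono signal (path is a placeholder)
audio = es.MonoLoader(filename="audio.mp3")()

# RhythmExtractor2013 is one of the rhythm algorithms documented at
# essentia.upf.edu; it estimates tempo and beat positions from the audio.
bpm, beats, confidence, _, _ = es.RhythmExtractor2013()(audio)
print(f"Estimated BPM: {bpm:.1f} (confidence {confidence:.2f})")
```

The high-level labels shown on the AcousticBrainz site are then produced by classifiers trained on top of low-level features like these, which is where the limited training sets come into play.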

I hope this is about right, but the people involved in AcousticBrainz can surely give a more in-depth answer and correct me :slight_smile: