AcousticBrainz Extractor as part of Picard?

Hi

I know that Picard can do the AcoustID calculations within the client (which is super helpful), however there is still a separate utility for calculating the AcousticBrainz part - would it ever be possible for that to be part of Picard, meaning I would only have to work with one program?

Kind regards


Definitely. This was already discussed briefly on IRC quite a while back, but nothing concrete yet.


Cool, thanks for the update :smiley:

Looking into it.

Should we scan/lookup the files before running the extractor?
Or just run it with already matched files (with MB RecordingId)?

Single button to extract and send?

Do we create temporary files named with the file hash to prevent re-extracting unsubmitted results? Or use a database? Or just re-extract the features?


Should we scan/lookup the files before running the extractor?
Or just run it with already matched files (with MB RecordingId)?
AFAIK the current AcousticBrainz utility I have only works on music files that already have the MBIDs embedded in the tags.

Single button to extract and send?
Yes, I was thinking of something similar to the “Submit AcoustIDs” button.

Yes, but it only does that because it doesn’t have the infrastructure to identify the file first.
We do have that in Picard. ¯\(ツ)/¯

I’d run that on matched files only. You need the MBIDs, so the user should do the identification first.

The alternative would be to always allow the calculation, but have a separate submission once files have been matched. That would be similar to the AcoustID “Scan” behavior. The difference is that “Scan” provides value in itself, whereas a pure AcousticBrainz calculation probably does not. So I’d do the analysis and submission in one step.

Remembering already submitted files is useful, as feature extraction is very slow. The current AB submission utility remembers files already submitted. I think it does so by path. I wonder if we can do better.

What about storing a tuple of recording ID / AcoustID fingerprint / length? The AcoustID fingerprint is comparatively cheap to calculate with fpcalc. Maybe some information on the audio codec would also be needed, not sure.

I’d probably store the submission info in a dedicated file.
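
A rough sketch of what I have in mind (file format and all names are just placeholders, not a final design):

import json

def load_submissions(cache_path):
    # cache_path: a hypothetical dedicated file, e.g. somewhere in Picard's user directory.
    try:
        with open(cache_path, "r", encoding="utf-8") as f:
            return set(tuple(entry) for entry in json.load(f))
    except (OSError, ValueError):
        return set()

def remember_submission(cache_path, submissions, recording_id, fingerprint, length_ms):
    # Record one (recording ID, fingerprint, length) tuple and persist the cache.
    submissions.add((recording_id, fingerprint, length_ms))
    with open(cache_path, "w", encoding="utf-8") as f:
        json.dump(sorted(submissions), f)

def already_submitted(submissions, recording_id, fingerprint, length_ms):
    return (recording_id, fingerprint, length_ms) in submissions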


OK.

Should we double-check that the extracted features match the file’s recording ID? We can always check beforehand whether the file has pending changes that need to be saved, but double-checking that we are not sending borked data to the AB server seems like a good idea.

Okie dokie.

Hmm, I think I need to test this first. I think I’ve already seen the same AcoustIDs for different recordings of the same song by the same artist. The length could be enough to differentiate, but I think the recording ID + file hash would be a safer choice.
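
For the file hash part I mean something cheap like a streaming SHA-256 over the file contents, e.g. (just a sketch, not tied to any Picard API):

import hashlib

def file_hash(path, chunk_size=1024 * 1024):
    # Stream the file in chunks so large audio files don't have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()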

Question about the UI: not sure if there is an icon for AB features. I’m using the AB logo itself as a placeholder. Also, not sure how to make the text smaller (features -> data? AcousticBrainz -> AB?).
[screenshot of the proposed button]

Hmm, while trying to package the Essentia/AB extractor with Picard, I had some issues with the download. Maybe it is an FTP issue, or it just didn’t like the certificate. Not sure.

Both wget and DownloadFile complained.

ERROR: cannot verify ftp.acousticbrainz.org's certificate, issued by ‘CN=(STAGING) Artificial Apricot R3,O=(STAGING) Let's Encrypt,C=US’: Unable to locally verify the issuer's authority.

Exception calling "DownloadFile" with "2" argument(s): "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel."

The browser also complains about the certificate on https://ftp.acousticbrainz.org/ . @alastairp @Zas can you guys check that?

@gabrielcarvfer in the meantime I would say let’s fetch that while ignoring certificate issues (e.g. --no-check-certificate with wget). We can change that back as soon as the certificate is fixed.
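
If we ever end up doing that download from Python instead of wget/PowerShell, the equivalent temporary workaround would look roughly like this (the URL is just a placeholder, and the unverified context should go away as soon as the certificate is fixed):

import ssl
import urllib.request

# Placeholder URL, not the actual extractor archive path.
EXTRACTOR_URL = "https://ftp.acousticbrainz.org/"

# Equivalent of wget --no-check-certificate: skip certificate verification.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(EXTRACTOR_URL, context=context) as response:
    data = response.read()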

How would you double check if extracted features match the recording ID?

Hence I thought of not using the AcoustID (we often would not have one in the tags anyway) but the actual AcoustID fingerprint as returned by fpcalc. I think that would work, except for the cases where the first 2 minutes (I think, not sure, would need to check) of analysed audio are exactly the same. But then the length (with milliseconds) would also need to match exactly.
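
Roughly what I mean, assuming the fpcalc binary is on the PATH and supports JSON output:

import json
import subprocess

def fingerprint_and_length(path):
    # fpcalc -json prints something like {"duration": <seconds>, "fingerprint": "..."}
    output = subprocess.run(["fpcalc", "-json", path],
                            capture_output=True, check=True, text=True).stdout
    result = json.loads(output)
    return result["fingerprint"], result["duration"]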


One thing that concerns me a bit is the size of the Essentia extractor. This will increase the app size rather significantly :frowning:

What about a plugin for those who need it/want to contribute?

A plugin would not really solve this. For one, there is no real mechanism to provide an executable as part of a plugin, so that would need some custom handling to extract the executable from the plugin archive. And if the plugin came with the extractor executable, it would need to provide it for every supported platform. In addition, a plugin provides less flexibility to deeply integrate this feature.

But it would probably be an option to have Picard download the extractor on first use.

Thinking about multiple platforms, there should also be a way to completely disable the functionality for builds on platforms where the Essentia extractor is not available, similar to how Picard can be built with update notifications disabled.
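
For the build-time switch I am thinking of something along these lines (all names are hypothetical, just to show the idea, not actual Picard constants):

import os

# Hypothetical switch a packager could flip at build time, in the same spirit
# as disabling the update check.
ENABLE_AB_FEATURE_EXTRACTION = True

def ab_extractor_usable(extractor_path):
    # Only offer the feature if it was compiled in and an extractor binary exists.
    if not ENABLE_AB_FEATURE_EXTRACTION:
        return False
    return bool(extractor_path) and os.access(extractor_path, os.X_OK)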


It should now be fixed.


By reading the recording ID from the features file and comparing it with file.metadata. It was just to make sure pending changes to the recording ID were actually saved to the file.

We could check that with Picard’s logic instead.
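
Roughly like this, assuming the extractor output keeps the file tags under metadata/tags with musicbrainz_trackid, which is what I have seen in the low-level JSON (names may differ):

import json

def features_match_recording(features_file, file):
    with open(features_file, "r", encoding="utf-8") as f:
        features = json.load(f)
    # The extractor copies the file tags into its output; musicbrainz_trackid
    # (a list of strings) is what holds the recording MBID there.
    extracted_ids = features.get("metadata", {}).get("tags", {}).get("musicbrainz_trackid", [])
    return file.metadata["musicbrainz_recordingid"] in extracted_ids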

Oh, I see. I misunderstood that part.

OK, working on it.


@zas @outsidecontext I am migrating the submission of extracted features from the “requests” module to Picard’s webservice. The webservice.get callback worked just fine, but I didn’t manage to get the webservice.post callback to work. Not sure if I’m doing something wrong.

e.g.
import json
from functools import partial

# The 3 last arguments were copied from oauth.on_exchange_authorization_code_finished,
# which is also used as a .post callback.
def submission_callback(tagger, file, data, http, error):
    if not error:
        file.acousticbrainz_is_duplicate = True
        tagger.log.debug("AcousticBrainz features were successfully submitted: %s" % file.filename)
    else:
        tagger.log.debug("AcousticBrainz features were not submitted: %s" % file.filename)


def submit_features(tagger, file, features_file):
    # Load the extractor output and re-serialize it for the request body.
    with open(features_file, "r", encoding="utf-8") as f:
        features = json.load(f)
    featstr = json.dumps(features)

    tagger.webservice.post(ACOUSTICBRAINZ_HOST,
                           ACOUSTICBRAINZ_PORT,
                           "/%s/low-level" % file.metadata["musicbrainz_recordingid"],
                           featstr,
                           partial(submission_callback, tagger, file),  # never called
                           priority=True,
                           important=False,
                           parse_response_type=None)

Looks OK to me, at least I don’t see anything obvious. Those seem to be the correct parameters. Are you sure the callback never fires and that webservice.post actually gets called? Maybe there is some hidden exception somewhere?

The post is called and logged, as expected. I didn’t get any exception trace. It seems like the request neither gets a reply nor errors out. Very weird.

I wouldn’t rule out some incompatibility with the AB server. webservice.post in Picard isn’t used that much, so a bug could well be lurking somewhere there.

What response is expected? Maybe it makes a difference if you use it with parse_response_type set to something.
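
e.g. something like this (untested; I am assuming the AB endpoint replies with JSON and that "json" is accepted for parse_response_type here):

tagger.webservice.post(ACOUSTICBRAINZ_HOST,
                       ACOUSTICBRAINZ_PORT,
                       "/%s/low-level" % file.metadata["musicbrainz_recordingid"],
                       featstr,
                       partial(submission_callback, tagger, file),
                       priority=True,
                       important=False,
                       parse_response_type="json")  # ask Picard to parse the JSON reply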