Queries about the plugin APIs

Hello there,
Recently, I’ve been looking to create some plugins for Picard to supercharge my tagging setup. Since the plugin APIs are barely documented, I tried to make sense of the whole thing by reading through some other plugins and the threads tagged picard-plugin-development. That brings me to a few queries that, as far as I can tell, haven’t been answered publicly (unless I missed something).

  1. How do I offload blocking tasks from the UI thread while still being able to receive the results within Picard? Or is it not possible at all? The ReplayGain plugin mentioned here cites that as one of its issues. In the same thread, Sophist mentions:

Instead the plan is to use ffmpeg to calculate the actual values and return them to Picard, which will put them into metadata waiting for a save.

I’m curious how that would work: Python bindings for FFmpeg? Even then, there can be tasks native to the Picard plugin ecosystem itself (dynamic range calculation, for example) that are heavy enough to block the UI thread and make Picard unresponsive. I’m looking for a way around that.

  1. In reference to the last question, I want to use the requests library for certain plugins instead of Picard’s own WS API. (My Picard is installed using pip.) I’m trying to avoid requests as much as possible, but keep in mind that a lot of the existing API wrappers use requests for their web service calls, and rewriting them from scratch is pretty much reinventing the wheel. Here’s the dilemma: if I want to make a request that performs extra optional queries based on the responses of previous queries, is my only option to keep tacking on separate functions as callbacks for tagger.webservice.get? Here’s something I already made using the template of the “Album Artist Website” plugin.
    The only saving grace here is that each area query is structurally the same, so I was able to call the same methods in a cycle. But in the future I’m bound to run into conditional queries with different structures (host/port/path), and I’m wondering if there is a better way to streamline everything. requests makes the code readable, but I don’t want to block the UI.

  2. In reference to the “Album Artist Website” plugin, in what cases would we use register_track_metadata_processor, and what cases warrant register_album_metadata_processor instead? An album artist is common to all the tracks in an album, so intuitively it made more sense to me to use register_album_metadata_processor. But as I can see in the plugin mentioned above, the track API was used instead. If the data was found in the cache, it was written to the track metadata immediately; if it had to be fetched, the plugin got the last track object and sent it to a function that called the WS API. I’m still hazy on the details of the inner workings:

  • Why was register_album_metadata_processor not used?
  • Is add_artist_website() called for every track? If so, would I be able to pass the track metadata along through the functions instead of creating a queueing mechanism? (Only asking as a question; doing so would probably hurt code legibility.)
  • What’s the purpose of album._requests += 1 or album._requests -= 1? My UI status bar does not increase the web request number even when this is kept. (I’m aware that the object is a tagger object)
  • What’s the purpose of track.iterfiles(True) and applying a separate file metadata in addition to the track metadata? Why was it not used when the data was fetched from the cache?
  3. Can I achieve granular control over plugin priorities without merging multiple plugins with different purposes into one plugin? LOW and HIGH are not enough to reliably enforce the order in which I want the plugins to execute.

Thanks in advance. I’m aware that these are a ton of questions, but the Picard plugin creator community is so niche that I couldn’t find answers through searching.

Only answering briefly as I don’t have access to my laptop and have only mediocre internet access currently:

You should run such tasks in a separate thread. The recommended way is to use picard.util.thread.run_task for this. See e.g. the BPM plugin https://github.com/metabrainz/picard-plugins/blob/2.0/plugins/bpm/__init__.py
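A minimal sketch of the shape this takes. Since picard isn’t importable outside the application, fake_run_task below is a stand-in for picard.util.thread.run_task; judging from the plugins that use it, the real one has the same calling convention (the work function runs off the UI thread, and the callback receives result= on success or error= on failure), except that Picard dispatches the callback back to the Qt main thread:

```python
# Stand-in for picard.util.thread.run_task, only to demonstrate the
# work-function/callback contract outside of Picard.
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=1)

def fake_run_task(func, next_func):
    def wrapper():
        try:
            result = func()
        except BaseException as e:
            next_func(error=e)
        else:
            next_func(result=result)
    # .result() only so this demo is deterministic; run_task returns immediately
    _pool.submit(wrapper).result()

def fetch_bpm():
    # pretend this is a slow, blocking analysis step (e.g. BPM detection)
    return 120

results = []

def apply_bpm(result=None, error=None):
    # in a real plugin this callback would write into the track metadata
    if error is None:
        results.append(result)

fake_run_task(fetch_bpm, apply_bpm)
```

In an actual plugin you would `from picard.util import thread` and call `thread.run_task(fetch_bpm, apply_bpm)` with the same two functions.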

It’s recommended to use Picard’s web service module. That way requests show up in the request count, and it already avoids blocking the UI thread. If you use requests, then make sure to run it in a separate thread.

Not quite sure what the question is. But if you want to do different things after a request finished you need separate callbacks, if you want to do the same thing you can use the same callback.
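The chaining pattern looks roughly like this. fake_get is a stand-in for tagger.webservice.get (whose handler, as far as I can tell from existing plugins, is invoked with the parsed document, the reply, and an error once the response arrives); a conditional follow-up query is simply a handler that decides whether to issue the next get:

```python
# Stand-in responses; in Picard these would come from real web requests.
FAKE_RESPONSES = {
    "/artist": {"area": "US"},
    "/area/US": {"name": "United States"},
}

def fake_get(path, handler):
    # stand-in for tagger.webservice.get: deliver the response to the handler
    handler(FAKE_RESPONSES.get(path), None, None)

collected = {}

def handle_artist(document, reply, error):
    if error or not document:
        return
    area = document.get("area")
    if area:  # conditional follow-up query based on the first response
        fake_get("/area/%s" % area, handle_area)

def handle_area(document, reply, error):
    if error or not document:
        return
    collected["area_name"] = document["name"]

fake_get("/artist", handle_artist)
```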

The album metadata processor runs after album data has been loaded and parsed, but before individual tracks get filled. That means you get access to the raw data loaded for the release from MusicBrainz, and you can manipulate the metadata object of the album (which will be the basis for the individual tracks).

The track metadata processors get called per track. You get access to the raw data from MB for that track.

Which one you want to use depends on what you want to do. In some cases it can even make sense to use both, e.g. the current AcousticBrainz plugin does this: https://github.com/metabrainz/picard-plugins/blob/2.0/plugins/acousticbrainz/__init__.py
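As a sketch of the two hook shapes (signatures as they appear in the plugins linked above; Metadata objects behave like dicts, so plain dicts stand in here, and the example values are hypothetical):

```python
# In a real plugin you would:
#   from picard.metadata import (register_album_metadata_processor,
#                                register_track_metadata_processor)

def album_processor(album, metadata, release):
    # Runs once per release, before track metadata is filled in;
    # changes here become the basis for every track.
    metadata["website"] = "https://example.invalid/artist"  # hypothetical value

def track_processor(album, metadata, track, release):
    # Runs once per track, with that track's raw MB data available.
    metadata["language"] = "eng"  # hypothetical value

# register_album_metadata_processor(album_processor)
# register_track_metadata_processor(track_processor)

# Demo of the calling order on stand-in objects:
album_md = {}
album_processor(None, album_md, None)
track_md = dict(album_md)        # each track starts from the album metadata
track_processor(None, track_md, None, None)
```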

Only skimming over it, I think the artist website plugin could probably use the album metadata processor.

Yes, that’s the purpose of a track metadata processor.

Don’t understand the question. The track metadata processor gets a metadata object passed for the track.

It’s the current way to tell the album object when it has finished loading. When album._finalize_loading is called and the counter is zero, it finishes loading the album. The counter should be increased when an asynchronous task starts, and decreased when it finishes. There is an open ticket to add a proper API for this counting.

It is unrelated to the open web service request counter, that’s only counting web requests.
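The counting pattern described above, sketched with a stub Album so it runs outside Picard (in a plugin, album is the real Album object, the work runs via run_task, and the decrement plus _finalize_loading happen in the async callback):

```python
# Stub standing in for picard.album.Album, only to show the counter contract.
class StubAlbum:
    def __init__(self):
        self._requests = 0
        self.loaded = False

    def _finalize_loading(self, error):
        # The real Album finishes loading only once the counter is zero.
        if self._requests == 0:
            self.loaded = True

def start_async_work(album, work, done):
    album._requests += 1           # announce a pending async task
    result = work()                # in Picard this would run in a worker thread
    done(album, result)

def work():
    return "lyrics text"           # hypothetical fetched value

def done(album, result):
    album._requests -= 1           # the task has finished
    album._finalize_loading(None)  # let the album finish loading

album = StubAlbum()
start_async_work(album, work, done)
```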

This iterates over all files of the track, and it ensures all files already linked to the track get the change in metadata as well. The save=True parameter here is actually not used. On an album it would cause iteration over only the files that are matched to tracks, not unmatched files.
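The effect described here, sketched with stub objects in place of Picard’s Track and File (only the iteration is mirrored; the attribute names below match the real objects, everything else is a stand-in):

```python
# Stubs standing in for picard.file.File and picard.track.Track.
class StubFile:
    def __init__(self):
        self.metadata = {}

class StubTrack:
    def __init__(self, files):
        self._files = files
        self.metadata = {}

    def iterfiles(self, save=False):
        # The real Track.iterfiles yields the files linked to the track;
        # on an Album, save=True restricts this to files matched to tracks.
        yield from self._files

track = StubTrack([StubFile(), StubFile()])
track.metadata["website"] = "https://example.invalid"  # hypothetical value
# Propagate the change to every file already linked to the track:
for f in track.iterfiles(True):
    f.metadata["website"] = track.metadata["website"]
```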

No

Thanks for the response!

You should run such tasks in a separate thread. The recommended way is to use picard.util.thread.run_task for this.

I forgot to mention one crucial thing amidst all the noise I posted. I did try thread.run_task() before asking here, but the data it fetched didn’t appear in Picard for some of the tracks (the log shows it fetched them fine). I’m handling rate limiting with time.sleep(2), and Picard seems to fail to pick up the retried requests even when the response comes back fine after 2 seconds according to the log. Am I missing something here? The reason I’m using requests in this case is that the API needs requests from the same session for authentication (it deals with CSRF tokens), and I’m not sure Picard’s WS supports that. I’m assuming time.sleep() is the culprit (?)

Not quite sure what the question is. But if you want to do different things after a request finished you need separate callbacks, if you want to do the same thing you can use the same callback.

That’s pretty much the answer I wanted.

Don’t understand the question. The track metadata processor gets a metadata object passed for the track.

Well, the plugin I mentioned kept a global queue instead of passing the track metadata object throughout the functions. But since you’ve already mentioned that the track metadata processor is called per track, I’ve already gotten my answer.

It is unrelated to the open web service request counter, that’s only counting web requests.

Thanks, that cleared up the confusion. I was correlating it to the wrong metric.

This iterates over all files of the track, and it ensures all files already linked to the track get the change in metadata as well. The save=True parameter here is actually not used. On an album it would cause iteration over only the files that are matched to tracks, not unmatched files.

I completely forgot that a track in Picard can have multiple files attached to it, my bad.

Just speculating: if you make changes to the metadata and don’t see them, you probably just need to update the UI. Maybe a track.update() fixes this.

Figured out that I could simply stall the album loading using _requests and _finalize_loading(), which improved the situation. It’s still missing a track at times. Most of the update() calls I saw in the repo were for context menu option plugins, which do file.update(), album.update() and cluster.update(). I’m probably updating the wrong object. Here’s the code snippet:

from functools import partial

from picard import log
from picard.metadata import register_track_metadata_processor
from picard.util import thread


class Lyriks:

    def __init__(self):
        self.deez_api = DeezerAPI.get_instance()

    def process_lyrics(self, tagger, track_metadata, track_node, release_node):
        thread.run_task(
            partial(self.fetch_lyrics, tagger, track_metadata),
            partial(self.apply_lyrics, tagger, track_metadata, track_node, release_node)
        )

    def fetch_lyrics(self, tagger, track_metadata):
        if track_metadata:
            isrcs = track_metadata.getall('isrc')
            for isrc in isrcs:
                tagger._requests += 1
                lyrics = self.deez_api.get_lyrics_isrc(isrc)
                track_metadata['lyrics'] = lyrics
                log.debug("%s: ISRC: %s, lyrics = %s", PLUGIN_NAME, isrc, lyrics)
                return lyrics
        return None

    @staticmethod
    def apply_lyrics(tagger, track_metadata, track, release, result=None, error=None):
        if error:
            return
        else:
            track_metadata['lyrics'] = result
            track.update()  # Does nothing
            release.update()  # Does nothing
            tagger.update()  # Does nothing
            # track_metadata.update()  # Crashes, needs an argument
            # track_metadata.update('lyrics')  # Crashes, "ValueError: dictionary update sequence element #0 has length 1; 2 is required"
        tagger._requests -= 1
        tagger._finalize_loading(None)


register_track_metadata_processor(Lyriks().process_lyrics)

Any ideas as to how to fix it?