I have made a start on this, following the suggested approach, but am a bit confused by the sequence of events. What I have written is (in sequence):
A) Defined a class PartLevels with a method add_worktag. This is registered via register_track_metadata_processor(PartLevels().add_worktag)
B) add_worktag gets the workId for the recording from the metadata
C) It then calls the method add_partof, which contains the XML lookup, namely: return tagger.tagger.xmlws.get(host, port, path, partial(self.process_rels, metadata), queryargs=queryargs)
D) process_rels traverses the response to extract the work that the workId is "part of" and updates the metadata
E) add_worktag updates the "bottom level" metadata. (A stripped-down skeleton of this structure is sketched just below.)
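To make the structure concrete, here is a skeleton of what I have. It is a sketch rather than my actual code: the host/port/path/queryargs values and the tag names are placeholders, and I have assumed the xmlws.get handler is called back as (document, reply, error).

    from functools import partial

    from picard.metadata import register_track_metadata_processor


    class PartLevels:

        # A) registered below as a track metadata processor
        def add_worktag(self, tagger, metadata, track, release):
            # B) get the workId for the recording from the metadata
            work_ids = metadata.getall('musicbrainz_workid')
            if not work_ids:
                return
            # C) ask the web service for the parent ("part of") work
            self.add_partof(tagger, metadata, work_ids[0])
            # E) update the "bottom level" metadata (details elided)

        def add_partof(self, tagger, metadata, work_id):
            # placeholder request details
            host, port = 'musicbrainz.org', 80
            path = '/ws/2/work/' + work_id
            queryargs = {'inc': 'work-rels'}
            # the xml lookup, with process_rels as the response handler
            return tagger.tagger.xmlws.get(
                host, port, path,
                partial(self.process_rels, metadata),
                queryargs=queryargs)

        def process_rels(self, metadata, document, reply, error):
            # D) traverse the response, extract the work that this
            #    workId is "part of", and update the metadata object
            pass


    register_track_metadata_processor(PartLevels().add_worktag)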
All the individual bits seem to work OK, but not in that order. Having put various trace statements in, it is clear from the log that the actual sequence is:
1) Picard loads and clusters the files
2) add_worktag executes and updates the metadata for each track (i.e. E above), but without the additional level from process_rels, which does not execute at this stage.
3) Picard then moves the files to the right-hand pane.
4) Only now does process_rels execute: it finds the parent work and updates the metadata object (i.e. D above). BUT it has "missed the boat", because Picard has finished tagging and the metadata shown in Picard is not updated with the new list items in the metadata object (which I can see clearly in the log).
So it seems that the xmlws.get call is asynchronous: add_partof returns control to add_worktag, which completes before process_rels has executed, as in the toy example below.
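In other words, the behaviour looks like this toy example (not Picard code; the timer just stands in for the web-service reply arriving later):

    import threading
    import time

    def get(handler):
        # stands in for xmlws.get: queue the request and return
        # immediately; the handler fires later, when the "reply" arrives
        threading.Timer(0.1, handler).start()

    def process_rels():
        print("process_rels runs (too late)")

    def add_partof():
        get(process_rels)
        # control returns here at once, before the handler has fired

    add_partof()
    print("add_worktag completes")  # printed first...
    time.sleep(0.2)                 # ...then the handler fires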
Maybe I'm being a bit dim, but is there a better explanation of the Picard API somewhere, so I can work out where I'm going wrong?
Many thanks for any light shed.