Recordings, offsets, AcousticBrainz... help?

I’m looking to use AcousticBrainz to do some lighting synchronization using the beat timing info available in the low-level API. So far it looks very promising and almost works for my application.

My difficulty is that I need to get an AcoustID from an audio sample that may be slightly offset from the recording that the MusicBrainz MBID is based on. The difference is almost always just 10 ms–2000 ms of silence at the beginning of my version of the recording; the actual recording is otherwise identical.

Can anyone think of a way to calculate the offset I will need to apply to get the AcousticBrainz data to line up? The only approach I’ve come up with so far is to use the Essentia toolkit and reprocess my sample file over and over, trimming a tiny bit off the front each time until the metadata aligns. That seems slow and wasteful.

Is there a better way to get to “audio file X is the same as recording MBID YYYY, if you trim off Z samples from the file”?
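
For reference, the trim-and-reanalyze loop I have in mind would look roughly like this (just a sketch using Essentia’s Python bindings; `ab_beats` is assumed to come from the AcousticBrainz low-level `rhythm.beats_position` list, and the step size and tolerance are guesses):

```python
import essentia.standard as es

def find_offset_by_trimming(path, ab_beats, max_offset=2.0, step=0.01, tol=0.05):
    """Shave `step` seconds off the front until beat tracking matches ab_beats."""
    audio = es.MonoLoader(filename=path)()   # decodes/resamples to 44100 Hz mono by default
    sr = 44100
    rhythm = es.RhythmExtractor2013()
    offset = 0.0
    while offset <= max_offset:
        trimmed = audio[int(offset * sr):]
        _, beats, _, _, _ = rhythm(trimmed)  # beat positions in seconds
        # Compare the first few detected beats against the AcousticBrainz ones
        n = min(len(beats), len(ab_beats), 8)
        if n and all(abs(beats[i] - ab_beats[i]) < tol for i in range(n)):
            return offset
        offset += step
    return None
```

That’s hundreds of full RhythmExtractor2013 runs for a 2-second search window at 10 ms steps, which is exactly why it feels slow and wasteful.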


Your question might relate to something with wide application.

While editors trying to merge MB Recordings don’t need Z, having “Recording MBID XXXX is the same as Recording MBID YYYY, if you trim off Z samples from the start of the file” would be very useful, I think.

Hmm. I was hoping for “yes, there’s an easy and known way” :slight_smile:

It seems like the AcousticBrainz rhythm/beats_position data might be a good way to calculate the offset once it’s determined that either two recordings are the same, or that an arbitrary audio file is the same as a recording except for an offset.

And I suppose the offset should be in seconds rather than samples since recordings aren’t expected to be the same sample rate.
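
Roughly what I’m picturing, as a sketch: pull `rhythm.beats_position` for the MBID from the AcousticBrainz low-level endpoint, run the same kind of beat tracking on my own file, and take the median gap between corresponding beats. This naively assumes the two beat lists line up index-for-index, so a real version would need to cope with missed or extra beats:

```python
import json
import statistics
import urllib.request

import essentia.standard as es

def acousticbrainz_beats(mbid):
    """Fetch beat positions (in seconds) from the AcousticBrainz low-level data."""
    url = f"https://acousticbrainz.org/api/v1/{mbid}/low-level"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["rhythm"]["beats_position"]

def estimate_offset(path, mbid, n=16):
    # Beat tracking on my local file; index 1 of the result is the beat times.
    my_beats = es.RhythmExtractor2013()(es.MonoLoader(filename=path)())[1]
    ab_beats = acousticbrainz_beats(mbid)
    n = min(n, len(my_beats), len(ab_beats))
    # A positive result means my file starts that many seconds earlier than
    # the recording AcousticBrainz analysed, i.e. leading silence to trim.
    return statistics.median(my_beats[i] - ab_beats[i] for i in range(n))
```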

I’ll tinker and see what I can come up with.
