AcoustID precise enough to auto-cut a recording?

Hello there,
I’ve been trying to write a script that would automatically recognize the songs in my old recordings and cut them out. However, during my first trial run I noticed that the fingerprinting score would actually keep improving if the segment was cut a little short at the beginning and bled into the next song at the end (up to a point).
This was obviously not the correct cut for the given song, so now I wonder: is this a limitation of the fingerprinting system, in that it’s good enough to recognize a song but not precise enough to determine the closest approximation of its boundaries, or am I simply working with a polluted fingerprint dataset here?
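For what it’s worth, here is a minimal sketch of the boundary-search idea I’m describing: shift the candidate start and end points around and keep the pair with the highest match score. The `score` callable is a stand-in for a real lookup (e.g. fingerprinting the segment with Chromaprint’s `fpcalc` and querying the AcoustID web service, or using the pyacoustid library); here it’s replaced by a synthetic scorer so the sketch runs on its own. The function names and the scorer are my own invention, not any library’s API.

```python
# Sketch: refine song boundaries in a long recording by maximizing an
# AcoustID-style match score over candidate (start, end) cut points.
# `score(start, end)` is a stand-in for a real fingerprint lookup.

def refine_boundaries(score, start_guess, end_guess, max_shift=5, step=1):
    """Shift each boundary by up to max_shift seconds and return the
    (score, start, end) triple with the highest match score."""
    best = (score(start_guess, end_guess), start_guess, end_guess)
    for ds in range(-max_shift, max_shift + 1, step):
        for de in range(-max_shift, max_shift + 1, step):
            s, e = start_guess + ds, end_guess + de
            if e <= s:
                continue  # skip degenerate segments
            candidate = (score(s, e), s, e)
            if candidate > best:
                best = candidate
    return best

# Synthetic scorer: pretends the true song spans 10..190 s and that the
# score peaks exactly there, falling off as the cut drifts. My report
# above suggests a real AcoustID scorer may instead peak at slightly
# wrong cuts, which is exactly the problem.
def fake_score(s, e):
    return 1.0 - 0.01 * (abs(s - 10) + abs(e - 190))

print(refine_boundaries(fake_score, 12, 187))  # -> (1.0, 10, 190)
```

With a well-behaved scorer this converges on the true boundaries; my question is essentially whether the real AcoustID score surface is clean enough for this to work.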

I would say you’ve got about a 90% chance of a good ID match, but there is also a lot of mess polluting the AcoustIDs. There are various projects working on cleaning things up.

I spend way too much time looking at AcoustID data, and that means spotting the mistakes and mess that are also in there. Some sloppy Picard users will match on a name alone and then submit the AcoustID data without any further checks.