When I (recently) discovered the Cover Art Archive, my first question was why it is not part of MusicBrainz entirely. Google and TinEye are great ways for a human to search by image, but when you try to automatically match a given page (say, a review) to an album, you have three main ways to do it:
- The page references an external database such as MusicBrainz, so the matching has already been done by the writer.
- Named Entity Recognition: using Natural Language Processing techniques, you identify which album the page is about and then match it against MusicBrainz.
- Image Matching: if the page displays the album cover (as we hope), searching by the image (its fingerprint) in a database such as the Cover Art Archive will match it to all the corresponding metadata in MusicBrainz.
In my experience, the third way is reliable and easy to implement (easier than the second). The prerequisite is that the cover appears on the page.
More conceptually, the fingerprint can legitimately be considered image metadata (as is already done for audio), without prescribing how it is used and without implementing a search-by-image service: third-party applications can fetch the fingerprint and use it for a local comparison.
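To make the local-comparison idea concrete, here is a minimal sketch of one common fingerprinting technique, a perceptual "difference hash" (dHash). Everything here is illustrative: the pixel grids are toy values, and real image decoding and resizing (e.g. with Pillow) are omitted so the example stays dependency-free. The Cover Art Archive does not currently publish such fingerprints; this only shows what a client-side comparison could look like.

```python
def dhash(pixels):
    """Difference hash of a grayscale pixel grid.

    `pixels` is a list of rows, each one pixel wider than the hash
    (e.g. 9x8 for a 64-bit hash): each bit records whether a pixel
    is brighter than its right-hand neighbour.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes; small means likely the same cover."""
    return bin(a ^ b).count("1")

# Two toy "images": the same gradient at different brightness hashes identically,
# so a re-encoded or slightly altered copy of a cover still matches.
cover = [[10, 20, 30], [30, 20, 10]]
copy = [[11, 22, 33], [33, 22, 11]]
print(hamming(dhash(cover), dhash(copy)))  # 0 -> a match
```

An application holding a local fingerprint would simply compare Hamming distances against a threshold instead of shipping the image to a search service.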
I still find the idea interesting and I am eager to contribute to this if I'm not the only one who thinks so!
Thanks for your feedback.