First of all, thank you for this amazing script!
It makes my life a lot easier!
Just tested the import button for VGMdb, and it seems to fetch only certain image types.
Tried it with this release (Majo no Takkyuubin Image Album | TKCA-71030 - VGMdb); the images for the Obi and Insert types weren’t fetched, and I had to add them manually.
@chaban mentioned previously that you need to log in to see all images, but I was under the impression that the 3rd-party vgmdb.info API provided all the covers. Evidently that isn’t the case. Unfortunately, as mentioned previously, I have no intention of supporting anything that requires authentication, so those images can’t be grabbed unless you manually enter the direct links.
I’m wondering why these images require an account to access, though, so if anyone has any info (or possibly even a workaround), please share. If we could somehow detect when images are missing, we could at least warn about it. It appears that this table displays the correct number of covers and doesn’t require an account to access. However, extracting that information in an automated way would be fairly inefficient, as there doesn’t seem to be a way to get it for a specific album in a single request; we’d need to traverse that whole list. It’s possible to run a binary search, but that would still require around 6 network requests for releases starting with M and a page size of 100. Admittedly that’s much better than 251 requests in the worst case with a simple linear scan, but it’s still quite a lot of requests…
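To illustrate the binary search idea: with N sorted pages, locating an album takes at most ⌈log₂ N⌉ page fetches, e.g. 8 fetches for 251 pages (fewer when the search is already restricted to one initial letter, as in the estimate above). A self-contained sketch, where the `fetch_page` callback and page layout are simulated rather than the real vgmdb.info API:

```python
import math

PAGE_SIZE = 100

def find_album_page(fetch_page, num_pages, album_title):
    """Binary-search a sorted, paginated list for the page containing
    `album_title`, counting how many pages we actually fetch."""
    requests = 0
    lo, hi = 0, num_pages - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        page = fetch_page(mid)  # one network request
        requests += 1
        if album_title < page[0]:
            hi = mid - 1
        elif album_title > page[-1]:
            lo = mid + 1
        else:
            return page, requests
    return None, requests

# Simulated backend: 251 pages of alphabetically sorted titles.
titles = [f"M{i:05d}" for i in range(251 * PAGE_SIZE)]
pages = [titles[i:i + PAGE_SIZE] for i in range(0, len(titles), PAGE_SIZE)]

page, n = find_album_page(lambda i: pages[i], len(pages), "M12345")
print(n, math.ceil(math.log2(251)))  # at most ceil(log2(251)) = 8 requests
```

The 251-requests worst case for a linear scan shrinks to single digits, though each fetch is still a full page load.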
I’ve created #62 to track, but I don’t expect this to get fixed in the near future. I’ll try to get in touch with the people behind the 3rd-party API and see if we can do anything about it.
Yeah, it seems that getting all the covers without account support won’t be possible at the moment.
I do wonder why they chose to lock stuff like the Obi. I do understand why they would lock R18 cover art tho.
It seems that the usual cover art types available without account support are: Front, Back, Booklet and Medium. Anything extra seems to be locked, but the images themselves can still be reached without an account: copying the URL to upload them works, and I can also open the URL in an incognito tab without logging in.
I like the idea of warning about “incomplete” uploads too, but I understand it could be tiresome to fix in the near future.
Since the direct links to images themselves can be accessed without an account, a possible option would be to run a userscript (either the main script itself or a smaller standalone one) on VGMdb and somehow extract the links there. If the user is logged in, that script will be able to extract all images. The links could then either be put inside of a text box and copied to the clipboard (requires #53 to be addressed) or we could add a button to “seed” them directly to the addition page, like what’s being done on a-tisket currently. The latter could make it easier to automatically fill types and comments, but would require 1) being able to seed multiple links (currently not supported, but should be viable) and 2) link VGMdb albums to MB releases (which should be possible).
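Seeding multiple links mostly comes down to repeating one query parameter per image. A sketch of what the seeding URL could look like; note that `artwork_url` is a made-up parameter name, not the script’s actual seeding scheme:

```python
from urllib.parse import urlencode

def build_seed_url(mbid, image_urls):
    """Build an add-cover-art URL carrying several image links.
    NOTE: 'artwork_url' is a hypothetical parameter name used for
    illustration; the real seeding format may differ."""
    params = urlencode([("artwork_url", u) for u in image_urls])
    return f"https://musicbrainz.org/release/{mbid}/add-cover-art?{params}"

url = build_seed_url(
    "00000000-0000-0000-0000-000000000000",
    ["https://example.com/front.jpg", "https://example.com/obi.png"],
)
print(url)
```

`urlencode` with a list of pairs repeats the key once per value, which is exactly what “seeding multiple links” would need on the receiving side.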
I don’t know if there’s a userscript like that available, but until now I used a small Python script that downloaded all the images, and then I just uploaded them to the CAA using the regular uploader option. It did require login tho.
It might be possible to tweak it to extract all the URLs and output them to text
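A minimal version of that tweak might look like the following. The HTML snippet is made up, not verified against VGMdb’s actual markup; in practice you’d feed the parser the logged-in album page (downloaded with `urllib.request` or similar) and it would print the full-size image links, one per line:

```python
from html.parser import HTMLParser

IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif")

class ImageLinkExtractor(HTMLParser):
    """Collect every href/src attribute that points at an image file."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value \
                    and value.lower().endswith(IMAGE_EXTENSIONS):
                if value not in self.urls:  # keep order, drop duplicates
                    self.urls.append(value)

# Made-up snippet standing in for a fetched album page:
html = """
<a href="https://example.com/covers/front.jpg">
  <img src="https://example.com/covers/front-thumb.jpg">
</a>
<a href="https://example.com/covers/obi.png">Obi</a>
"""
parser = ImageLinkExtractor()
parser.feed(html)
print("\n".join(parser.urls))
```

The output could then be pasted straight into the uploader instead of downloading and re-uploading each file.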
Apologies for the radio silence over the past couple of weeks, I’d mostly been working behind the scenes to make the userscripts and the development process more reliable, and most of that is now done. In the meantime I’ve released a couple of fixes to various userscripts:
CAA Dimensions will properly handle queued PDF uploads on add-cover-art pages (thanks chaban!)
Collapse Work Attributes should now run on other pages of artist works on all browsers and userscript engines. Previously it worked on Violentmonkey, but not on Tampermonkey. (thanks Tiske Tisja!)
The incompatibility between Paste Multiple External Links and loujin’s Wikipedia/Wikidata/VIAF/ISNI script has been fixed on loujin’s side.
Enhanced Cover Art Uploads:
open.qobuz.com links are now supported.
Qobuz goodies are extracted when available.
Improved extraction of Tidal covers
Optimised extraction of Discogs covers
We’ve changed the way we’re seeding covers through URLs (e.g. on a-tisket), which opens up possibilities to address the VGMdb issue of covers hidden behind a login.
URL redirects are handled more safely.
Redirects in all providers are checked to ensure that we get redirected to the same release as the original URL (think iTunes to Apple Music URLs); if not, the extraction is aborted with an error. You can then choose to use the target URL directly.
Redirects in direct image links are still allowed, but will emit a warning and will be indicated in the edit note.
Known issues:
Those warnings get overwritten in the status banner quite quickly.
The redirect error message for providers is very long and leads to bad placement of page elements.
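The redirect handling described above roughly amounts to the following logic. Function names and the ID-extraction regex are illustrative, not the script’s actual code, and each real provider would need its own ID extractor:

```python
import re
from urllib.parse import urlparse

def release_id(url):
    """Pull a provider release ID out of a URL path (illustrative
    pattern; real providers each need a dedicated extractor)."""
    match = re.search(r"/(?:album|release)/(?:[^/]+/)?(\d+)", urlparse(url).path)
    return match.group(1) if match else None

def check_redirect(original_url, final_url, is_image=False):
    """Return (ok, warning). A provider redirect to a different
    release aborts; an image-link redirect passes with a warning."""
    if is_image:
        warning = None if final_url == original_url else "image URL was redirected"
        return True, warning
    if release_id(original_url) != release_id(final_url):
        return False, "redirected to a different release"
    return True, None

# iTunes-style URL redirecting to the equivalent Apple Music URL:
ok, warn = check_redirect(
    "https://itunes.apple.com/us/album/1440857781",
    "https://music.apple.com/us/album/1440857781",
)
print(ok, warn)  # same release ID on both sides, so the redirect is accepted
```

A mismatch returns the error case, at which point the user can decide to retry with the target URL directly, as described above.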
@kellnerd is also currently working on fixing the Amazon provider (#86), which is currently missing some images if there are more than 4, and also isn’t extracting the highest possible resolution. Those fixes should get released automatically once we merge that PR (like this), thanks to the behind-the-scenes changes. So you can expect plenty more regular updates without waiting on me to decide when a new version should be released.
Edited to add: I could use some additional input on this Apple Music PNG/JPEG issue, if anyone knows something about image compression etc.
I’ve tried to install it in both Chrome and Brave, but both times I get an “Invalid Script Header” error. Am I doing something wrong? I do have it in developer mode.
@DemonKingOdio Make sure you’re installing it through a userscript engine like Tampermonkey or Violentmonkey and not as a native Chrome userscript. Native Chrome userscripts are severely limited in functionality.
Add warnings from “Supercharged cover art uploads edits” such as releases in the future or unusual aspect ratio for packaging/format
Bandcamp provider:
Indicate lack of cover instead of an error (“Failed to grab images Error: Could not find required element”), ideally marking it as such once possible: MBS-5450
Rationale: Some providers such as iTunes/Apple Music return HTTP 200 for releases that are no longer available. Would be nice if the script could detect this too.
Bandcamp includes a div with id missing-tralbum-art:
Now that track images are supported, there is a problem with duplicate images. Some releases have the exact same image added to multiple tracks, sometimes all of them:
Quick reply, I’ll look into it in more detail in the near future.
I thought I had fixed those “r: HTTP error” problems; I’ll look into it again. Will look into the Deezer problem with missing status text too. Update: Fixed.
Add warnings from “Supercharged cover art uploads edits” such as releases in the future or unusual aspect ratio for packaging/format
In the edit note or on the page itself? Could be useful, but would be much easier to implement if Supercharged were ported, which it isn’t yet. I’ll see what I can do in the meantime.
About the duplicate track images: We were already aware of this but didn’t think it’d be that common (that’s a pretty big list, given that this change has only been out for 2 days). We are actually deduplicating the track images, but we’re basing that on the URL. In your first example, although all track images are identical (and identical to the front cover too), each of them has a different URL. I ran into the same issue with Soundcloud (not released yet), where we’ll deduplicate based on thumbnail data. I’ll investigate for Bandcamp too. What I can already say is that in the cases where the resolution and AR differ, there’s nothing that can be done about it, other than running the image diffing algorithm from Supercharged on all of the track images, which isn’t really feasible to do for every image (very resource-intensive). #13 should make it easier to compare those cases, but that also requires the Supercharged port.
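Deduplicating on thumbnail data instead of URLs essentially means hashing the fetched image bytes. A sketch with in-memory stand-ins for the thumbnail payloads (in the script these would be the actual fetched bytes):

```python
import hashlib

def dedupe_images(images):
    """Keep only the first image for each distinct payload.
    `images` is a list of (url, payload_bytes) pairs."""
    seen = set()
    unique = []
    for url, payload in images:
        digest = hashlib.sha256(payload).digest()
        if digest not in seen:
            seen.add(digest)
            unique.append((url, payload))
    return unique

# Three track "images" with different URLs but identical bytes,
# plus one genuinely different image:
front = b"\x89PNG...front"
images = [
    ("https://cdn.example/track1.jpg", front),
    ("https://cdn.example/track2.jpg", front),
    ("https://cdn.example/track3.jpg", front),
    ("https://cdn.example/back.jpg", b"\x89PNG...back"),
]
print([url for url, _ in dedupe_images(images)])
```

URL-based dedup would keep all four entries here; content-based dedup keeps two. It still can’t catch the resolution/AR-differs case mentioned above, since those payloads genuinely differ byte-for-byte.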
Including the image type and index in the filename is intentional: the index is used to uniquely identify the image when filling in types and comments, and the extension is there because some providers may use the “png” suffix while serving JPEGs, etc. We could clean that up a bit, though.
Update: The messaging w.r.t. missing Bandcamp covers and deleted Apple Music releases has been improved.
As for whether the script has an impact on cover art additions: I think it does. I’ve done some DB queries on the edit note content and found 14956 edit notes containing “Enhanced Cover Art Uploads”. That’s a snapshot from yesterday, so it’s likely over 15k by now. On average, that’s about 483 covers added by the script per day. For reference, scanning for “Upload to CAA from URL” gives ~16800 results since 2021-06-02. Assuming everyone switched to Enhanced when it came out, that’s only 150/day on average.
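As a quick sanity check, the quoted daily averages imply the following time spans; the snapshot date is inferred from the numbers above, not stated anywhere:

```python
from datetime import date, timedelta

total_enhanced = 14956   # edit notes containing "Enhanced Cover Art Uploads"
total_caa_url = 16800    # edit notes containing "Upload to CAA from URL"
caa_url_since = date(2021, 6, 2)

# The 150/day average implies the snapshot covers about this many days:
caa_days = round(total_caa_url / 150)
snapshot = caa_url_since + timedelta(days=caa_days)

# ...and 483/day implies Enhanced has been leaving notes for about:
enhanced_days = round(total_enhanced / 483)

print(caa_days, snapshot, enhanced_days)
```

So the figures are mutually consistent: roughly a month of Enhanced edits versus roughly 112 days of “Upload to CAA from URL” edits.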
Since I’m here anyway, short summary of the most important changes in the past week:
The previously-mentioned Amazon issue has been fixed (thanks @kellnerd!)
Seeding from a-tisket works on Tampermonkey again
We’re now grabbing Apple Music JPEG version if the source format was JPEG
Bandcamp track images and square thumbnails are grabbed when applicable
You can now paste multiple URLs separated by whitespace, and the issue with URL decoding has been addressed
Added QUB Musique provider (basically the same as Qobuz)
We’ve updated the location of the “Supported providers” document, so if you have that bookmarked, please update. The old URL should continue to work for a while, but at some point it’ll probably get binned.
Various behind-the-scenes changes and improvements.
“Select All Update Recordings” is now enabled on “add release” pages too (thanks @jesus2099!)
Previously “coming soon”, now here:
Support for Soundcloud (+ track images, deduplicated where possible)
Support for Beatport (beware of upscaled images, they’re common on old releases)
If you’ve got other providers that you’d like to see added, let me know!
No hurry. I’d probably put it somewhere easily visible. Edit notes are usually the last place I’d look, although sometimes when I intend a more complex edit, I reread it and then think “what the fuck am I doing?”
When importing images, sometimes the same image is queued twice. I guess it’s due to mouse problems. Got a clue: it might actually be related to Ctrl+Click (I open multiple releases in advance and prepare images as I Ctrl+Tab through them).
The edit note text doesn’t get duplicated, though. Usually I remove it from the form, but at least one time I missed it:
Trivia:
Even before the script supported Qobuz, I noticed that for a few releases the API returns 404 even though the release is still visible in the shop, at least in the linked locale.
Another fresh example from the forum
Bandcamp track images are sometimes used in unorthodox ways:
Thanks again for all of those reports! I don’t think we can do much about the Bandcamp track image issue except for “Fetch front image only” and some better previewing utilities on the cover art upload page (which will get added eventually, promised!), but I’ll see what I can do about the rest.
Still loooooooooooooving this script! Thought I’d reiterate my request from earlier, but this time with two possible ideas. This relates to removing the pointless click + wait for ‘this release has no cover art’ page to load, then click again to add cover.
When an album has no cover, make the ‘Cover Art’ tab go straight to the ‘Add Cover Art’ page:
(yes, MB inline script does this but I’m not using it ATM)
Put the import links on the ‘Cover Art’ page, so the process can be started from there directly:
I’m not sure what to do with these, because it’s sort of hitting the point of diminishing returns. As you said, there are a lot of possible marketplaces that could be added, and each one of them takes time upfront to implement, in addition to time spent maintaining them when their websites change. As much as I’d like to implement support for every possible website, it’s probably better to limit the scope somehow to avoid ending up with an unmaintainable or unmaintained mess, especially since we can input image URLs directly. I’m not quite sure what the cutoff would be, though.
I’ve also been thinking about a sort of “generic” provider that would try to extract the image(s) from any webpage using some heuristics. But as I commented in that thread, I’m not sure whether that’s a good idea either.