I think it’s dangerous if we try to clean up random site URLs.
This is not a Facebook URL, this is a random site.
A random site could rely on an arbitrary parameter named ?fbclid= to display the correct page.
I am not saying that’s the case here, but we can only clean up standardised, known sites.
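To illustrate the "only clean known sites" idea, here is a minimal sketch of allowlist-based stripping. The host list and function name are hypothetical, purely for illustration; the real editor’s cleanup rules are more involved.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical allowlist: hosts where we *know* fbclid is pure tracking noise.
SAFE_TO_CLEAN = {"facebook.com", "www.facebook.com", "www.instagram.com"}

def strip_fbclid(url: str) -> str:
    """Remove fbclid only when the host is on the known-safe list."""
    parts = urlsplit(url)
    if parts.hostname not in SAFE_TO_CLEAN:
        # Unknown site: fbclid might be meaningful there, so leave it alone.
        return url
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k != "fbclid"]
    return urlunsplit(parts._replace(query=urlencode(query)))
```

With this approach a random site’s ?fbclid= survives untouched, while a known Facebook URL gets cleaned.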
Ugh… yeah… 99.9% of the time “fbclid” stands for “Facebook tracking crap”, but I guess we should allow for the case where some web designer decides that “fbclid” stands for “Futurepop Band CD List ID” or something weird like that, and it’s actually necessary to that website.
Hmm… I wonder what would happen if you took one of those hypothetical URLs where “fbclid” means something useful and fed it through whatever Facebook machinery adds this crap. Would Facebook just happily break all the URLs from that hypothetical website?
Although, if there’s a cleaning regimen that is applied to URLs regardless of type, I think the links editor should have a “don’t clean this URL” checkbox for those 0.1% corner cases where the thing that looks like a tracking parameter is actually essential to the URL.
Relatedly, does the new version of the external links editor still do that thing where, if you edit an artist, it runs the cleanup on URL relationships you didn’t modify?
(asking without reading all 102 posts to know what was already said)
Regarding the warning we get that says “this link is present in position number two”:
could we get a similar warning when someone tries to add, for example:
artisthomepage dot com
artisthomepage dot com slash biography
artisthomepage dot com slash images
Maybe something like “this domain is used in position number 2”.
This won’t stop people from intentionally abusing the link system, but it could make honest users aware that we don’t need a link to each individual page.
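The proposed warning could be as simple as a host comparison against the already-listed links. A rough sketch (function name and message wording are my own, not the actual editor code):

```python
from urllib.parse import urlsplit

def same_domain_warning(existing_urls, new_url):
    """Return a warning if new_url's host already appears among the
    existing link relationships, else None. Illustrative sketch only."""
    new_host = urlsplit(new_url).hostname
    for position, url in enumerate(existing_urls, start=1):
        if urlsplit(url).hostname == new_host:
            return f"This domain is already used in position number {position}"
    return None
```

So adding artisthomepage.com/biography when artisthomepage.com is already linked would trigger the warning, without blocking the edit.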
Definitely, in the version that existed 6 months ago, it would run the checker on all the existing URLs. Most of the time there was no effect, since for a URL that had already been passed through the checker when it was first submitted, the checker would not suggest further changes on later runs. However, I do remember that when the URL checker/updater was updated from preferring http: for bandcamp to preferring https: for bandcamp, editing artists that had bandcamp URLs would add an http:→https: edit of the bandcamp URL even if I had not touched it. (There may eventually have been a mass edit to convert all the remaining ones.)
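The http:→https: preference described above amounts to a host-scoped scheme normalization. A hedged sketch of what such a rule might look like (the function name and exact host matching are assumptions, not the real checker’s code):

```python
from urllib.parse import urlsplit, urlunsplit

def prefer_https_bandcamp(url: str) -> str:
    """Upgrade http:// to https:// for bandcamp hosts only (sketch)."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.scheme == "http" and (host == "bandcamp.com"
                                   or host.endswith(".bandcamp.com")):
        return urlunsplit(parts._replace(scheme="https"))
    # Any other host or scheme is left untouched.
    return url
```

Run over every URL relationship on save, a rule like this would explain the side-effect edits: any untouched http: bandcamp link would come back changed.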
Similarly, I was able to bypass the JavaScript to create the following edits: https://musicbrainz.org/edit/56793286 https://musicbrainz.org/edit/61590936
but soon after, someone else edited the artist, creating a most-likely-unintentional side effect of cleaning up the non-MB-standard URL I had added.
Okay, it appears the new version of the links editor doesn’t mess with URLs that you didn’t touch when you edit other stuff about the artist.
As seen in
where the non-MB-standard discogs URL remains after I added the metal-archives URL.
With the old version, adding the metal-archives URL (or changing the name/type/disambig/etc) would cause the discogs URL to be re-checked.
I don’t think you need the clone script. Okay, the screenshots needed the script, but the issue is still there on a normal edit; it’s just that with a normal edit there is a way of dealing with it.
Because the link I attempt to paste is technically still on the Release, but deleted from the GUI, I cannot paste it back in.
You can test on that live page above, as there’s no need to save anything. The error appears as you attempt to paste.
-=-=-
Ah - I see a subtle difference here now that I repeat it on a live page of a current Release. When I was working from the clone script, I was therefore making a NEW Release, and that is how I got the second screenshot. I guess you did not expect a new release that already had links.
This is actually a pre-existing bug, older than the new code. I have some code to fix it, IIRC, although I should check its status now that I’m back from my holidays.
I can find holes in anything. Thanks for letting us know it can be ignored.
Now that I see it only happens on the clone, it isn’t much of an issue. It is also easy to get around: I just deleted all the links, saved my edits, then came back and added the new one. Nothing lost.
This is not the script. It is just visible with the script because it creates a New Release with pre-attached URLs. On New Releases, deleting URLs makes them disappear from the page; deleting URLs on an old Release keeps them in view. Compare the screenshots.
As @reosarevok notes, this is just the different GUI of a new or old Release. Not something for the script.