Before doing so I expanded my VM disk size to 50 GB, just in case.
I then ran: sudo docker-compose run --rm musicbrainz createdb.sh -fetch
It spent about two hours downloading the database and setting it up, but finally failed with an error:
I had another error popup saying I was out of disk space on a particular partition. I stupidly closed the dialog before getting the full details, and tried again to run:
sudo docker-compose run --rm musicbrainz createdb.sh -fetch
…to reproduce the error, but it starts downloading the 4 GB again. It’s not practical for me to go through a two-hour, 4 GB download cycle for every troubleshooting iteration, so I cancelled the second run.
Is there any way for me to get a command that recognises the 4 GB has already been downloaded and resumes from where it left off? Does the fact that I have already run the command twice mean the original file has been overwritten?
By the way, I also tried running the command without the fetch:
sudo docker-compose run --rm musicbrainz createdb.sh
series_alias_type 2 100% 1748 0.00 sec
No data file found for 'series_attribute', skipping
No data file found for 'series_attribute_type', skipping
No data file found for 'series_attribute_type_allowed_value', skipping
Thu Jun 9 09:14:34 2022 : load series_gid_redirect
series_gid_redirect 128 100% 64321 0.00 sec
Thu Jun 9 09:14:34 2022 : load series_ordering_type
series_ordering_type 2 100% 1828 0.00 sec
Thu Jun 9 09:14:34 2022 : load series_type
series_type 13 100% 12487 0.00 sec
Thu Jun 9 09:14:34 2022 : load track
track 14127104 36% 398361Error loading /media/dbdump/tmp/MBImport-GlHqmEGB/mbdump/track: 08000 DBD::Pg::db pg_putcopydata failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request. at /musicbrainz-server/admin/MBImport.pl line 304, line 14127664.
08006 DBI connect('dbname=musicbrainz_db;host=db;port=5432','musicbrainz',...) failed: connection to server at "db" (172.18.0.3), port 5432 failed: FATAL: the database system is in recovery mode (in cleanup) at /root/perl5/lib/perl5/Throwable.pm line 76, line 14127664 during global destruction.
Failed to import dataset.
Thu Jun 9 09:15:10 2022 : InitDb.pl failed
The instructions work fine if you follow them point by point (I use them with every schema upgrade, starting from scratch). I use VMware with a total of 300 GB of hard disk space, as suggested in the instructions (your 50 GB is much too small, even without indexed search), install the latest Ubuntu on it, and then start by installing Docker, as mentioned under “Required software”:
The script createdb.sh would fail because it calls InitDb.pl with some other dump files too (see the list of downloaded files). This most likely explains the first failure: your disk ran out of space, which interrupted the download, and InitDb.pl was called with broken files.
Downloaded files are stored in a docker volume which usually is on your filesystem:
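If the downloads did complete, they may still be sitting in that volume. A sketch of how to locate and inspect it; the volume name musicbrainz-docker_dbdump is an assumption, so check the actual names on your machine:

```shell
# List volumes; look for the one holding the dumps (the musicbrainz-docker
# compose project usually prefixes the volume name).
sudo docker volume ls

# Print the host path backing the volume.
sudo docker volume inspect --format '{{ .Mountpoint }}' musicbrainz-docker_dbdump

# See what has been downloaded so far, and how much free space is left
# on the filesystem Docker uses.
sudo ls -lh /var/lib/docker/volumes/musicbrainz-docker_dbdump/_data
sudo df -h /var/lib/docker
```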
Most virtualization software supports some kind of “snapshots”. A snapshot preserves the state and data of a virtual machine at a specific point in time.
For example: You create a new VM, install Ubuntu with all updates and install docker as mentioned above.
Then you create a snapshot.
After creating the snapshot you continue with the instructions. If an error occurs, you can go back to your previous snapshot and repeat the steps from that point without re-creating the entire VM.
Of course, you can create multiple snapshots of your VM at different points in time.
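With VMware this can also be scripted from the host using the vmrun tool that ships with Workstation; the .vmx path and snapshot name below are examples, not values from this thread:

```shell
# Take a snapshot right after Docker is installed (path and name are examples).
vmrun -T ws snapshot ~/vmware/ubuntu/ubuntu.vmx "docker-installed"

# List existing snapshots for the VM.
vmrun -T ws listSnapshots ~/vmware/ubuntu/ubuntu.vmx

# After a failed import, roll back to the clean state and try again.
vmrun -T ws revertToSnapshot ~/vmware/ubuntu/ubuntu.vmx "docker-installed"
```

The GUI’s snapshot menu does the same thing; the command line just makes the retry loop quicker.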
I’m afraid that’s not enough. AFAIK, the process checks the newest LATEST file online from your chosen source server and compares it to your manually downloaded files. I’m not sure if there is a way to use “old” local dump files when newer dump files are available online. Maybe @yvanzo can tell us for sure.
It is expected to abort when the timestamps differ, because loading dumps made at different times would break the database in several ways (foreign key constraints and replication packets).
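For illustration: the server publishes the export timestamp in a LATEST file, and all files of one dump set carry the same timestamp. A minimal sketch of the consistency check; the same_dump helper is hypothetical, and the URL in the comment is the public MusicBrainz data server:

```shell
# The newest full-export timestamp can be fetched from the public
# MusicBrainz data server, e.g.:
#   curl -fsS https://data.metabrainz.org/pub/musicbrainz/data/fullexport/LATEST

# Mixing files from different dump sets is what the import refuses to do.
# A tiny hypothetical helper to check that two timestamps agree:
same_dump() {
  [ "$1" = "$2" ]
}

same_dump "20220608-001546" "20220608-001546" && echo "same dump set"
same_dump "20220608-001546" "20220609-001546" || echo "different dump sets: import would abort"
```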