Replication error: "current row … contains a different value … than the replication packet suggests it should have"

Hello, folks:
I have been running a replica MusicBrainz-Docker instance and database since early October. Replication worked fine for a few weeks, but today I noticed the following error messages in my log. Something appears to have gone wrong with replication on Oct 31 or Nov 1 UTC.

Here is what I see in /musicbrainz-server/mirror.log in my musicbrainz-docker-musicbrainz1 instance.

Sat Nov  1 03:00:03 2025 : Downloading https://metabrainz.org/api/musicbrainz/replication-181059-v2.tar.bz2 to /tmp/replication-181059-v2.tar.bz2
Sat Nov  1 03:00:04 2025 : Decompressing /tmp/replication-181059-v2.tar.bz2 to /tmp/loadrep-5eH7uV
TIMESTAMP
COPYING
README
REPLICATION_SEQUENCE
SCHEMA_SEQUENCE
DBMIRROR_VERSION
mbdump/pending_data
mbdump/pending_keys
mbdump/pending_ts
Sat Nov  1 03:00:04 2025 : This packet was produced (or begins) at 2025-10-29 01:13:27.182902+00
Sat Nov  1 03:00:04 2025 : Importing the pending data files
Sat Nov  1 03:00:05 2025 : starting import
Table                               Rows est%  rows/sec
Sat Nov  1 03:00:05 2025 : load dbmirror2.pending_data
dbmirror2.pending_data             41100 100%    128922 0.32 sec
Sat Nov  1 03:00:05 2025 : load dbmirror2.pending_keys
dbmirror2.pending_keys                87 100%     57615 0.00 sec
Sat Nov  1 03:00:05 2025 : load dbmirror2.pending_ts
dbmirror2.pending_ts                1367 100%    226963 0.01 sec
Sat Nov  1 03:00:05 2025 : import finished
Loaded 3 tables (42554 rows) in 0 seconds
Sat Nov  1 03:00:05 2025 : Removing /tmp/loadrep-5eH7uV
Sat Nov  1 03:00:06 2025 : Processing replication changes
     XIDs     Stmts est%  XIDs/sec  Stmt/sec
        0         0   0%         0         0The current row in musicbrainz.artist_credit with key id='898150' contains a different value in column ref_count (2106) than the replication packet suggests it should have as the old value (2146). at /musicbrainz-server/admin/replication/ProcessReplicationChanges line 497.
The current row in musicbrainz.release_group_meta with key id='2477750' contains a different value in column release_count (7) than the replication packet suggests it should have as the old value (6). at /musicbrainz-server/admin/replication/ProcessReplicationChanges line 497.
Use of uninitialized value $params[0] in join or string at /musicbrainz-server/admin/replication/../../lib/Sql.pm line 129.
Use of uninitialized value $params[4] in join or string at /musicbrainz-server/admin/replication/../../lib/Sql.pm line 129.
Failed query:
        'INSERT INTO musicbrainz.release_meta (amazon_asin, cover_art_presence, date_added, id, info_url) VALUES (?, ?, ?, ?, ?)'
        ( absent 2025-10-29 00:19:59.93167+00 5301234 )
23505 DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "release_meta_pkey"
DETAIL:  Key (id)=(5301234) already exists. [for Statement "INSERT INTO musicbrainz.release_meta (amazon_asin, cover_art_presence, date_added, id, info_url) VALUES (?, ?, ?, ?, ?)" with ParamValues: 1=undef, 2='absent', 3='2025-10-29 00:19:59.93167+00', 4='5301234', 5=undef]
 at /musicbrainz-server/admin/replication/../../lib/Sql.pm line 129.
        Sql::catch {...} (MusicBrainz::Server::Exceptions::DatabaseError=HASH(0x555557d00688)) called at /musicbrainz-server/local/lib/perl5/Try/Tiny.pm line 123
        Try::Tiny::try(CODE(0x555557d050c8), Try::Tiny::Catch=REF(0x555557d00d70)) called at /musicbrainz-server/admin/replication/../../lib/Sql.pm line 130
        Sql::do(Sql=HASH(0x555557816368), "INSERT INTO musicbrainz.release_meta (amazon_asin, cover_art_"..., undef, "absent", "2025-10-29 00:19:59.93167+00", 5301234, undef) called at /musicbrainz-server/admin/replication/ProcessReplicationChanges line 530
        main::dbmirror2_insert(Sql=HASH(0x555557816368), "musicbrainz.release_meta", HASH(0x555557b97fd0)) called at /musicbrainz-server/admin/replication/ProcessReplicationChanges line 339
        main::dbmirror2_command(Sql=HASH(0x555557816368), ARRAY(0x555557730078)) called at /musicbrainz-server/admin/replication/ProcessReplicationChanges line 188

That is where the day’s log ends, with the initial error. Here are the first few lines of the following day’s log:

Sun Nov  2 03:00:02 2025 : Continuing a previously aborted load
Sun Nov  2 03:00:02 2025 : Processing replication changes
     XIDs     Stmts est%  XIDs/sec  Stmt/sec
        0         0   0%         0         0The current row in musicbrainz.artist_credit with key id='898150' contains a different value in column ref_count (2106) than the replication packet suggests it should have as the old value (2146). at /musicbrainz-server/admin/replication/ProcessReplicationChanges line 497.

I cut off the log after the first few lines of Nov 2 UTC. Every subsequent day appears to begin with “Continuing a previously aborted load” and repeats the error message about “contains a different value … than the replication packet suggests it should”.

The time stamps appear to be UTC.

I do not know where the mirror.log contents from before Nov 1 UTC are stored. I believe I saw that the musicbrainz-server instance has a log rotation script, but I don’t know where the old files are kept. I don’t see them in /musicbrainz-server/*.

Sometime, probably on or after Oct 31 04:00 UTC, I ingested the Oct 30 mbdump.edit.tar.bz2 and mbdump.editor.tar.bz2. So there is a correlation between that ingestion and the replication failure, but I don’t have evidence that one caused the other.

Any suggestions for how to get replication working again? I could perhaps blow away my database and ingest a fresh set of data, but I am hoping for something less drastic.

I found the log rotation directive I recalled seeing. It is in /musicbrainz-server/admin/cron/mirror.sh. The relevant lines seem to be:

X=${LOGROTATE:=/usr/sbin/logrotate --state $MB_SERVER_ROOT/.logrotate-state}

… and …

$LOGROTATE /dev/stdin <<EOF
$MIRROR_LOG {
    daily
    rotate 30
}
EOF

According to the logrotate(8) man page, as I read it, this should accumulate each day’s mirror.log as /musicbrainz-server/mirror.log.1 through mirror.log.30. However, I don’t see those files in my musicbrainz-server instance. Instead, mirror.log appears to accumulate each day’s entries and then gets truncated on the first of the month. I don’t understand why this happens, but I also don’t think it’s a likely cause of the replication error.

I think I understand why these logs are not rotating. I filed MBVM-103 (“mirror.log logs do not rotate, in musicbrainz-server (docker)”) to track that bug. I don’t think it is related to my replication error message.

Sometime, probably on or after Oct 31 04:00 UTC, I ingested the Oct 30 mbdump.edit.tar.bz2 and mbdump.editor.tar.bz2. So there is a correlation between that ingestion and the replication failure, but I don’t have evidence that one caused the other.

That would be the issue, since MBImport.pl updates the replication_control table by default. Importing those dumps would have rolled back current_replication_sequence in that table, so replication is now trying to apply packets that have already been applied.
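As a first check, you could confirm what the replica currently thinks its sequence is. A hedged sketch, assuming musicbrainz-docker’s default service name (db), database user (musicbrainz), and database name (musicbrainz_db) — adjust any of these if your setup differs:

```shell
# Show the replica's current replication state. The service, user, and
# database names below are assumed defaults for musicbrainz-docker.
docker compose exec db psql -U musicbrainz musicbrainz_db \
  -c 'SELECT current_schema_sequence, current_replication_sequence, last_replication_date FROM replication_control'
```

If the sequence reported there is lower than the packet numbers mirror.log was downloading before Oct 31, that fits the rollback explanation.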

You might be able to find the point in your mirror.log where the replication sequence being downloaded jumps backwards rather than forwards, and update the replication_control.current_replication_sequence column to the real sequence your database is at.
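To sketch what that scan could look like: the log lines below are a made-up sample with invented packet numbers, so point the pipeline at your real /musicbrainz-server/mirror.log instead.

```shell
# Made-up sample of mirror.log "Downloading" lines; the last packet number
# drops back below the previous one, simulating a rolled-back sequence.
cat > /tmp/mirror-sample.log <<'EOF'
Fri Oct 31 03:00:03 2025 : Downloading https://metabrainz.org/api/musicbrainz/replication-181057-v2.tar.bz2 to /tmp/replication-181057-v2.tar.bz2
Fri Oct 31 03:00:09 2025 : Downloading https://metabrainz.org/api/musicbrainz/replication-181058-v2.tar.bz2 to /tmp/replication-181058-v2.tar.bz2
Sat Nov  1 03:00:03 2025 : Downloading https://metabrainz.org/api/musicbrainz/replication-181050-v2.tar.bz2 to /tmp/replication-181050-v2.tar.bz2
EOF

# Extract each downloaded packet's sequence number, then report any point
# where the sequence goes backwards instead of forwards.
sed -n 's/.*Downloading .*replication-\([0-9][0-9]*\)-v2.*/\1/p' /tmp/mirror-sample.log \
  | awk 'NR > 1 && $1 < prev { print "jumped backwards: " prev " -> " $1 } { prev = $1 }'
# prints: jumped backwards: 181058 -> 181050
```

The sequence printed on the left of the jump would be the value to restore into replication_control.current_replication_sequence, using your real number rather than the sample one.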

But the most straightforward option is dropping and recreating the database from scratch, making sure to pass --noupdate-replication-control to MBImport.pl when importing the edit/editor dumps next time.
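For the record, a sketch of what that import invocation might look like next time. The dump filenames are taken from the post above, and the working directory is an assumption — run it from wherever you normally invoke MBImport.pl inside the container:

```shell
# Hypothetical re-import of the edit/editor dumps on a replica,
# telling MBImport.pl not to touch the replication_control table.
./admin/MBImport.pl --noupdate-replication-control \
    mbdump.edit.tar.bz2 mbdump.editor.tar.bz2
```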

If you can’t locate the correct sequence in mirror.log (edit: I missed the part where you said you’re missing contents from before Nov 1 UTC, so this sounds likely) and don’t want to re-import everything, but also don’t mind hacking a Perl file, a fast workaround may be to edit admin/replication/ProcessReplicationChanges inside the container and initialize $ignore_conflicts to 1. That will cause it to ignore any conflicting statements until you’re able to catch up to the correct packet again.
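If you go that route, something like the following from the host would flip it. The service name and, especially, the exact form of the declaration the sed pattern matches are assumptions — open the file and check how $ignore_conflicts is actually initialized before editing:

```shell
# ASSUMES $ignore_conflicts is initialized to 0 on a single line of the
# script; verify the real declaration first, then adjust the pattern.
docker compose exec musicbrainz \
  sed -i 's/\$ignore_conflicts = 0/$ignore_conflicts = 1/' \
  /musicbrainz-server/admin/replication/ProcessReplicationChanges
```

Remember to revert the change once replication has caught up, since it silently discards conflicting statements.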


By the way, this advice suggests that the usage help in MBImport.pl could stand to be clarified. Right now the usage help says,

--update-replication-control
whether or not this import should alter the replication control table. This flag is internal and is only be set by MusicBrainz scripts

First, if this option is useful when adding additional tables to a replica server, then maybe “This flag is internal and is only be set by MusicBrainz scripts” is a bit too strict. Second, I don’t see anything in the usage instructions that says you can prefix “no” to option names. I realise that --nofoo is a common convention, but it is not universal; mentioning that it applies here would be clearer than not mentioning it.