MusicBrainz server troubleshooting help

Hello – I have a local MusicBrainz server instance set up via the main docker container, and I’m trying to query the server with Beets (a file renaming and metadata management tool). The idea is to be able to perform more than 1 query per second by using a local instance

https://beets.readthedocs.io/en/latest/reference/config.html#musicbrainz-options
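
For anyone trying the same thing, this is roughly the config involved – a minimal sketch, assuming the musicbrainz.host and ratelimit options documented at that link, and the default config path on Linux:

# append a musicbrainz section to beets' config.yaml (path may differ on your system)
cat >> ~/.config/beets/config.yaml <<'EOF'
musicbrainz:
    host: localhost:5000
    ratelimit: 10
EOF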

The trouble I’m having is that “search by index” does not return any results … this is the case both in the web browser (localhost:5000) and in Beets.

I can see the request in the musicbrainz-search docker logs and have looked at the logs for the musicbrainz server but don’t see anything out of the ordinary there.

musicbrainz-search
2021-07-15 00:01:44.521 INFO (qtp898557489-139) [ x:artist] o.a.s.c.S.Request [artist] webapp=/solr path=/select params={q=miles+davis&start=0&rows=25&wt=mbjson} hits=0 status=0 QTime=11
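
A quick way to test this outside the browser and Beets – just a sketch, assuming the standard /ws/2 search endpoint on the local instance:

# query the local server's indexed search directly
curl 'http://localhost:5000/ws/2/artist?query=miles%20davis&fmt=json'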

I tried asking this question in the IRC chat but someone there told me to post here instead.

If I could get some help or tips that would be great
Thanks


Running into the same issue over here… Did you find any solution for getting the search working with the docker version?

Finished with school and am back trying to figure this out … here’s a summary of what I know so far:

I followed the instructions for musicbrainz-docker here and went the route of downloading the index files, but I think I’m missing a step to initialize things

I tried running the utility below and noticed that the indexer was not running. Starting it didn’t make a difference, though – from the localhost web instance, search-by-indexes still doesn’t return anything

Bobcat (master) admin$ sudo ./check-search-indexes all
check-search-indexes: cannot count indexed documents:  the Docker Compose service 'indexer' is not up
Try 'docker-compose up -d' from '/home/stah0121/proj/music_tests/musicbrainz_srv/musicbrainz-docker'
Bobcat (master) musicbrainz-docker$ sudo docker-compose up -d
musicbrainz-docker_mq_1 is up-to-date
musicbrainz-docker_redis_1 is up-to-date
musicbrainz-docker_search_1 is up-to-date
musicbrainz-docker_db_1 is up-to-date
musicbrainz-docker_musicbrainz_1 is up-to-date
Starting musicbrainz-docker_indexer_1 ... done
Bobcat (master) musicbrainz-docker$ 
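
In case it helps anyone, the indexer’s own logs can be tailed with plain docker-compose (nothing musicbrainz-specific here):

sudo docker-compose logs --tail=50 indexer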

After looking at the script and the output below, I’m wondering if all of the “cores” should be reporting OK? I guess I don’t fully understand the comparison between “indexed docs” and “existing docs”

Would really appreciate some help on this issue … I’ll keep poking around and will update here if I find anything

Bobcat (master) admin$ sudo ./check-search-indexes all
CORE           STATUS  INDEX  DB
editor         OK      0      /0
instrument     --      0      /1007
series         --      0      /13596
place          --      0      /47851
event          --      0      /52191
tag            --      0      /109442
area           --      0      /118536
label          --      0      /204813
cdstub         --      0      /288965
annotation     --      0      /431335
work           --      0      /1521752
artist         --      0      /1844598
release-group  --      0      /2300865
release        --      0      /2922841
url            --      0      /7896372
recording      --      0      /25081689
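
From reading the script, my understanding is that INDEX is the document count reported by each Solr core and DB is the row count of the matching database table. Roughly this comparison, done by hand – a sketch that assumes the musicbrainz-docker default service names, Solr port, and DB credentials, and that curl is available in the search container:

# documents currently in the artist Solr core
sudo docker-compose exec search curl -s 'http://localhost:8983/solr/artist/select?q=*:*&rows=0'
# rows in the corresponding database table
sudo docker-compose exec db psql -U musicbrainz -d musicbrainz_db -c 'SELECT count(*) FROM musicbrainz.artist;'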

So I ran the script to delete all the index files [delete-search-indexes all] and then tried to build the index files from scratch to see if that would work, but it didn’t. Before generating the index files I changed the memory settings at the link referenced by the docker page, trying both 4G and 16G, but the same error was hit in both cases.
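
For reference, the delete step was just the admin script named above, run from the same directory as check-search-indexes:

sudo ./delete-search-indexes all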

Bobcat (master) admin$ sudo docker-compose exec indexer python -m sir reindex
[sudo] password for stah0121: 
2021-12-02 02:53:03,874: Checking whether the versions of the Solr cores are supported
2021-12-02 02:53:03,970: Importing annotation...
2021-12-02 02:56:47,294: Successfully imported annotation!
2021-12-02 02:56:49,806: Importing area...
2021-12-02 02:57:37,639: Successfully imported area!
2021-12-02 02:57:37,852: Importing artist...
2021-12-02 03:12:09,326: Successfully imported artist!
2021-12-02 03:12:09,553: Importing cdstub...
2021-12-02 03:15:11,058: Successfully imported cdstub!
2021-12-02 03:15:11,206: Importing editor...
2021-12-02 03:15:11,305: Successfully imported editor!
2021-12-02 03:15:11,387: Importing event...
2021-12-02 03:16:48,234: Successfully imported event!
2021-12-02 03:16:48,550: Importing instrument...
2021-12-02 03:16:53,114: Successfully imported instrument!
2021-12-02 03:16:53,614: Importing label...
2021-12-02 03:19:07,820: Successfully imported label!
2021-12-02 03:19:08,008: Importing place...
2021-12-02 03:20:04,947: Successfully imported place!
2021-12-02 03:20:05,128: Importing recording...
2021-12-02 03:23:22,293: Failed to import recording with id 156221
2021-12-02 03:23:22,298: (psycopg2.OperationalError) server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
 [SQL: 'SELECT musicbrainz.recording_first_release_date.recording AS musicbrainz_recording_first_release_date_recording, musicbrainz.recording_first_release_date.year AS musicbrainz_recording_first_release_date_year, musicbrainz.recording_first_release_date.month AS musicbrainz_recording_first_release_date_month, musicbrainz.recording_first_release_date.day AS musicbrainz_recording_first_release_date_day \nFROM musicbrainz.recording_first_release_date \nWHERE musicbrainz.recording_first_release_date.recording = %(param_1)s'] [parameters: {'param_1': 156221}]
Traceback (most recent call last):
  File "sir/indexing.py", line 262, in _query_database
    data_queue.put(row_converter(row))
  File "sir/schema/searchentities.py", line 265, in query_result_to_dict
    data["_store"] = tostring(self.compatconverter(obj).to_etree())
  File "sir/wscompat/convert.py", line 1029, in convert_recording
    if obj.first_release is not None and obj.first_release.date is not None:
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 237, in __get__
    return self.impl.get(instance_state(instance), dict_)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 583, in get
    value = self.callable_(state, passive)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/strategies.py", line 544, in _load_for_state
    return self._emit_lazyload(session, state, ident_key, passive)
  File "<string>", line 1, in <lambda>
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/strategies.py", line 588, in _emit_lazyload
    return loading.load_on_ident(q, ident_key)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 219, in load_on_ident
    return q.one()
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2759, in one
    ret = list(self)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2802, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2817, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
    context)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
    exc_info
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
.
.
.
    OperationalError: (psycopg2.OperationalError) FATAL:  the database system is in recovery mode

    2021-12-02 04:38:56,484: Failed to import recording with id in bounds (220891, 234738)
    2021-12-02 04:38:56,484: (psycopg2.OperationalError) server closed the connection unexpectedly
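
“The database system is in recovery mode” means Postgres itself crashed and restarted mid-reindex – on large cores like recording this is often the kernel’s OOM killer. A couple of hedged checks from the compose project:

# look for a crash/restart in the Postgres container's logs
sudo docker-compose logs --tail=100 db
# look for OOM kills on the host
dmesg | grep -i 'killed process'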

For anyone still following this – I managed to figure it out

Although my attempt to manually build the search indexes failed, it looks like it made it far enough for some searching to succeed.

Bobcat (master) admin$ sudo ./check-search-indexes all
CORE           STATUS  INDEX    DB
editor         OK      0        /0
instrument     OK      1007     /1007
series         --      0        /13596
place          OK      47851    /47851
event          OK      52191    /52191
tag            --      0        /109442
area           OK      118536   /118536
label          OK      204813   /204813
cdstub         OK      288965   /288965
annotation     OK      431335   /431335
work           --      0        /1521752
artist         OK      1844598  /1844598
release-group  --      0        /2300865
release        --      0        /2922841
url            --      0        /7896372
recording      --      111710   /25081689

I’m guessing that searching cores with status ‘--’ will give you unreliable results, because not all the index files are included. Perhaps search-by-indexes is still a bit experimental, or I have an outdated version of something.


Since I’ve been using this docker version and downloading the dumps (data & indexes), I have never seen all cores report the status “OK”. There are always some (minor) differences.

BUT:
As long as you see a zero (0) in the INDEX column, you don’t have a single entry for that index.
In your example you have no index for series, tags, works, release-groups, releases, or urls.

Maybe you should check whether your dump source server actually has all of these indexes.

You can find the various dump servers here:
https://musicbrainz.org/doc/MusicBrainz_Database/Download
For example, if you manually have a look inside the “current” dump directory at
http://ftp.musicbrainz.org/pub/musicbrainz/data/search-indexes/20211201-114001/
you can see that all of your missing index dump files are there.
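
To check a dump directory from the command line (the directory name below is just this example; newer dumps will differ):

curl -s http://ftp.musicbrainz.org/pub/musicbrainz/data/search-indexes/20211201-114001/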

If you download and import these indexes, there should be NO errors like the one you reported a day ago. Unfortunately, I can’t tell you exactly what is wrong. If you don’t find the reason, I would suggest starting over and installing the docker image again from scratch.


What you can try instead of using the above command (at least for the first execution after the initial installation):

sudo docker-compose run --rm musicbrainz fetch-dump.sh search
sudo docker-compose run --rm search load-search-indexes.sh

This will download the latest data dump AND the matching pre-built search indexes based on that dump. (The command you used builds the indexes manually – and locally – from the installed database and existing data.)
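
Afterwards you can verify the cores with the same admin script as above:

sudo ./check-search-indexes all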