Just sharing this from the blog:
I think this is pretty cool and an interesting step forward. Good work, everyone involved!
Is there any structured way to provide feedback about the evaluation of similarity results?
I think this feature may already have been implemented, but perhaps not yet released on the live site. @alastairp should be able to provide more insight.
I started running some tests using recordings from a Jazz music collection and comparing the results to the recommended playlist in Spotify, treating the recommendations as an index of similarity.
On the Spotify side, the recommendations are deeply influenced by listening statistics; this is easy to verify by accessing the same playlist while logged in and while logged out. On the MetaBrainz side, this corresponds to the planned integration of AcousticBrainz with ListenBrainz. The recommendations are also heavily based on social feedback, which on the MetaBrainz side corresponds to the planned integration of AcousticBrainz with CritiqueBrainz. In any case, the overall result on the Spotify side is that all recommendations are for recordings of the same genre and timeframe.
On the AcousticBrainz side, using only the characteristics of the signal, the results are completely different and extremely scattered across the different metrics, spanning from punk to pop. If we want to move into an evaluation phase, we should define standard test and ranking datasets to make feedback comparable, clarifying the target of each specific metric.
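For instance, a first comparable score could be as simple as the top-k overlap between two ranked recommendation lists. A minimal sketch of what I mean (the recording IDs are made up, purely for illustration):

def jaccard_at_k(list_a, list_b, k=10):
    # Fraction of items shared between the top-k of both ranked lists.
    top_a, top_b = set(list_a[:k]), set(list_b[:k])
    return len(top_a & top_b) / len(top_a | top_b)

spotify_recs = ["mbid-1", "mbid-2", "mbid-3", "mbid-4"]
ab_mfccs_recs = ["mbid-3", "mbid-9", "mbid-1", "mbid-7"]
print(jaccard_at_k(spotify_recs, ab_mfccs_recs, k=4))  # 2/6 = 0.33, modest agreement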
I would be glad to enroll in such an activity.
You should get in touch with the people working on ListenBrainz and AcousticBrainz; they are working on recommendations based on data from the various MB projects. There is some information on
But this is being worked on in various places, e.g. ListenBrainz. The best option would probably be to get in touch via IRC in MetaBrainz on irc.libera.chat. @rob, @alastairp and @lucifer are the ones you probably want to ask about this.
I’m trying to set up the playground in a Windows environment, but I get the following error:
(.ve) C:\Users\Pietro\Documents\GitHub\troi-recommendation-playground>python3 -m troi.cli --help
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.2032.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.2032.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\Pietro\Documents\GitHub\troi-recommendation-playground\troi\cli.py", line 4, in <module>
import click
ModuleNotFoundError: No module named 'click'
even though “click” has been correctly installed with requirements.txt:
(.ve) C:\Users\Pietro\Documents\GitHub\troi-recommendation-playground>python -m pip list
Package Version
-------------- ---------
atomicwrites 1.4.0
attrs 21.2.0
certifi 2021.5.30
chardet 3.0.4
click 7.1.2
colorama 0.4.4
Flask 1.1.2
Flask-Testing 0.8.0
idna 2.10
itsdangerous 2.0.1
Jinja2 3.0.1
MarkupSafe 2.0.1
more-itertools 8.10.0
packaging 21.0
pip 21.2.4
pluggy 0.13.1
py 1.10.0
pylistenbrainz 0.4.0
pyparsing 2.4.7
pytest 5.4.3
requests 2.24.0
setuptools 58.0.4
ujson 2.0.3
urllib3 1.25.11
wcwidth 0.2.5
Werkzeug 2.0.1
wheel 0.37.0
Is there anyone else using this playground who could double-check?
Probably a Python version conflict: for the pip command you call python, but for running the app you use python3.
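One quick way to confirm this (just an illustrative check) is to ask each command which interpreter it actually resolves to:

python -c "import sys; print(sys.executable)"
python3 -c "import sys; print(sys.executable)"

If the two paths differ, the packages from requirements.txt were installed into a different environment than the one running troi.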
You are right: sorry, but I’m new to Python.
(.ve) C:\Users\Pietro\Documents\GitHub\troi-recommendation-playground>python3 -m pip list
Package Version
--------------------------------- -------
backports.entry-points-selectable 1.1.0
distlib 0.3.2
filelock 3.0.12
pip 21.2.4
platformdirs 2.3.0
six 1.16.0
virtualenv 20.8.0
(.ve) C:\Users\Pietro\Documents\GitHub\troi-recommendation-playground>python -m pip list
Package Version
-------------- ---------
atomicwrites 1.4.0
attrs 21.2.0
certifi 2021.5.30
chardet 3.0.4
click 7.1.2
colorama 0.4.4
Flask 1.1.2
Flask-Testing 0.8.0
idna 2.10
itsdangerous 2.0.1
Jinja2 3.0.1
MarkupSafe 2.0.1
more-itertools 8.10.0
packaging 21.0
pip 21.2.4
pluggy 0.13.1
py 1.10.0
pylistenbrainz 0.4.0
pyparsing 2.4.7
pytest 5.4.3
requests 2.24.0
setuptools 58.0.4
ujson 2.0.3
urllib3 1.25.11
wcwidth 0.2.5
Werkzeug 2.0.1
wheel 0.37.0
So I used
python -m troi.cli --help
python -m troi.cli test
and got output without errors:
(.ve) C:\Users\Pietro\Documents\GitHub\troi-recommendation-playground>python -m troi.cli --help
Usage: cli.py [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
info Get info for a given patch
list List all available patches
playlist Generate a playlist using a patch PRINT: This option causes...
test Run unit tests
(.ve) C:\Users\Pietro\Documents\GitHub\troi-recommendation-playground>python -m troi.cli test
======================================= test session starts =======================================
platform win32 -- Python 3.9.7, pytest-5.4.3, py-1.10.0, pluggy-0.13.1
rootdir: C:\Users\Pietro\Documents\GitHub\troi-recommendation-playground
collected 36 items
troi\acousticbrainz\tests\test_annoy.py .. [ 5%]
troi\listenbrainz\tests\test_area_random_recordings.py . [ 8%]
troi\listenbrainz\tests\test_recs.py . [ 11%]
troi\listenbrainz\tests\test_stats.py ... [ 19%]
troi\musicbrainz\tests\test_ac_id_lookup.py . [ 22%]
troi\musicbrainz\tests\test_genre_lookup.py . [ 25%]
troi\musicbrainz\tests\test_mbid_mapping.py .. [ 30%]
troi\musicbrainz\tests\test_recording_lookup.py . [ 33%]
troi\musicbrainz\tests\test_related_artist_credits.py . [ 36%]
troi\musicbrainz\tests\test_year_lookup.py ... [ 44%]
troi\tests\test_entities.py .... [ 55%]
troi\tests\test_filters.py ....... [ 75%]
troi\tests\test_operations.py ...... [ 91%]
troi\tests\test_sorts.py . [ 94%]
troi\tests\test_utils.py . [ 97%]
troi\tools\tests\test_area_lookup.py . [100%]
======================================= 36 passed in 2.71s ========================================
The README.md should be reviewed.
Is there any description of how to use this tool?
Hi @PierPiero
Thanks for your interest in using troi! As you’ve discovered, it’s not really ready for general use yet, and our focus has been on other things for the last few months.
You got close with your output examples (python -m troi.cli --help).
The next command you need is
python3 -m troi.cli list
Available patches:
area-random-recordings: Generate a list of random recordings from a given area.
daily-jams: Generate a daily playlist from the ListenBrainz recommended recordings. Day 1 = Monday, Day 2 = Tuesday ...
weekly-flashback-jams: Generate weekly flashback playlists from the ListenBrainz recommended recordings.
ab-similar-recordings: Find acoustically similar recordings from AcousticBrainz
This gives you the list of patches that have been developed so far. For acoustic similarity, we have the ab-similar-recordings patch. You can use the playlist command to generate a list of tracks with this patch:
python -m troi.cli playlist --print ab-similar-recordings 968f9646-42d9-4404-9aff-e9d79c461ee0 mfccs
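If you want to feed the printed track list into your own comparison scripts, a quick sketch using only the Python standard library (this wrapper is not part of troi itself) could look like this:

import subprocess

# Run the patch and capture whatever it prints, so the track list can be
# post-processed or compared against another recommendation source.
result = subprocess.run(
    ["python", "-m", "troi.cli", "playlist", "--print",
     "ab-similar-recordings",
     "968f9646-42d9-4404-9aff-e9d79c461ee0", "mfccs"],
    capture_output=True, text=True,
)
print(result.stdout)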
I’ve updated the README in the repository to include some further hints, and have also applied your recommendation to use python instead of python3. Thanks for the report.
As the documentation says, this is mostly targeted towards developers at the moment, but we also have plans for end users to be able to use this more easily. Keep an eye out for further development.
Hi @alastairp,
many thanks for the time you spent on my requests and for your insights: the results of some tests I ran are amazing.
In case you need a beta tester, or someone to do demos or documentation for absolute beginners, I’m a candidate.