ListenBrainz Radio

Yes, the UI is terrible – but I also knew that people would overcome that. We’ll build at least two more interfaces for this to build out the feature set and make it easier to use.

As for similar artist relationships – the data is just on the edge of usable. At times we’ve literally been able to see our own tastes in the data, which shows that we could use more users submitting more listens; that should improve this data over time.

Especially on hard mode – some of that data could get squirrely. :slight_smile:


So, did anyone try it? How did it go? I’ve been using it for some time now and have found several new albums and even music styles, so I’m pretty happy about it. :slight_smile:

Aye to that! :slightly_smiling_face: It is an interesting tool with a lot of surprises – in a good way.

Great – share some of the playlists you generated that you found interesting, please! I would love to see what others are coming up with.


And I just added a new feature to LB Radio: User statistics as an entity. So if you’d like to get a random smattering of tracks that the LB team listens to, then enter this prompt:

user:lucifer:1:this_year user:mr_monkey:1:this_year user:rob:1:this_year

The option for each entity must specify one of the following time_ranges: week, month, quarter, half_yearly, year, all_time, this_week, this_month, this_year.
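For anyone scripting prompts, here’s a tiny hypothetical helper (not part of ListenBrainz itself) that assembles this kind of multi-user prompt and validates the time_range against the list above:

```python
# Hypothetical helper for building a multi-user LB Radio prompt.
# The prompt grammar (user:NAME:WEIGHT:TIME_RANGE) follows the example
# above; the helper itself is just an illustration.

VALID_TIME_RANGES = {
    "week", "month", "quarter", "half_yearly", "year",
    "all_time", "this_week", "this_month", "this_year",
}

def user_prompt(users, weight=1, time_range="this_year"):
    """Build a prompt that mixes the listen stats of several users."""
    if time_range not in VALID_TIME_RANGES:
        raise ValueError(f"unknown time_range: {time_range}")
    return " ".join(f"user:{name}:{weight}:{time_range}" for name in users)

print(user_prompt(["lucifer", "mr_monkey", "rob"]))
# user:lucifer:1:this_year user:mr_monkey:1:this_year user:rob:1:this_year
```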

We think this could make for a nice playlist created from all of your similar users… What can you think of creating with this?

Let’s see some interesting playlists!


just did a bit of experimenting with this, and it seems to be giving pretty decent results, and hopefully could be made better once tagging recordings with more than just genres is more widespread~

I don’t immediately have any ideas on the new user: entity, though I wonder if it would be possible to implement a shorthand for user:[self] for your own listens?

I also attempted a “Salt and Pepper Diner” experiment, trying to recreate the experience from that one John Mulaney bit (which I don’t know is even possible with this tool, but figured it’d be a fun test, lol), but had an issue… the JSON POST and error message:

        "mode": "easy",
        "prompt": "tag:(salt and pepper diner):20 tag:(covered by glee):1"
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/datasethoster/", line 191, in web_query_handler
    data = query.fetch(arg_list)
  File "/hoster/./", line 47, in fetch
    self.playlist_name = playlist.playlists[0].name
AttributeError: 'NoneType' object has no attribute 'playlists'

I did just add the tag I queried first, so that may be part of the issue? not sure… for the second tag, I just picked a random tag that It’s Not Unusual had, lol

edit: realized I had the wrong number of plays, tried again, and got the same error message

a few further questions…

I read that each part of the prompt gets split into “streams” – do these streams get shuffled in the resulting playlist?

and for the time_ranges, is the difference between week and this_week simply the previous 7 days and since last Sunday respectively? if so, that might be worth noting in the docs


Thanks for pointing out this issue – fixed now.

Yes. That is covered in the docs – can you please read the docs again and see how we can improve this to make it more clear?

Good idea, let me do that.

I’ll see about the [self] bit – I’d prefer to simply do “user:” and have that imply ‘me’. But I can’t implement that yet, since where this is hosted doesn’t know who the current user is. Once we move this to a better UI, it will know users and be able to filter out hated tracks and tracks you recently listened to.

Thanks for the feedback and thanks for sharing – I’ll give the third playlist a listen!

this part in particular:

Each term generates a stream of recordings and each of the streams are then interleaved to make a single playlist.

…sounded to me like it takes X tracks from stream one, Y tracks from stream two, etc. and interleaves the streams, not the recordings from those streams. now that I’ve reread it, I suppose it could be read either way, but perhaps an improvement might be:

Each term generates a stream of recordings and recordings from each of the streams are then interleaved to make a single playlist.

apart from that, it reads pretty well to me~ I do like the recent improvement of splitting out the options for each of the terms~
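In case it helps the docs discussion, here is one plausible reading of that sentence as code – a round-robin interleave of the recordings from each stream (my sketch of the described behaviour, not necessarily what LB Radio actually does internally):

```python
from itertools import chain, zip_longest

def interleave(*streams):
    """Round-robin recordings from each stream into one playlist.

    Streams may have different lengths; exhausted streams are skipped.
    """
    sentinel = object()
    merged = chain.from_iterable(zip_longest(*streams, fillvalue=sentinel))
    return [rec for rec in merged if rec is not sentinel]

print(interleave(["a1", "a2", "a3"], ["b1", "b2"]))
# ['a1', 'b1', 'a2', 'b2', 'a3']
```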

that would work too, and that might be less stress for [self] too, (they don’t exist yet tho, so they don’t have anything to worry about :joy: )

I’ll try and remember to create a ticket for this tonight, but once we are able to connect these to users, a useful feature would be to filter the results by collection, namely the two owned collection types for owned releases and owned recordings. this would make this feature more useful for those of us who don’t stream music, provided we keep our collections up-to-date… :wink:

Adopted, thanks!

This feels pretty cumbersome, having to keep a collection up to date. To solve this problem, I have finally written the content resolver that resolves a JSPF playlist to a local collection:

It still needs better packaging and docs, but I imagine that this could be installed as a handler for .jspf files. It then resolves the files to the local collection, generates an m3u file, and then calls the system handler for .m3u files, which should in theory open the playlist in your local player.
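For the curious, the core resolution step could look roughly like this minimal sketch (my illustration of the idea, not the actual content resolver code; the `local_index` lookup by lowercased artist/title is an assumed simplification, and building that index from your music folder is left out):

```python
import json

def resolve_jspf(jspf_text, local_index):
    """Resolve a JSPF playlist against a local collection, emit M3U text.

    local_index maps (artist, title) -> local file path.
    Tracks not found locally are simply skipped.
    """
    playlist = json.loads(jspf_text)["playlist"]
    lines = ["#EXTM3U"]
    for track in playlist.get("track", []):
        key = (track.get("creator", "").lower(), track.get("title", "").lower())
        path = local_index.get(key)
        if path:  # only keep tracks we actually own
            lines.append(path)
    return "\n".join(lines)
```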


You should try that one again and read the feedback. :slight_smile:

One is worth mentioning: it has the prompt “artist:(Jorge Negrete):5”, mode medium. These inputs produce a list with only very similar artists: ‘artist: using seed artist Jorge Negrete and similar artists: José Alfredo Jiménez, Vicente Fernández, Pedro Infante, Miguel Aceves Mejía’. (ListenBrainz)
Compare this with the result when changing Jorge Negrete to Aretha Franklin: ‘artist: using seed artist Aretha Franklin and similar artists: Stevie Wonder, Otis Redding, Marvin Gaye, The Beach Boys, The Supremes, The Rolling Stones, Ray Charles, The Temptations, Elvis Presley, Bob Dylan, Al Green, James Brown’. (ListenBrainz) The results are so different that one could ask whether the same algorithm was really used.
The other difference is how often the seed artist occurs in the resulting playlist. Jorge Negrete appears ten times, while Aretha Franklin appears only four times in 50 tracks. Wouldn’t one expect the artist the radio was seeded from to occur in more than 8% of the tracks?

This tool is really interesting to play with! :slightly_smiling_face:


My feedback is around the wording for the Easy, Medium, Hard modes in the docs - the description makes sense (e.g. the video game analogy) but for the docs I would be interested in seeing it expanded to what the mode actually does. Does it simply expand the tags it’s pulling ‘out’ one entity (recording > release > artist), or does it do more?

Because I’m after quite curated discovery rather than ‘I just want a fun playlist for the day’ it would be super useful to know how it’s changing results (and which I should use depending on what I’m trying to generate).

(always happy to help write/edit stuff too btw rob, if writing docs is putting you to sleep)

To me it seems, by looking at the results, that Aretha Franklin is more ‘mainstream’, which is why her playlist is serving up a lot of artists that aren’t super similar, but similar enough, and also found ‘mainstream’ success. Popular artists are going to converge in this way unless some tricky code is written around the problem… I know @rob has already pondered this topic in the past, curious to hear his thoughts!


I’ve found that for niche genres/combinations, the easy/medium/hard modes give quite unpredictable changes. Sometimes hard is more varied, sometimes less, and it’s not necessarily ‘harder’. Probably because of how the small pool of users editing that stuff has added tags. I don’t think this is a problem, I’m expecting those quirks, but interesting to note!


australia, grindcore on medium mode
australia, grindcore on hard mode
(hard has more repetition, slightly ‘harder’ imo)

new zealand, grindcore on medium mode
new zealand, grindcore on hard mode
(hard has less repetition, slightly less ‘hard’ imo)

Another thought: I don’t know if it will be too heavy on the servers to do this, but when a search fails to generate a playlist, rather than fail, it could re-attempt the playlist at the next ‘difficulty’ setting (as these are more likely to succeed), with a message to accompany, e.g: We weren’t able to generate a playlist on Easy mode, but have successfully generated a Medium mode playlist.
Alternatively, maybe a shortcut like in the User Feedback Panel like:

  • tag ‘new zealand, grindcore’ generated too few recordings for easy mode. Click here to generate a medium mode playlist
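The suggested fallback could be sketched like this, where `generate` stands in for whatever call produces a playlist (all the names here are hypothetical – this is the proposed behaviour, not current LB Radio behaviour):

```python
MODES = ["easy", "medium", "hard"]

def generate_with_fallback(generate, prompt, mode="easy"):
    """Try the requested mode first, then escalate until a playlist appears.

    `generate(prompt, mode)` returns a playlist, or None on failure.
    Returns None only if every remaining mode fails too.
    """
    for m in MODES[MODES.index(mode):]:
        playlist = generate(prompt, m)
        if playlist:
            if m != mode:
                print(f"No {mode} mode playlist; generated one on {m} mode instead.")
            return playlist
    return None
```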

Interesting Taylor Swift appearance on this playlist prompt : P
A few other outliers when I generate similar playlists, or regenerate the same one.

Re-running this a few times it seems the first track is always an outlier, and then another one is interspersed in the next ten or so. Are we trying to offer something different on purpose, for the first track?

When I generate with ‘recs:aerozol + genre’ instead of ‘user:aerozol + genre’ it no longer has an outlier at #1, but it still has tracks spread about that definitely don’t have the included prompt tag attached / are wildly different (I like the variety myself, but if someone asked me for a playlist of all my fav grindcore, I would have to do some editing)

Overall: this tool just gets cooler :star_struck:
I’m starting to get overwhelmed with fun options tbh, and haven’t dug very deep… looking forward to monkey whipping up a GUI!

I’ve updated the docs to improve this description:

Along with a prompt, the user will need to specify which mode they would like to use to generate the playlist: easy, medium or hard.

The core functionality of LB radio is to intelligently, yet sloppily, pick from vast lists of data to form pleasing playlists. Almost all of the data sources (similar artists, top recordings of an artist, user stats, etc) are ordered lists of data, with the most relevant data near the top and less relevant data near the bottom. Broadly speaking, the three modes divide each of these datasets into three chunks: easy mode will focus on the most relevant data, medium on the middle relevant data and hard on the tail end.

For almost all of the source entities (see above), this applies in a pretty straightforward manner: whenever an ordered list of data exists, we use the modes to inform which section of data we look at. However, the tag element is an entirely different beast. Roughly speaking, easy mode attempts to fetch recordings tagged with the given tag, medium mode picks tags from release/release-group tags and hard mode picks tagged recordings from artists. In reality there are a lot more nuances in this process. What if there aren’t enough tracks to make a reliable easy playlist? Then we don’t make one, and we let the user know that they could try again on medium mode and get a playlist. There are other heuristics baked into the tag query that are not easy to describe and quite likely will change in the near future as we respond to community feedback. Once we’re comfortable that the tag entity is working well, we will improve these docs.

This idea of modes comes from video games, where players can choose how hard the game should be to play. In the context of LB Radio, the resultant playlist will also be more work to listen to the harder the mode. Which mode to use is entirely up to the user – easy is likely going to create a playlist with familiar music, and a hard playlist may expose you to less familiar music.

Which is now live on the docs page. What do you think?
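Taking the three-chunk description above literally, the mode selection amounts to something like this sketch (the real heuristics are fuzzier, especially for tags):

```python
def mode_slice(ranked, mode):
    """Pick the slice of a relevance-ranked list matching the mode.

    A literal reading of the docs: easy takes the most relevant third,
    medium the middle third, hard the tail end.
    """
    n = len(ranked)
    start = {"easy": 0, "medium": n // 3, "hard": 2 * n // 3}[mode]
    end = {"easy": n // 3, "medium": 2 * n // 3, "hard": n}[mode]
    return ranked[start:end]

print(mode_slice(list(range(9)), "medium"))
# [3, 4, 5]
```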


As for Taylor Swift, yes we have a problem in our similar artists data that overly emphasizes popular artists. We’ve done some very heavy handed counter-acting of this, but we really need to solve this the right way as described by our friends.

For the artist entity, the first tracks are special – they should correspond to the seed artists. However, for all other entities the patterns you might be seeing are purely random.

Finally, tag data can be amazing one moment and absolutely frustrating the next. This all depends on who has bothered to enter these tags into MusicBrainz. Some genres have people who add tags, and those will work better in LB Radio; other genres have fewer, and those will work less well. I’ve done a lot of work to make sure that the holes in our data don’t cause shitty playlists to be created, but that isn’t always possible.

Going forward we plan to improve our datasets as follows:

  1. Calculate similar artist data from the MLHD+ dataset, which will drastically improve the similar artist data.
  2. While the above will make it better, it will make popular artist similarities worse, as already discussed, so we will need to address this as well.
  3. Gather more data from more users. We have rather few users right now and I am amazed at how good our data is. If we can triple the number of users, we’ll have much better data.
  4. Add features to LB to make it super easy to add tags to MB. Once people know that adding tags to MB will make LB radio better, we will see the number of tags in MB grow significantly.

I can’t wait until we get past these improvements. LB Radio is really going to kick ass then!


And it got cooler again in the last two days!

  1. We can now use user recommended tracks as a source: “recs:rob” or “recs:rob::unlistened” for a “discovery” type playlist.
  2. We can now use user stats: “user:rob” will pick from the top recordings of the user rob for the past month. “user:rob::year” will pick from their top recordings of the last year.
  3. Weights are now optional in the prompt syntax: “artist:Portishead:1:hard” is the same as “artist:Portishead::hard”. User and recs prompts will in the future be able to skip the user name – once LB radio is properly integrated into LB. For now it doesn’t work, but it will soon. (This will make “user:” the simplest prompt to make a safe playlist for yourself. :slight_smile:)

Docs are updated with all of the above. Have fun and let me know what you think!


Excellent, this part: “Roughly speaking, easy mode attempts to fetch recordings tagged with the given tag, medium mode picks tags from release/release-group tags and hard mode picks tagged recordings from artists.” is exactly what I wanted to know, I personally don’t need anything more precise. Thanks!

In this case Taylor Swift doesn’t have the ‘grindcore’ tag at any level from the recording up (I had checked), and there is no seed artist. She is a popular artist and I have listened to her a bit, but without the grindcore tag present I wouldn’t have thought she would be included? Or is there a bit of fuzziness where sometimes it won’t require the tag to be there? (prompt was ‘user:aerozol tag:(grindcore)’)

Amazing work as always

I think it might be because user:aerozol tag:(grindcore) pulls from your history in one stream and pulls from the grindcore tag in another? that might explain the other outliers?


Ohhh, that would make sense, I just assumed that it’s operating with AND by default, like the genres. But yeah it probably is interleaving them, derp.

Maybe more and/or options would be useful! In this case I thought it would be cool to generate playlists in a specific genre based on my listens/library, which would be fun to share and also get an insight into my listening habits in a genre. And also to generate playlists from my recs in a specific genre, which would be an excellent discovery method (my daily jams are great, but a real mixture of vibes).


Understood! :slightly_smiling_face: Before, I thought the modes worked the other way around. Now I will use the hard mode more often since:

That is exactly what I hope for when creating a playlist.

Thank you for clarifying this! :+1: