This thread has gone in a few different directions at once, but I’ll try to address all the points I can.
Audio data / AcousticBrainz
LB does not replace at all dev needs […] doesn’t provide at all any audio features
Indeed we don’t offer a replacement for that Spotify feature; quoting the blog post:
While not everything that Spotify is enshittifying has a direct replacement with ListenBrainz, we can at least offer a path forward for developers.
More specifically we are not equipped to do audio analysis, feature detection, etc. Shame as it might be, this requires a team of researchers to develop good algorithms and stay up to date with technological improvements.
The data in AcousticBrainz is not reliable, and starting from scratch with no better algorithms, then waiting years to see whether the results improve, would be a waste of our small team’s resources.
There are two issues here: the first is developing accurate audio analysis algorithms (a research domain in its own right); the second is that without the actual music files (only audio features, not audio files, are submitted to AB) we can’t re-process tracks when those algorithms improve over time, leaving you with a permanently unreliable dataset.
Recommendations
I’ll start by saying that if anybody reading this has experience in recommender systems, your help would be very much appreciated!
Where Spotify employs (or at least used to employ) a whole team of researchers in the music field, our team has a whole two people (neither of whom are researchers) working on all aspects of the LB back-end, which includes (but is not limited to) the recommendation systems.
The best we can do is read papers about recommender systems and implement them ourselves. We use Spark for this, if anyone is curious: Collaborative Filtering - Spark 3.5.3 Documentation.
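For anyone curious what collaborative filtering actually involves, here is a toy sketch of the underlying idea (alternating least squares matrix factorization) in plain NumPy. This is purely illustrative: the matrix, factor count, and regularization value are made up for the example, and LB’s real pipeline runs in Spark at a very different scale.

```python
import numpy as np

# Toy listen-count matrix: rows = users, columns = recordings.
# Zeros are unobserved entries (the user never played that track).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2      # number of latent factors (hypothetical choice)
lam = 0.1  # regularization strength (hypothetical choice)

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))  # user factor matrix
V = rng.normal(scale=0.1, size=(n_items, k))  # item factor matrix

mask = R > 0  # fit only the observed entries

for _ in range(20):
    # Fix V and solve a small regularized least-squares problem per user...
    for u in range(n_users):
        idx = mask[u]
        A = V[idx].T @ V[idx] + lam * np.eye(k)
        b = V[idx].T @ R[u, idx]
        U[u] = np.linalg.solve(A, b)
    # ...then fix U and do the same per item (hence "alternating").
    for i in range(n_items):
        idx = mask[:, i]
        A = U[idx].T @ U[idx] + lam * np.eye(k)
        b = U[idx].T @ R[idx, i]
        V[i] = np.linalg.solve(A, b)

# Predicted affinity for every (user, recording) pair, including the
# unobserved ones -- those predictions are what drive recommendations.
pred = U @ V.T
```

The hard part is not this loop; it is choosing the factors, handling implicit feedback, and evaluating whether the resulting recommendations are any good, which is exactly where a research team would help.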
Then comes evaluating the results: we get some feedback from you, our users, but not at a scale that would allow us to easily draw solid conclusions. So we slowly crawl in the dark with our hands out in front of us, improving bit by bit.
These are very complex computational problems to solve, especially without a big team, on a shoestring budget and with limited user data.
We can’t snap our fingers and “try something different that does work”. This is an unrealistic expectation.
I’m well placed to say that having opinions and ideas on how the recommendations should work does not translate to the reality of implementing them. Despite reading lots of papers on the topic myself, I can only hint at ideas for improvements, while the technical implementation is beyond my capabilities as a developer.
Current state / future improvements
Our current priority is a more technical one: improving the stability and reliability of all the different systems that compose LB. With only two people working on the back-end (and other projects at the same time), that does not leave time for reading and digesting research papers, implementing new algorithms, etc.
The recommendations will continue to improve, bit by bit over time, but I ask for everyone’s patience, or their assistance. Constructive feedback helps! And while I understand your frustration (we also want better recommendations!), it is sometimes disheartening to read harsh feedback or hand-waving suggestions.
We already have a few avenues of improvement in mind, some of which aim to mitigate the not-enough-data problem by leaning more on MusicBrainz metadata. This will take time.
Music discovery
In the meantime, on the topic of “try new things, have multiple options”, we have other music discovery tools in LB that require more active participation and don’t have the computational pitfalls I described above: