Work in progress MB User Survey

This is one of the reasons the survey took so long to get off the ground. The original idea was to focus on UX questions, but since we don’t have a UX person at MeB and I couldn’t really get any feedback on which questions would best help improve UX, we decided to launch the survey as-is and just collect some general feedback, awareness of other MeB projects, and demographic info.

Hopefully this survey will be at least somewhat useful for improving MB’s UX.

Probably around two months. We considered one month, but that might not be enough, especially since a lot of people are probably on holiday during the summer.

Survey deadline

The survey is going to close this Sunday, September 3rd at 23:59 UTC.
Please complete it if you haven’t already; every bit of feedback is important!


Less than three hours left and we’re sitting at 1,195 answers. Can we get just a few more to break 1.2k?


The survey is over!

We got exactly 1,200 answers (it took some nagging and shaming to get those last few in), which is definitely a lot more than I was expecting. Thank you to everyone who participated!

The fun part of analyzing the data and preparing it for release begins now, expect to see some sneak peeks here or in the #metabrainz IRC channel.



@Leo_Verto got caught up in the currents of life, but almost exactly six years later he has sent me the survey results. I am going to work on sharing them with you soon (though I don’t feel like I have to hurry, tbh!)

They are an interesting historical snapshot, as well as still having some relevance for today. And when we run more surveys in the future we can use this to look at how things have changed.

Some of the free-text comments are also very encouraging (and funny!)

I’m giving you a ‘heads up’ now because I want to make the survey data/the .csv public, and I want to crowd-source a risk assessment for that. Can you think of any issues with making the results public? I have already anonymised (wiped) the email column for the shareable copy, and searched the free-text fields for emails/identifying information. But there may be another risk that I haven’t thought of.
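For anyone curious what that kind of scrub pass can look like, here is a minimal stdlib-only sketch: drop an identifying column entirely, then flag any remaining free-text cells that still contain email-like strings for manual review. The column names (`email`, `comments`) and the sample rows are invented for illustration; they are not the real survey schema.

```python
import csv
import io
import re

# Rough pattern for email-like strings; good enough for flagging,
# not a full RFC 5322 validator.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrub(reader_rows, drop_columns=("email",)):
    """Drop identifying columns and flag rows whose remaining cells
    still contain email-like strings, so they can be checked by hand."""
    rows = list(reader_rows)
    keep = [i for i, col in enumerate(rows[0]) if col not in drop_columns]
    out = [[rows[0][i] for i in keep]]
    flagged = []
    for line_no, row in enumerate(rows[1:], start=2):
        kept = [row[i] for i in keep]
        if any(EMAIL_RE.search(cell) for cell in kept):
            flagged.append(line_no)  # source line to re-check manually
        out.append(kept)
    return out, flagged

# Invented sample data, standing in for the real CSV.
raw = (
    "email,age,comments\n"
    "me@example.com,30,love the site\n"
    ",25,contact me at foo@bar.org\n"
)
rows, flagged = scrub(csv.reader(io.StringIO(raw)))
```

After this pass, `rows` no longer has the email column, and `flagged` lists the source lines where free text still leaked an address (line 3 in the sample), which is exactly the case that needs a human look rather than automated deletion.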


It’s honestly hard to say whether the dataset is truly clean without having seen it first-hand. I earned a baccalaureate in Applied Statistics over twenty years ago, and while I’ve spent the bulk of my time since solving engineering puzzles in IT, the times I’ve tried to leverage that education by dabbling in data science are replete with well-intentioned missteps. In the last couple of years things have become more treacherous by several orders of magnitude, with so many common software tools now including “A.I.” and amending their T&Cs to reserve the right to make use of anything you tell or show them in perpetuity (à la Adobe).

My current operating principle is that most people, myself most assuredly included, are ill-suited to anticipating the behavior of nefarious minds. Anyone for whom that is also true is, in my experience, well-advised to avoid the risk of becoming an unwitting aid to those who invariably pounce on anything of value.

Is there a particular reason you’re considering publishing the dataset itself rather than first converting it into chart/graph form so those interested can see the results in aggregate? Unless there’s some useful relationship to when each submission was received that I’m missing, I’d expect the aggregates to be the only information offering any insight, and thus all that anyone else needs to see, no?

In this case a lot of the survey questions are free-text and cannot be summarised or graphed effectively. I probably would not suggest running a survey of this size like that again (I would stick to graphable questions) - but the upside is that I found a lot of the commentary extremely interesting!
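To illustrate the contrast: closed-choice answers aggregate into a chart in a few lines, which is what makes them so much easier to publish than free text. The column and answer values below are invented examples, not actual survey data.

```python
from collections import Counter

# Invented closed-choice answers, standing in for one survey column.
answers = ["Weekly", "Daily", "Weekly", "Monthly", "Weekly", "Daily"]
counts = Counter(answers)

# A quick text "bar chart" of the aggregate; a real release would
# render proper graphs, but the aggregation step is the same.
for choice, n in counts.most_common():
    print(f"{choice:8} {'#' * n} ({n})")
```

Free-text columns have no equivalent of `Counter` over a handful of choices; each answer is effectively unique, so sharing them at all means sharing (scrubbed) raw text.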

If anyone’s wondering, I am expecting to complete the graphing/summary/make a blog post before or during the next summit :slight_smile: