In my work on developing VideoBrainz concepts I came up with an idea that I think will have broader appeal than just VideoBrainz - so I’m sharing it here. Perhaps this, or something like it, has been proposed before, but I haven’t been able to find it.
I propose that we add two metrics to all entities. These would be best done in combination with NES:
Completeness. This is a measure of how complete the data on an entity is. Not every attribute of an entity should necessarily contribute to the completeness metric. Think of it this way: if the absence of the data means the entity is considered incomplete, then that data should count. Also, the attributes that do count don’t all need to be weighted the same. This metric would be fairly objective, and it could be very useful in focusing on what needs the most work - the lower the completeness score, the more folks should try to work on that entity.
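To make the idea concrete, here’s a minimal sketch of a weighted completeness score. The attribute names and weights are invented for illustration - the real set of “counting” attributes and their weights would be decided per entity type:

```python
# Hypothetical attribute weights for one entity type; only attributes
# whose absence makes the entity "incomplete" appear here at all.
WEIGHTS = {
    "title": 3.0,
    "release_date": 2.0,
    "director": 2.0,
    "runtime": 1.0,
}

def completeness(entity: dict) -> float:
    """Score in [0, 1]: the weighted fraction of counted attributes present.

    An attribute counts as present only if it has a truthy value,
    so empty strings and None are treated as missing.
    """
    total = sum(WEIGHTS.values())
    present = sum(w for attr, w in WEIGHTS.items() if entity.get(attr))
    return present / total

# An entity missing its two lower-weight attributes scores 5/8:
print(completeness({"title": "Example", "release_date": "2020",
                    "director": "", "runtime": None}))  # 0.625
```

The weights are what let the community say that, for example, a missing title matters more than a missing runtime.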
Quality. This is a subjective score most of the time. How this is calculated is still fuzzy in my mind, but I would tie it to reviews. The basic idea I’m shooting for is that reviewers would provide a quality ranking, but the final quality metric wouldn’t simply be an average of those votes. I think that reviewers should also have a ranking based upon how respected they are in the community, and the number of times the data for an entity has been reviewed should be factored in. So, for example, an entity with an average ranking of 8 (out of 10) but only a single review should have a lower quality metric than an entity with the same average ranking backed by hundreds of reviews.
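One standard way to get that behavior is a reputation-weighted Bayesian average: pull every score toward a prior mean, with a strength that fades as trusted review weight accumulates. The prior values and reputation numbers below are assumptions for illustration, not part of the proposal:

```python
# Assumed prior: treat an unreviewed entity as "typical" (5/10), and give
# that prior the pull of 20 reputation units' worth of reviews.
PRIOR_MEAN = 5.0
PRIOR_WEIGHT = 20.0

def quality(reviews: list[tuple[float, float]]) -> float:
    """reviews: (score out of 10, reviewer reputation weight) pairs.

    Returns a Bayesian average: with few or low-reputation reviews the
    result stays near PRIOR_MEAN; with many trusted reviews it approaches
    the reputation-weighted mean of the scores.
    """
    rep_total = sum(rep for _, rep in reviews)
    weighted_sum = sum(score * rep for score, rep in reviews)
    return (PRIOR_WEIGHT * PRIOR_MEAN + weighted_sum) / (PRIOR_WEIGHT + rep_total)

# One review of 8 vs. a hundred reviews of 8, all at reputation 1.0:
one = quality([(8.0, 1.0)])          # ~5.14, still close to the prior
many = quality([(8.0, 1.0)] * 100)   # 7.5, much closer to the raw average
print(one, many)
```

This gives exactly the asymmetry described above: the same average score earns a higher metric when it rests on many reviews, and a respected reviewer (higher reputation weight) moves the score more than a newcomer.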