One of the differences between what I'm proposing and the existing release quality marking is that the quality metric I'm proposing is an aggregation of data derived during the editing process, while the existing quality marking is a single attribute reflecting the opinion of the few individuals who were involved in setting it and approving having it set. That said, I'm really not a fan of the idea in its current form.
Perhaps there is another way to get at the problem. What if we had a process, similar to but separate from the editing process, where data can be verified? It could be done as part of the review stage of the editing process, or it could be done separately. Essentially, we would allow people to annotate an entity with validation records: annotations indicating that they have examined certain attributes of the entity and verified that the data is correct. Entities with numerous validation records would give an indication that the entity is of high quality, at least in the opinion of those who have taken the time to look into it.
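To make the idea concrete, here is a minimal sketch of how validation records might be modeled. All names here (`Entity`, `ValidationRecord`, the attribute names) are hypothetical illustrations, not an actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ValidationRecord:
    # Hypothetical record: who verified, and which attributes they examined.
    editor: str
    attributes: set

@dataclass
class Entity:
    name: str
    validations: list = field(default_factory=list)

    def add_validation(self, editor, attributes):
        # An editor asserts they examined these attributes and found them correct.
        self.validations.append(ValidationRecord(editor, set(attributes)))

    def validation_count(self, attribute):
        # Number of distinct editors who have verified a given attribute;
        # higher counts suggest higher confidence in that piece of data.
        return len({v.editor for v in self.validations
                    if attribute in v.attributes})

artist = Entity("Some Artist")
artist.add_validation("editor_a", ["name", "begin_date"])
artist.add_validation("editor_b", ["name"])
print(artist.validation_count("name"))        # 2
print(artist.validation_count("begin_date"))  # 1
```

The per-attribute counts could then be aggregated into an overall quality indicator for the entity, without any single person's opinion being authoritative.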