To quote the very chapter you referred to:
So, semantics are the very thing that defines a character according to Unicode. Contextual meaning and other attributes are not part of the character's definition. I don't think your example with the h relates to semantics.
I honestly think that making a distinction between a letter and a number represented by the same glyph (to use their nomenclature) but with very clear and incompatible definitions (a whole different conceptual order, in this case) is essential to their mission. More so when they have things like α, ⍺, 𝛂, 𝛼, 𝜶, 𝝰, 𝞪, the last of them literally defined as mathematical sans-serif bold italic small alpha; or ² and a whole bunch of super- and subscript characters with no real distinction from their regular counterparts.
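Just to illustrate how distinct those definitions actually are, here is a minimal Python sketch (assuming Python 3 and the standard unicodedata module; the character list is the one from the paragraph above) that prints the formal name Unicode assigns to each of those code points:

```python
import unicodedata

# Each of these glyphs looks like an alpha (or a two), but Unicode gives
# every one of them its own code point and its own formal name.
for ch in "α⍺𝛂𝛼𝜶𝝰𝞪²":
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Expected output (abridged):
#   U+03B1  GREEK SMALL LETTER ALPHA
#   U+237A  APL FUNCTIONAL SYMBOL ALPHA
#   ...
#   U+1D7AA MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA
#   U+00B2  SUPERSCRIPT TWO
```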
Of course the kerning, the height, and the other visually appealing advantages depend on the font or the renderer used. That is an advantage derived from having a character with an unambiguous value/meaning. But this unambiguity is the real reason why using these characters is an improvement over using letters.
And here's one more: none of these edits would be destructive, because if this ever settles definitively against using these Unicode numerals, it would take just a few substitution rules to change them back (whereas the inverse conversion would be impossible to automate).
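To show what those substitution rules could look like, here is a small Python sketch (again just an illustration, assuming the standard unicodedata module): NFKC compatibility normalization already folds superscripts and the mathematical letters back into their plain counterparts, so the "downgrade" direction is trivial, while the reverse would require knowing, for every plain digit, whether it was meant as a superscript.

```python
import unicodedata

def to_plain(text: str) -> str:
    """Fold compatibility characters (superscripts, mathematical
    letters, etc.) back to their plain equivalents."""
    return unicodedata.normalize("NFKC", text)

print(to_plain("x²"))   # -> "x2"
print(to_plain("𝞪"))    # -> "α"
# The inverse ("x2" -> "x²") cannot be automated: a plain "2" carries
# no marker saying whether it was originally meant as a superscript.
```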
The Consortium has its rules and its reasons, but the fact is that these characters exist, are widely supported, offer clear advantages we could use in the database, and are easily convertible if it comes to that.