2006.09.17 09:03 "Re: [Tiff] is there alpha component presentin GrayscaleorPalettecolorimage", by Joris Van Damme
As to conversion to Lab, OK, I'm all for that. So now we do indeed have a 4D space. To make my point, let's encode L, a and b as their natural float values, and alpha as a float ranging from 0 to 1. So, you're saying the difference between L*a*b*alpha (0,0,0,1) (which is non-transparent black) and (1,0,0,1) (which is non-transparent almost-black) is equal to the difference between (0,0,0,1) (again non-transparent black) and (0,0,0,0) (which is total transparency).
See, we apply weights (the full transparent range counts as much as a difference of 1 in the L range). This is obvious if we pick very bad weights, as we did here. So what are good weights? Is the full transparent range as important as a difference of 100 in the L range? Or is it twice as important as a difference of 100 in the L range? Why?
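To make the weight problem concrete, here is a minimal Python sketch. The function name and the alpha_weight parameter are mine, purely for illustration; nothing in the 4D model tells us what value of alpha_weight is "right":

```python
import math

def dist(c1, c2, alpha_weight=1.0):
    """Weighted Euclidean distance in (L, a, b, alpha) space.

    L, a and b are in their natural CIELAB ranges (L in 0..100);
    alpha is 0..1, scaled by an arbitrary alpha_weight.  The choice
    of alpha_weight is exactly the open question being discussed.
    """
    dL = c1[0] - c2[0]
    da = c1[1] - c2[1]
    db = c1[2] - c2[2]
    dalpha = (c1[3] - c2[3]) * alpha_weight
    return math.sqrt(dL*dL + da*da + db*db + dalpha*dalpha)

black_opaque = (0, 0, 0, 1)
almost_black = (1, 0, 0, 1)
transparent  = (0, 0, 0, 0)

# With alpha_weight=1, a full alpha step counts the same as
# a difference of 1 in L -- clearly a very bad weight:
print(dist(black_opaque, almost_black))        # 1.0
print(dist(black_opaque, transparent))         # 1.0
# With alpha_weight=100, it counts as the whole L range instead:
print(dist(black_opaque, transparent, 100.0))  # 100.0
```

Any other value of alpha_weight is equally defensible, which is the point: the metric is only as meaningful as that arbitrary choice.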
I forgot to mention one other reason why I feel this is a challenge. Your 4D model doesn't always apply.
Would you say Lab alpha (0,0,0,0) (full transparency) is closer to (100,0,0,1) (non-transparent white), (50,0,0,1) (non-transparent mid-gray), or (0,0,0,1) (non-transparent black)? I think you'll agree full transparency is equally distant from all three. So this is not normal Euclidean 4D space.
Actually, (0,0,0,0) (full transparency) is equal to (100,0,0,0) (also full transparency), which gives rise to the convention, in most implementations, of always encoding the Lab values as (0,0,0) for full transparency. So the distance between these two encodings of full transparency is 0. Again, this is not normal Euclidean 4D space.
One could almost be tempted to use Euclidean distance in a pre-multiplied Lab alpha space, solving this last problem... But still, the arbitrary weights problem stands.
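A minimal sketch of that pre-multiplied idea (again, the names are mine and this is an illustration, not a proposal). Pre-multiplying makes every encoding of full transparency collapse to the same point, so the two encodings above get distance 0, while the alpha weight stays arbitrary:

```python
import math

def premultiply(c):
    """Scale L, a and b by alpha; every encoding of full
    transparency collapses to (0, 0, 0, 0)."""
    L, a, b, alpha = c
    return (L * alpha, a * alpha, b * alpha, alpha)

def premul_dist(c1, c2, alpha_weight=1.0):
    """Euclidean distance in pre-multiplied Lab alpha space.
    alpha_weight is still arbitrary -- that problem remains."""
    p1, p2 = premultiply(c1), premultiply(c2)
    dL, da, db = p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2]
    dalpha = (p1[3] - p2[3]) * alpha_weight
    return math.sqrt(dL*dL + da*da + db*db + dalpha*dalpha)

# The two encodings of full transparency now coincide:
print(premul_dist((0, 0, 0, 0), (100, 0, 0, 0)))  # 0.0
# But the distance from opaque black to full transparency
# still scales directly with the arbitrary alpha_weight:
print(premul_dist((0, 0, 0, 1), (0, 0, 0, 0)))       # 1.0
print(premul_dist((0, 0, 0, 1), (0, 0, 0, 0), 100))  # 100.0
```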
Joris Van Damme