2004.01.15 21:32 "Re: [Tiff] How to determine number of colors in the colormap", by Joris Van Damme
Ah... That makes sense. I did a little experiment and ANDed each of the red, green, and blue values with 0xff to wipe out the high order bits... The values are now in the range of 0 to 255.
That's not the way to do it. Unless I'm understanding you incorrectly, you're taking the least significant bits. The most significant bits are the approximation. Marti's solution that I've pointed you to is the actual good solution. So shift right 8 bits if you don't wish to make sense of the good solution, but certainly don't AND with $ff.
One curious note though. If the largest number that could ever exist in a Red or Green or Blue component is 255 why would libtiff have defined the values coming out of a call to TIFFGetField as uint16*?
My only thoughts are maybe libtiff was written with strictly a "C" interface in mind and therefore function overloading would not have been possible. If they coded it for the largest value that would ever be returned from TIFFGetField then I guess uint16 is the right choice.
Has nothing to do with it. Consider 'inch' and 'cm'. They're both different scales, and you could standardize any such scale and a dozen others. That doesn't change the concept 'distance', though you'd have to do some rescaling to express in cm a distance that was originally expressed in inches.

It's the same with 'red'. One scale says '0' is min and '255' is max. Another says '0' is min and '65535' is max. But it is nevertheless the same 'red', and the same 'min' and 'max' concepts. You could pick any scale you want. The TIFF specification (not LibTiff) chose the 0-65535 scale, aka the 16-bit scale since it takes 16 bits to encode such a value. And that's not completely arbitrary either. This scale allows for more precision compared to the 8-bit scale, should you need that precision, and can be easily converted should you want to either encode from or decode to 8-bit values, so it's a legit choice.
Joris Van Damme