TIFF and LibTiff Mail List Archive


1999.11.02 18:27 "RE: grayscale tiff", by Richard J. Otter
1999.11.02 20:53 "Re: grayscale tiff", by Chris Friesen

Many desktop scanners produce grayscale images with a 12-bit range. BitsPerSample could be set to 12, but that implies the data must be bit-packed. To keep the samples on byte boundaries, we can instead store each one in a 16-bit field with BitsPerSample set to 16. We find that more applications will open a 16-bit image than a 12-bit packed one.

The problem is that we have now lost the original data range. That range is necessary if the stored transmittance values are to be converted to optical density values. Without other information, one would have to assume that the values run from 0 to 2^16 - 1. One could look at the actual values in the image, but even that does not recover the range needed to convert to optical density.

So: does it seem reasonable to use the SMinSampleValue and SMaxSampleValue tags to state that the actual range is 0 to 4095?

What about simply shifting the original 12 bits left by four to take them out to 16 bits? This results in an image that looks right when viewed on screen. An even better method is to use

(original 12 bits << 4)+(original 12 bits >> 8)

as this allows the final image to cover the entire 16-bit range.

In my experience this gives the best results for desktop scanning and digital imaging. As for the conversion to optical density: I don't know how that works, so I can't say whether this would hold up, but it seems to make more sense than storing the 12 bits as the first 4096 of 65536 intensity levels. I would expect that you could simply treat the shifted image as a 16-bit image for the purposes of the conversion, with a range of 0 to 65535.

Any glaring errors in my reasoning?