1996.12.05 07:48 "(more than 8 bits) TIFF images", by Claire Rossignol

1996.12.06 00:13 "Re: (more than 8 bits) TIFF images", by Sam Leffler

The required BitsPerSample values for Grayscale and Palette-color images are 4 and 8 according to the TIFF 6.0 Specification. Does this mean that TIFF does not support BitsPerSample values greater than 8 bits for these images?

If not, do you know of any viewing tool that is able to display 12-bit TIFF images?

I use 16-bit TIFF images and have submitted changes to the shareware xv viewer to display both 16-bit RGB and grayscale images.

Notice that in the original message he's talking about Grayscale and Palette-color images, not RGB. The TIFF 6.0 standard does support 16-bit RGB images, and so does libtiff. However, the standard doesn't support 12- or 16-bit Palette or Grayscale images, and I don't know whether libtiff does.

Come on, folks, the TIFF spec is totally wishy-washy in just about any and every area. The spec defines mechanisms by which a wide variety of different format data can be stored in a file. You can trivially specify 12-bit, 16-bit, or gazillion-bit data of any Photometric interpretation (i.e. image type). The issue isn't whether you can specify or store this data; the issue is whether anyone can, or will, read it back when you ask them to. As is noted in the spec (or perhaps it's just very common knowledge), if you have data >8 bits and <16 bits then it is common practice to store the data as 16-bit samples. Most anyone that'll handle 16-bit color data will also handle 16-bit greyscale data. Given that palette images are specified as N-bit indices into 16-bit colormap tables, I don't see why anyone would want or need 16-bit indices for this sort of image.
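To make the "store it as 16-bit samples" practice concrete, here is a minimal sketch (not from the original thread) of writing 12-bit grayscale data padded into 16-bit samples with libtiff's C API. The function name, dimensions, and the left shift that scales 12-bit values into the 16-bit range are illustrative assumptions, not anything mandated by the spec.

    /* Sketch: store 12-bit grayscale data as 16-bit samples with libtiff.
     * The shift used to map 12-bit values onto the 16-bit range is one
     * common convention, not required by TIFF. */
    #include <tiffio.h>
    #include <stdint.h>
    #include <stdlib.h>

    int write_gray16(const char *path, const uint16_t *pixels12,
                     uint32_t width, uint32_t height)
    {
        TIFF *tif = TIFFOpen(path, "w");
        if (!tif)
            return -1;

        TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
        TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
        TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 16);   /* >8-bit data stored as 16-bit samples */
        TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);  /* single-sample grayscale */
        TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
        TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
        TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, 1);

        uint16_t *row = malloc(width * sizeof(uint16_t));
        if (!row) {
            TIFFClose(tif);
            return -1;
        }

        for (uint32_t y = 0; y < height; y++) {
            for (uint32_t x = 0; x < width; x++)
                row[x] = (uint16_t)(pixels12[y * width + x] << 4);  /* 12-bit value scaled to 16 bits */
            if (TIFFWriteScanline(tif, row, y, 0) < 0) {
                free(row);
                TIFFClose(tif);
                return -1;
            }
        }

        free(row);
        TIFFClose(tif);
        return 0;
    }

Writing the file really is this simple; whether a given viewer will do anything sensible with the 16-bit samples it finds there is the part Sam is pointing at.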

libtiff is typically not the issue in handling any or all of this stuff; the issue is the application that uses the library. (Unless the above reference is to the TIFFRGBAImage support, in which case please RTFM.)
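As one illustration of the application-side work being described, here is a hedged sketch of reading 16-bit grayscale samples directly with TIFFReadScanline rather than through the TIFFRGBAImage interface (which hands back 8-bit RGBA). The min/max report at the end is purely illustrative; error handling and non-grayscale layouts are omitted.

    /* Sketch: an application reading 16-bit grayscale samples itself,
     * instead of via TIFFRGBAImage (which converts everything to 8-bit RGBA). */
    #include <tiffio.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;

        TIFF *tif = TIFFOpen(argv[1], "r");
        if (!tif)
            return 1;

        uint32_t width = 0, height = 0;
        uint16_t bps = 0, spp = 0;
        TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &width);
        TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);
        TIFFGetField(tif, TIFFTAG_BITSPERSAMPLE, &bps);
        TIFFGetFieldDefaulted(tif, TIFFTAG_SAMPLESPERPIXEL, &spp);

        if (bps != 16 || spp != 1) {
            fprintf(stderr, "expected a 16-bit single-sample (grayscale) image\n");
            TIFFClose(tif);
            return 1;
        }

        uint16_t *row = (uint16_t *)_TIFFmalloc(TIFFScanlineSize(tif));
        if (!row) {
            TIFFClose(tif);
            return 1;
        }

        uint16_t lo = 0xFFFF, hi = 0;
        for (uint32_t y = 0; y < height; y++) {
            TIFFReadScanline(tif, row, y, 0);
            for (uint32_t x = 0; x < width; x++) {  /* the full 16-bit range is visible here */
                if (row[x] < lo) lo = row[x];
                if (row[x] > hi) hi = row[x];
            }
        }
        printf("sample range: %u..%u\n", lo, hi);

        _TIFFfree(row);
        TIFFClose(tif);
        return 0;
    }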

Sam