1999.09.24 12:24 "Tiff 12 bits/pixels!!!", by Mauricio Cunha Escarpinati

1999.09.27 03:24 "Re: Tiff 12 bits/pixels!!!", by Niles Ritter

Count yourself lucky; I once had to deal with a totally goofy TIFF file (I think it was from a government agency) that was 11-bit TIFF, packed. Just imagine the fun bit-masking that stuff out....

Wow! The TIFF spec is not very clear about how such files should be handled. Could you please provide me with a sample image?

I will talk to my old friends at JPL to see if they have any simple unclassified test images...

It's been a while, but let me see if I recall the TIFF bit stuff.

If the image is bit-packed (Compression tag value 1, i.e., uncompressed) without padding to the next power of two, I think the TIFF spec is fairly clear. Page 30 of the spec indicates that the data must be stored as type BYTE, with bit fill order from high to low (only rare CCITT fax variants use the reverse bit order). TIFF byte order is ignored. So the algorithm is: write out the successive 11-bit numbers in a row of pixels with the high bit (bit10) to the left

     (0bit10, 0bit09, ..., 0bit00)   (1bit10, 1bit09, ..., 1bit00)   ...

Then fill successive bytes from high bit to low bit with these values:

     byte0 = [ 0bit10  0bit09 ... 0bit03 ]                       (base 2)

     byte1 = [ 0bit02  0bit01  0bit00  1bit10  1bit09 ... 1bit06 ] (base 2)

     byte2 = [ 1bit05 ... ]                                      (base 2)

and so on. What this means is that, de facto, non-power-of-two bit storage is a "big-endian" mechanism, because that is what you would get if you applied this scheme to a 16- or 32-bit integer. In reality, however, the spec says that the native SHORT and LONG sample types should be used for power-of-two bit depths, so Intel boxes can generate little-endian data as usual.
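
For concreteness, here is a minimal sketch in C of that MSB-first packing (this is not from the original post; the bit depth, function names, and buffer layout are my own assumptions, and it only covers a single uncompressed strip with no row padding):

    #include <stdint.h>
    #include <stddef.h>

    #define BITS_PER_SAMPLE 11   /* assumed depth; 12-bit data packs the same way */

    /* Pack samples[0..count-1] into dst, filling each byte from its high
       bit to its low bit.  dst must be zero-initialized and hold at least
       (count * BITS_PER_SAMPLE + 7) / 8 bytes. */
    void pack_bits(const uint16_t *samples, size_t count, uint8_t *dst)
    {
        size_t bitpos = 0;                     /* absolute bit offset in dst */
        for (size_t i = 0; i < count; i++) {
            uint16_t v = samples[i] & ((1u << BITS_PER_SAMPLE) - 1);
            for (int b = BITS_PER_SAMPLE - 1; b >= 0; b--, bitpos++)
                if (v & (1u << b))
                    dst[bitpos >> 3] |= (uint8_t)(0x80u >> (bitpos & 7));
        }
    }

    /* Recover the i-th sample by scanning the same bit order back out. */
    uint16_t unpack_bit_sample(const uint8_t *src, size_t i)
    {
        uint16_t v = 0;
        size_t bitpos = i * BITS_PER_SAMPLE;
        for (int b = 0; b < BITS_PER_SAMPLE; b++, bitpos++)
            v = (uint16_t)((v << 1) |
                           ((src[bitpos >> 3] >> (7 - (bitpos & 7))) & 1u));
        return v;
    }

Reading a file like Mauricio's would use the unpacking direction: call unpack_bit_sample(strip, i) for each pixel i in the row, byte 0 first, exactly as in the byte0/byte1/byte2 layout above.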

Correct me if I'm wrong.

--Niles.