2006.03.04 03:05 "[Tiff] LZW Compression with 16-bit TIFF", by Frank Peters

2006.03.04 12:21 "Re: [Tiff] LZW Compression with 16-bit TIFF", by Joris Van Damme


> Why should 16-bit data be any different in principle than 8-bit data? A change in data length should not have any bearing on compression (at least that's what I would first think).

What Bob answered is correct in my opinion. But it's not the main issue here. The main issue is something you could call 'amount of chaos'. If the expansion to 16-bit data is just a shift left by 8 bits, then you add almost no information, no chaos, and the data should compress to a size very near that of the compressed 8-bit data. However, if the expansion to 16-bit adds enormous chaos, random noise in the lower bits for example, the compressed size will increase considerably.
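To illustrate, here's a small Python sketch of my own (using zlib/flate rather than LZW, but the principle is the same): the same 8-bit data expanded to 16-bit by a pure shift compresses almost as well as the original, while noisy low bytes blow the compressed size up.

```python
import random
import zlib

random.seed(0)
# A smooth 8-bit "image row", repeated so the compressor has patterns to find.
row8 = bytes((x * 255 // 99) for x in range(100)) * 100

# Expansion A: shift left by 8 bits (value * 256) -- adds no new information.
shifted16 = b"".join((v << 8).to_bytes(2, "big") for v in row8)

# Expansion B: same high bytes, but random noise in the low 8 bits.
noisy16 = b"".join(((v << 8) | random.getrandbits(8)).to_bytes(2, "big")
                   for v in row8)

print(len(zlib.compress(row8)))       # baseline: compresses very well
print(len(zlib.compress(shifted16)))  # also small: no chaos was added
print(len(zlib.compress(noisy16)))    # much larger: the noise is incompressible
```

The noisy variant cannot compress below roughly one random byte per sample, no matter which general-purpose compressor you throw at it.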

All compression algorithms are designed to detect more or less predictable and/or repeated patterns. They code these patterns in fewer bits. This does, however, come at a cost. The total number of possible patterns remains constant. So if you use fewer bits for some patterns... you must use more bits for others. Thus, the worst-case scenario is that compression actually increases the size.
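You can see this worst case directly (again a Python/zlib sketch of mine, with flate standing in for LZW): feed the compressor pure random bytes, and the output comes out slightly larger than the input.

```python
import random
import zlib

random.seed(1)
# 10 000 bytes of pure noise: no patterns for the compressor to exploit.
noise = bytes(random.getrandbits(8) for _ in range(10000))

compressed = zlib.compress(noise, 9)
# With nothing to exploit, block and header overhead make the output
# a little larger than the input.
print(len(noise), len(compressed))
```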

The fact that flate compression shaves off only about 2 MB of a total of 36 seems to confirm that you likely found such a rare case where LZW compresses inefficiently and actually increases the size. My guess is that the additional 8 bits in your 16-bit data are very noisy.

> As for horizontal differencing

I should double-check, but I thought horizontal differencing was only legit on 8-bit data. It recently got defined on 16-bit floating-point data, too. But I didn't think you could legitimately use it on 16-bit integer data. Not sure if I remember correctly, though.

Anyway, if the additional 8 bits of your data are very noisy, as I think they are, horizontal differencing will not be able to cure much.
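For reference, horizontal differencing on 16-bit samples would look something like this (my own Python sketch of the predictor idea, not libtiff code). Subtracting neighbours turns a smooth ramp into small repeated values, which compress beautifully; but the difference of two noisy values is just as noisy, which is why the predictor can't help here.

```python
def hdiff16(samples):
    # Replace each sample by its difference from the left neighbour,
    # modulo 2**16 (the first sample of a row is stored as-is).
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append((cur - prev) % 65536)
    return out

def undiff16(diffs):
    # Inverse: a running sum modulo 2**16 restores the original row.
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append((out[-1] + d) % 65536)
    return out

row = [i * 300 for i in range(50)]    # smooth ramp of 16-bit values
print(hdiff16(row)[1:5])              # constant differences: [300, 300, 300, 300]
assert undiff16(hdiff16(row)) == row  # round-trips exactly
```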

Joris Van Damme
Download your free TIFF tag viewer for Windows here: