1999.11.30 22:23 "libtiff 3.5.3 release.", by Michael L. Welles

1999.12.01 16:49 "RE: libtiff 3.5.3 release.", by Darrin Cardani

In my experience, CCITT G3 and G4 perform well only on certain kinds of images. I can't define exactly what will and will not work well (maybe somebody else knows?), but I do know that images with a lot of small, randomly placed features (like noise) will not compress well at all.
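To see this for yourself, here's a minimal, untested sketch against the libtiff 3.x C API (the file names, page size, and fill patterns are invented for illustration). It writes the same-sized bilevel page twice with G4 compression, once as long uniform runs and once as random noise, so you can compare the sizes on disk:

#include <stdlib.h>
#include <string.h>
#include <tiffio.h>

#define WIDTH    1728                /* standard G3/G4 fax line width */
#define HEIGHT   1100
#define ROWBYTES ((WIDTH + 7) / 8)

static void write_g4(const char *name, int noisy)
{
    TIFF *tif = TIFFOpen(name, "w");
    unsigned char row[ROWBYTES];
    uint32 y;

    if (!tif)
        return;
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, (uint32) WIDTH);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, (uint32) HEIGHT);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 1);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISWHITE);
    TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_CCITTFAX4);
    TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, (uint32) HEIGHT);

    for (y = 0; y < HEIGHT; y++) {
        if (noisy) {
            size_t i;                        /* every bit random: no runs */
            for (i = 0; i < ROWBYTES; i++)
                row[i] = (unsigned char) rand();
        } else {
            memset(row, 0, ROWBYTES);        /* one long white run...     */
            memset(row, 0xff, ROWBYTES / 4); /* ...after one long black run */
        }
        TIFFWriteScanline(tif, row, y, 0);
    }
    TIFFClose(tif);
}

int main(void)
{
    write_g4("runs.tif", 0);    /* should compress to almost nothing */
    write_g4("noise.tif", 1);   /* typically expands past the raw size */
    return 0;                   /* compare the two with ls -l */
}

The run-filled file should compress to almost nothing, while the noisy one will typically end up larger than the uncompressed bitmap.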

The Huffman codes used in CCITT compression are optimized for handwritten or typed pages, that is, the kinds of things people usually fax to each other. So it works best on images where each scanline is similar to the one above it in terms of where the black and white runs start and how long they are. Using it on images with Floyd-Steinberg-style dithering, for example, will generally not work very well: the error diffusion in those algorithms often makes consecutive scanlines extremely different from one another. Even patterned data, like a stipple pattern, can keep the compressor from doing well. You may still get better results than with no compression or with RLE, but not as good as with images that have the characteristics it was designed for.
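As a rough illustration of why that scanline-to-scanline similarity matters (this is a simplification, not the real G4 reference-line logic): the cheap "vertical" codes in 2-D coding can only encode a color transition that lands within 3 pixels of a transition on the line above. The hypothetical helper below scores how often that happens for a pair of scanlines; typed text tends to score near 1.0, while error-diffused dither scores much lower:

#include <stdlib.h>

/* Record the x positions where the pixel color changes.
 * pixels[] is one byte per pixel: 0 = white, 1 = black.
 * (G3/G4 treat every line as starting in white.) */
static int transitions(const unsigned char *pixels, int width, int *pos)
{
    int x, n = 0, prev = 0;
    for (x = 0; x < width; x++)
        if (pixels[x] != prev) {
            pos[n++] = x;
            prev = pixels[x];
        }
    return n;
}

/* Fraction of transitions on `cur` lying within +/-3 pixels of a
 * transition on `ref` -- roughly how often a 2-D coder could use a
 * short vertical-mode code instead of longer pass/horizontal codes. */
static double vertical_mode_hits(const unsigned char *ref,
                                 const unsigned char *cur, int width)
{
    int *rpos = malloc(width * sizeof *rpos);
    int *cpos = malloc(width * sizeof *cpos);
    double score = 1.0;
    int rn, cn, i, j = 0, hits = 0;

    if (rpos && cpos) {
        rn = transitions(ref, width, rpos);
        cn = transitions(cur, width, cpos);
        for (i = 0; i < cn; i++) {
            while (j < rn && rpos[j] < cpos[i] - 3)
                j++;                        /* both lists are sorted */
            if (j < rn && rpos[j] - cpos[i] <= 3)
                hits++;
        }
        if (cn > 0)
            score = (double) hits / cn;
    }
    free(rpos);
    free(cpos);
    return score;
}

When the transitions don't line up, the coder has to fall back on the longer horizontal and pass codes for nearly every run, which is exactly why dithered images fare so badly.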

In one CCITT compression implementation I did, I had a pointer bug during testing: I was pointing at a random place in memory instead of at the image data. I was supposed to be compressing a 26 Meg bitmap; the resulting file was random noise, and it was 192 Megs. Needless to say, I found and fixed the bug immediately! :-)

----
Darrin Cardani
dcardani@totalint.com