1994.12.16 22:18 "Re: JBIG compression (really G3/G4 decompression)", by Sam Leffler
Also, can the current libtiff G4 decompression implementation be improved? What sort of improvements can be expected?
What's wrong with the current implementation?
The current implementation is fine, but... I have in my hands a commercial product (no source) that does decompression in noticeably less time. That suggests there are faster ways to decompress G4 in software, hence my question.
I cannot speak for the G4 decompression in libtiff, but I can vouch for the fact that the libtiff G3 decompression is not as fast as at least my own implementation. However, I am not at liberty to provide the source code since it belongs to my employer.
In my experience with G3, I found the fastest way was to completely unroll the decompression. I tried numerous different 'elegant' algorithmic approaches over a period of about 3 years and discovered that the simple brute force unrolled approach was always fastest. The decoder itself is something like 700 lines of nested C language if-else statements.
The algorithm that Peter Deutsch uses in the 3.x versions of Ghostscript is similar to the algorithm you seem to be describing. My feeling on all this is that the current decoder in the library is reasonably portable and plenty fast enough for most people's needs. Until it's not, I have no motivation to change it. I'd welcome a replacement for it. I'd also welcome some good performance analysis of the existing code. I wouldn't stop with the G3/G4 decoder, however; hit the other ones too! (I'm sure folks would like a faster LZW decoder as well.)