TIFF and LibTiff Mail List Archive


1994.12.14 22:00 "JBIG compression", by Kyriakos Georgiou
1994.12.14 23:51 "Re: JBIG compression", by Sam Leffler
1994.12.16 18:00 "Re: JBIG compression", by Fredrik Lundh
1994.12.16 21:02 "Re: JBIG compression", by Kyriakos Georgiou
1994.12.16 21:17 "Re: JBIG compression", by Sam Leffler
1994.12.16 22:04 "Re: JBIG compression", by Rick Richardson
1994.12.16 22:18 "Re: JBIG compression (really G3/G4 decompression)", by Sam Leffler
1994.12.15 09:59 "Further G4 improvement", by Karsten Spang

1994.12.16 21:17 "Re: JBIG compression", by Sam Leffler

Also, can the current libtiff G4 decompression implementation be improved? What sort of improvements can be expected?

What's wrong with the current implementation?

The current implementation is fine, but... I have in my hands a commercial product (no source) that does decompression in noticeably less time. That suggests that there are faster ways to decompress G4 in software, hence my question.

The code distributed in the library runs on a multitude of platforms and is not tuned to any one specific platform. I can easily squeeze a noticeable improvement out of my specific code by tuning it to, say, a MIPS R4000 CPU with a particular cache configuration.

I don't want to get into algorithmic issues; my question is, is there room for >15% improvement in libtiff's G4 implementation?

Until you cite specific goals (and architecture for running the software) this question is silly. Try measuring the performance of the current algorithm before looking for improvements.

Unless you didn't understand what I am talking about - CCITT Group 4 decompression speed - I find this answer silly. What does the architecture have to do with algorithmic improvements? Perhaps you should read a book on computational theory and algorithms before deciding what counts as a silly question.

Well, try tuning algorithms for RISC style machines and you'll quickly find that significant performance differences can be due to cache effects and/or optimal organization of instructions to reflect CPU and compiler pipeline scheduling algorithms. Additional effects are often noticed when working with large images or on multi-user systems where memory contention can significantly impact performance.

Asking for >15% improvement in a vacuum typically means you're asking for a different algorithm.

In response to your 2nd comment, why would I be looking for improvements if I had not measured the performance of the current algorithm? If you remember, a few weeks ago I sent some remarks on the new improved G4 decompression running on a Pentium.

I didn't connect your previous mail to this one (there was no mention). I would have taken your comment more seriously if you'd identified what you were comparing, in what environment, cited results for both algorithms/implementations, and identified the input data. Then it might have been possible to evaluate the performance figures in such a way as to identify whether your question was answerable w/o a peek at the source code.

BTW, you weren't using the FDIV instruction in calculating the performance figures? :-)