2008.08.22 16:52 "Re: [Tiff] creating sparse files......", by Rogier Wolff

On Fri, Aug 22, 2008 at 11:40:06AM -0500, Bob Friesenhahn wrote:

I can't say. It does not really matter how many such applications there are (just takes one) if the code is to be placed into libtiff itself. Libtiff is a high-performance library which is expected to go super fast as long as compression is not used.

Absolutely. And I'm suggesting making it faster. If there are /any/ nonzero bytes, the "isallzeroes" function is likely to terminate quickly: it isn't likely that all the nonzero bytes sit at the very end of the buffer.
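
Something along these lines is what I have in mind. This is just a sketch of mine, not code that libtiff ships:

    #include <stddef.h>

    /* Return 1 iff the whole buffer is zero.  It bails out at the
     * first nonzero byte, so buffers containing real data cost
     * almost nothing to test. */
    static int
    isallzeroes(const unsigned char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len; i++)
            if (buf[i] != 0)
                return 0;
        return 1;
    }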

So, why does it "only take one"?

If we make several applications faster and one slower, why is this bad?

A performance change, or any change at all, is usually a tradeoff: you make some (unlikely) path slower to improve the (likely) paths.

Yes, enabling compression should work wonders. Somehow I'm stuck with an application suite which suddenly lost the option to pass the compression flags around.

I see. If you did have control over the application, then you could supply your own I/O module which does exactly what you want without modifying libtiff at all.
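
For anyone who does have that control: libtiff's TIFFClientOpen() accepts caller-supplied read/write/seek/close/size procedures, so a write procedure along these lines could do it without touching the library at all. This is only a sketch: it assumes the file descriptor is stashed in the client handle, and it reuses the isallzeroes check from above.

    #include <tiffio.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Sketch of a writeproc for TIFFClientOpen(): seek past all-zero
     * buffers instead of writing them, so the filesystem can leave a
     * hole. */
    static tsize_t
    sparse_write(thandle_t handle, tdata_t buf, tsize_t size)
    {
        int fd = (int)(intptr_t)handle;
        if (isallzeroes((const unsigned char *)buf, (size_t)size)) {
            if (lseek(fd, (off_t)size, SEEK_CUR) == (off_t)-1)
                return 0;
            return size;    /* pretend the zeroes were written */
        }
        return (tsize_t)write(fd, buf, (size_t)size);
    }

You would pass sparse_write as the writeproc argument to TIFFClientOpen(), next to ordinary read/seek/close/size procedures.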

Yes, of course. It's all open source, so I can go in and fix it. I've tried to compile it, and I've seen the mess: it's messy. Having libtiff create sparse files is a) easy and b) generally useful.

Given that this is the case, having libtiff create sparse files for files that ARE sparse seems useful in general. For example, in the application suite I'm talking about (hugin/panotools), someone might have decided that for the short-lived temp files, compression and decompression waste CPU cycles.

If these are open source applications then you should be able to modify them and submit a patch to the authors.

Sure. That will happen.

I have written a filesystem that doesn't store identical blocks twice, but just once. So in this case, all those zeroed blocks would end up on disk just once. Problem solved.

There is more than one possible solution to a problem. Especially if you have the source to everything.

Why did I suggest this?

  1. It's simple. It's just about 10 lines of code (sketched below), and it will make a marked performance improvement.
  2. It's generally useful. It's the "nona" program from "hugin" that writes the TIFF files, but after stitching, enblend also writes a big TIFF file. This patch in libtiff will make enblend faster as well. Enblend generates just ONE file, so it will waste only about one GB of disk space in my case, whereas for the nona output the difference was more like 40 GB. And when enblend is done, I'll edit the file with "gimp", and whoa! gimp has also been upgraded to write the TIFF more efficiently!
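
To make point 1 concrete: libtiff's stock Unix write procedure (in tif_unix.c) is a one-line wrapper around write(), and the change would look roughly like this. Again just a sketch, reusing the isallzeroes check from earlier; a real patch would also need an ftruncate() at close time, because a hole at the very end of the file otherwise leaves the file short.

    static tsize_t
    _tiffWriteProc(thandle_t fd, tdata_t buf, tsize_t size)
    {
        /* New: if the buffer is all zeroes, seek past it instead of
         * writing it, and the filesystem leaves a hole. */
        if (isallzeroes((const unsigned char *) buf, (size_t) size)) {
            if (lseek((int) fd, (off_t) size, SEEK_CUR) == (off_t) -1)
                return 0;
            return size;
        }
        return ((tsize_t) write((int) fd, buf, (size_t) size));
    }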

        Roger.

--

** R.E.Wolff@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
**    Delftechpark 26 2628 XH  Delft, The Netherlands. KVK: 27239233    **

*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
Does it sit on the couch all day? Is it unemployed? Please be specific!
Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ