2008.08.22 18:11 "Re: [Tiff] creating sparse files......", by Bob Friesenhahn

Absolutely. And I'm suggesting making it faster. If there are /any/ nonzero bytes, the "is-all-zeroes" check is likely to terminate quickly, since it is unlikely that the first nonzero byte falls near the end of the buffer.
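The check being discussed might look like the following minimal C sketch. The name `is_all_zeroes` is hypothetical (this is not actual libtiff code); the point is simply that the loop exits at the first nonzero byte, so buffers that are not all zeroes are rejected cheaply:

```c
#include <stddef.h>

/* Hypothetical helper, not part of libtiff: returns nonzero if the
 * buffer contains only zero bytes.  The loop stops at the first
 * nonzero byte, so non-sparse data costs almost nothing to reject. */
static int is_all_zeroes(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] != 0)
            return 0;
    return 1;
}
```

The worst case (a buffer that really is all zeroes) scans every byte, but that is exactly the case where the caller saves a write, so the scan pays for itself.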

That sounds reasonable.

If we make several applications faster and one slower, why is this bad?

Do you mind if the application which becomes slower is your own application?

Performance changes, like any changes, are usually a tradeoff: you make some (unlikely) path slower in order to improve the (likely) paths.

It is not clear that images containing large blocks of zero bytes are the norm. Perhaps certain types of bilevel images would have large blocks of zero bytes if they were not compressed but these images are normally compressed.

Why did I suggest this?

  1. It's simple. It is only about 10 lines of code, and it will yield a marked performance improvement.
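Those "about 10 lines" might be sketched as below. This is an assumption about the proposal, not libtiff code; the helper names `is_all_zeroes` and `write_block` are hypothetical. On filesystems that support sparse files, the skipped range becomes a hole; on those that do not, the OS fills it with literal zeroes, so the file contents read back the same either way:

```c
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper: nonzero if the buffer is entirely zero bytes. */
static int is_all_zeroes(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] != 0)
            return 0;
    return 1;
}

/* Hypothetical sketch of the proposed write path: instead of writing a
 * block of zero bytes, seek past it and let the filesystem leave a hole
 * (or fill it with zeroes).  Returns 0 on success, -1 on error. */
static int write_block(int fd, const unsigned char *buf, size_t len)
{
    if (is_all_zeroes(buf, len))
        return lseek(fd, (off_t)len, SEEK_CUR) < 0 ? -1 : 0;
    return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
}
```

One caveat this sketch glosses over: merely seeking past the end of a file does not extend it, so if the final block is all zeroes the file would come out short unless a trailing write or `ftruncate()` sets the final length.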

While libtiff is a pretty boring open source project, libtiff itself is one of the top three image format libraries in the world (the other two being libjpeg and libpng). So any included enhancement for holey files should be very well proven on many types of systems (even Microsoft Windows), or be a compile-time option.

Not all Unix systems seem to support creating holes. For example, Apple OS X's HFS+ does not support holes. Windows FAT-type filesystems (still very much in use in millions of systems/devices) do not support holes. It is unlikely that the ISO 9660 filesystem (used on CDs) supports holes. It is likely that some popular filesystems will fail to seek past the end of the file, or will return random bytes (or parts of some previously deleted file) for the uninitialized portions.

Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/