2008.01.10 19:03 "[Tiff] LibTiff..Reading partial strips", by Nikhil Shirahatti

2008.01.11 07:35 "Re: [Tiff] LibTiff..Reading partial strips", by Joris Van Damme

Bob,

I haven't tried it, but the cautionary note that "It is not always possible to do so due to decompression constraints" scares me. I also read on the archive that "strip chopping only works for uncompressed IFDs". How common (or uncommon) is this? When would one use compressed IFDs?

The TIFFs we get can be in compressed or uncompressed formats, and we have to support them all. :)

It is quite likely that other folks here know a lot more about this than I do, so hopefully they will chime in. I don't know what a compressed IFD might be, since an IFD is just a structure.

It's a misnomer. The intention there was to refer to image compression as stated in the IFD. Strip chopping works only on the majority of stripped uncompressed images (i.e. IFDs with the Compression tag set to None); it works neither on compressed images nor on tiled images.

My impression is that libtiff decompresses into a buffer and then uses part (or all) of that buffer.

Strip chopping works differently. It patches up the relevant tags (RowsPerStrip, StripOffsets and StripByteCounts) so as to make it seem there are a lot of small strips rather than a small number of huge strips. This can be done because the offset of any row of uncompressed data is predictable.

Some compression algorithms/options either require decompressing a huge amount of data (e.g. the whole strip), or else streaming the decompression simultaneously as the data is consumed by the rest of the library. If libtiff decompresses in blocks and does not stream (as I suspect), then sometimes it will need to decompress the entire strip before it can be used. G4 FAX compression with a strip per page is likely a good example of that.

You suspect correctly. LibTiff works largely with complete buffers, rather than maximum streaming. That makes it unsuitable for large strips/tiles on small machines.

A streaming approach is often much more scalable. But it's a completely different design, and it needs to be consistent with the design of colour conversion and other application layers.

As to the original question, it's often hard to do real 'regional decoding'. It so happens I'm currently working on that exact concept in my JPEG codec. The problem is that many compression schemes, by their very nature, offer no way to locate the data directly from the pixel indexes. It is often necessary to scan through the data and do at least the very first steps of decoding so as to derive the offsets of the region of interest. In the JPEG scheme, this very first step, insofar as RST markers don't come to the rescue, is Huffman decoding. Whilst it cannot be avoided, it's still a lot faster to do Huffman decoding for positioning only (without storing the results, that is) than to do a full decompression (Huffman decoding, dequantization, inverse DCT, upsampling, and possibly colour conversion). But it does require specific support on the part of the decoder.

LibTiff subcodecs just aren't designed to do such first-step-decoding-for-positioning-only, so you're out of luck unless you're willing to redesign and rewrite them.

Best regards,

Joris Van Damme
info@awaresystems.be
http://www.awaresystems.be/
Download your free TIFF tag viewer for Windows here:
http://www.awaresystems.be/imaging/tiff/astifftagviewer.html