2017.02.06 20:43 "[Tiff] Qs about support for more than 2^16 IFDs and writing performance", by Dinesh Iyer

2017.02.07 16:16 "Re: [Tiff] Qs about support for more than 2^16 IFDs and writing performance", by Dinesh Iyer

We have had about 5-6 users, mostly from the biomedical imaging community, complain to us about this limitation for reading. They have image stacks of about 100,000 images stored in a single TIFF file. These are not all BigTIFF files. They use applications such as ImageJ, which is very popular in the image processing community and is able to read such files without any issues.

I can send standalone C code for profiling shortly.

Regards,
Dinesh

On Mon, Feb 6, 2017 at 7:58 PM, Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:

On 7.02.2017 2:04, Bob Friesenhahn wrote:

> The linear scan is perhaps not the fastest algorithm but it should not take terribly long (much less than a second) to scan 65535 entries.

And if you multiply a second by 60,000, you get many hours. Writing a TIFF file with 60,000 IFDs will call TIFFLinkDirectory() 60,000 times.

Note that such files typically do not fit in L3 cache, so traversing them may be extremely slow. We are talking mostly about the BigTIFF format here.

This sounds reasonable. We need a good test case for this (portable C source code) which can be used to measure the performance impact and evaluate solutions.
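A minimal sketch of such a test case might look like the following, assuming libtiff is available. The directory count, image dimensions, and output filename are arbitrary choices for illustration, not anything specified in the thread; raising `NDIRS` should expose the super-linear growth in write time.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <tiffio.h>

/* Write NDIRS tiny grayscale images into one BigTIFF file and report the
 * elapsed time; TIFFWriteDirectory() links each new IFD into the chain. */
#define NDIRS  2000
#define WIDTH  64
#define HEIGHT 64

int main(void)
{
    TIFF *tif = TIFFOpen("stress.tif", "w8");  /* "w8" = BigTIFF; "w" = classic */
    if (!tif)
        return EXIT_FAILURE;

    unsigned char row[WIDTH];
    memset(row, 0, sizeof row);

    clock_t start = clock();
    for (int d = 0; d < NDIRS; d++) {
        TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, (uint32_t)WIDTH);
        TIFFSetField(tif, TIFFTAG_IMAGELENGTH, (uint32_t)HEIGHT);
        TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
        TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
        TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
        TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, (uint32_t)HEIGHT);
        for (uint32_t r = 0; r < HEIGHT; r++)
            if (TIFFWriteScanline(tif, row, r, 0) < 0)
                return EXIT_FAILURE;
        if (!TIFFWriteDirectory(tif))   /* links the new IFD into the chain */
            return EXIT_FAILURE;
    }
    TIFFClose(tif);

    printf("wrote %d directories in %.2f s\n", NDIRS,
           (double)(clock() - start) / CLOCKS_PER_SEC);
    return EXIT_SUCCESS;
}
```

Timing runs at several values of `NDIRS` (e.g. 1,000 / 2,000 / 4,000) should show whether total write time grows quadratically rather than linearly.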