2017.06.27 20:51 "[Tiff] Excessive memory allocation while chopping strips", by Nicolas RUFF

2017.06.28 19:01 "Re: [Tiff] Excessive memory allocation while chopping strips", by Rob Tillaart

(long time no see :)

It makes sense to have hard limits in place where exceeding them would cause undefined behaviour (e.g. due to 32-bit overflow, etc.).

Furthermore, soft limits for values that would merely make a system extremely slow (e.g. a huge number of allocations) make sense too, but these should be overridable if people really want to exceed them. Possibly use a #define for the soft limits in a config file?
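As a rough illustration of that idea (the symbol name below is purely hypothetical, not an existing libtiff define), such a soft limit could live in a config header with a build-time override:

/* Purely illustrative -- not an existing libtiff symbol.  A build could
 * override the default via its config, e.g. -DSOFT_MAX_SINGLE_ALLOC=... */
#include <stddef.h>

#ifndef SOFT_MAX_SINGLE_ALLOC
#define SOFT_MAX_SINGLE_ALLOC ((size_t)256 * 1024 * 1024)  /* 256 MB default */
#endif

/* Returns nonzero when an allocation request stays within the soft limit. */
static int within_soft_limit(size_t request)
{
    return request <= SOFT_MAX_SINGLE_ALLOC;
}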

In the end these limits will depend on the target system the application runs on.

wrt the question:

Very large strips and/or tiles are used by people who expect them to increase the performance of their application in some sense, and yes, there might be some gain. Most computers can handle megabytes easily (even a $35 Raspberry Pi 3 can do serious processing), and 64 GB of memory on a multicore 64-bit CPU is not rare anymore in an age of "BIG" data (recall that BigTIFF came first ;).

For some types of devices, row-based processing is the natural way of working. Take, for example, a wide-format scanner (around 60" or 1.5 m) at 2400 dpi: that is 144K pixels per row, which at 4 bytes per pixel is about 0.6 MB per row. A scan could have roughly 300K rows (120" or 3 m), resulting in image files of around 200 GB. Increasing the bit depth to 16 bits per channel brings us to roughly 0.4 TB per image.
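For concreteness, a quick back-of-the-envelope sketch of that arithmetic (the figures are the ones quoted above; the mail rounds the results up a little):

/* Back-of-the-envelope numbers for the wide-format scanner example above. */
#include <stdio.h>

int main(void)
{
    const double dpi           = 2400.0;
    const double width_inches  = 60.0;   /* ~1.5 m wide scan bed */
    const double length_inches = 120.0;  /* ~3 m long scan       */
    const double bytes_per_px  = 4.0;    /* 8 bits x 4 channels  */

    double px_per_row = dpi * width_inches;          /* 144,000 pixels   */
    double rows       = dpi * length_inches;         /* ~288,000 rows    */
    double row_bytes  = px_per_row * bytes_per_px;   /* ~0.6 MB per row  */
    double image_gb   = row_bytes * rows / 1e9;      /* ~170 GB at 8 bit */

    printf("pixels per row : %.0f\n", px_per_row);
    printf("bytes per row  : %.2f MB\n", row_bytes / 1e6);
    printf("image size     : %.0f GB (8 bit) / %.0f GB (16 bit)\n",
           image_gb, image_gb * 2.0);
    return 0;
}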

Stitching applications are also quite common, and these can produce extremely large images in one or both dimensions. Very wide rows would not be strange there.

So in short, there is a need to support large images, which implies large tiles/strips...

<thinking out loud mode started>

I don't think that a strip should be as wide as the image width for every possible value. From some distance (at an abstract level) there are only tiles (in the extreme, height == 1 && width == imagewidth ==> a single row). If the image width is "too" large, one could split that row into e.g. 4 tiles (height == 1 && width == imagewidth/4) to keep it processable (see the sketch after this aside).

Maybe all the library algorithms should become tile-based and we should deprecate the strip-based ones...

</>
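A rough sketch of that row-splitting idea (the names are illustrative only, not a proposal for the libtiff API); each piece covers part of a single image row:

/* Treat one image row as a strip of height 1, split into ntiles pieces
 * when the full row would be too wide to handle in one allocation. */
#include <stdint.h>

struct row_tile {
    uint32_t x0;     /* first pixel column of this piece */
    uint32_t width;  /* number of pixel columns in it    */
};

/* Fill tiles[0..ntiles-1] with pieces that together cover image_width. */
static void split_row(uint32_t image_width, uint32_t ntiles,
                      struct row_tile *tiles)
{
    uint32_t base = image_width / ntiles;
    uint32_t rem  = image_width % ntiles;   /* spread leftover columns */
    uint32_t x = 0;

    for (uint32_t i = 0; i < ntiles; i++) {
        tiles[i].x0 = x;
        tiles[i].width = base + (i < rem ? 1 : 0);
        x += tiles[i].width;
    }
}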

my 2 cents,
Rob Tillaart

On Wed, Jun 28, 2017 at 5:42 PM, Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:

I think you misunderstood me. I meant the problem that uncompressed strip size can be quite large and, as far as I know, can't be validated without decompressing the data. A hard limit on strip size would cover that neatly and sane applications will keep it reasonable anyway.

I agree that implementing a hard limit on strip/tile size (including when writing) is beneficial.

The question is what the justification is for using very large strips/tiles, and what is the largest justifiable strip/tile size?

For a very large image, if a strip is just one row, then it needs to be allowed to be large enough to support the maximum image width.
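As one way an application could enforce such a cap today, here is a minimal sketch (assuming libtiff 4.x; MAX_STRIP_BYTES and open_with_strip_cap are illustrative names, not library symbols). TIFFStripSize() reports the nominal decoded strip size computed from the tags, so the check runs before any strip data is actually decompressed:

/* Minimal sketch: reject files whose nominal decoded strip size exceeds
 * an application-chosen cap.  MAX_STRIP_BYTES is illustrative only. */
#include <tiffio.h>
#include <stdio.h>

#define MAX_STRIP_BYTES ((tmsize_t)64 * 1024 * 1024)  /* 64 MB, per target */

int open_with_strip_cap(const char *path)
{
    TIFF *tif = TIFFOpen(path, "r");
    if (tif == NULL)
        return -1;

    /* TIFFStripSize() derives the decoded strip size from the tags
     * (rows-per-strip x scanline size), so no data is decoded here. */
    tmsize_t strip_bytes = TIFFStripSize(tif);
    if (strip_bytes <= 0 || strip_bytes > MAX_STRIP_BYTES) {
        fprintf(stderr, "%s: strip size %lld bytes exceeds cap\n",
                path, (long long)strip_bytes);
        TIFFClose(tif);
        return -1;
    }

    /* ... normal per-strip processing would go here ... */
    TIFFClose(tif);
    return 0;
}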