2019.04.23 21:24 "Re: [Tiff] TIFFWriteScanLine - buffers to RAM before flushing to disc?", by Bob Friesenhahn
I noticed that when using libtiff to write a BigTIFF in a scanline-based way, TIFFWriteScanline does not appear to write to disk and release memory immediately (even though I see a call to TIFFFlush inside the code for TIFFWriteScanline). If I write lines in a loop, RAM utilization increases, and when I finally call TIFFClose() there is a delay while the file seems to be actually written, after which all the memory is freed. I haven't checked yet whether the behavior is similar with tiled output.
Can you tell us more about your program and the operating system you are using? Is the program generating image data from scratch, or is it being read from a different file?
Libtiff prefers to memory-map its input file if it can. This can result in an apparent decrease in overall available system memory as the input file is read, since memory mapping is a form of caching, even though the memory may be returned to the OS on demand.
The operating system normally provides a filesystem cache and uses it to hold data which has not yet been flushed to disk. For some filesystems (e.g. ZFS), the amount of memory the system may dedicate to caching large, fast writes can be very large.
Is this expected behavior? These are large images, where a given scanline can easily be 150,000+ pixels. Is there a way to stream lines to disk without the internal buffering?
I doubt that this internal buffering exists. The only buffering I am aware of is the strip-chopping feature, which allows huge strips to be handled incrementally using per-row scanlines. This works by diminishing the amount of memory the application needs to use, at the cost of increasing the number of I/Os.
If you can reveal the operating system and filesystem you are using, we can surely provide more assistance.
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Public Key, http://www.simplesystems.org/users/bfriesen/public-key.txt