2016.01.25 18:25 "[Tiff] OpenMP enabled libtiff", by Aaron Boxer

2016.01.27 19:55 "Re: [Tiff] OpenMP enabled libtiff", by Bob Friesenhahn

Thanks, Bob. So, are you saying that, as TIFF is currently designed, memory mapping is beneficial because of the random access? That was my experience on Windows: turning off memory mapping when using libtiff degraded performance.

The memory mapping does eliminate the system-call and access-time overheads associated with seeking, as long as data in the same MMU page (e.g. a 4k block) has been touched before. This is why libtiff memory-maps for reads by default under Unix.
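To make the mechanism concrete, here is a minimal sketch of my own (not libtiff code) of mapping a whole file read-only: after the one-time page fault that brings a page in, accesses within that page are plain memory reads with no system call. The helper name `map_whole_file` is mine.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map an entire file read-only and return its base address (or NULL on
 * failure).  The file length is written to *lenp so the caller can
 * munmap() later. */
static char *map_whole_file(const char *path, size_t *lenp)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) {
        close(fd);
        return NULL;
    }

    /* MAP_SHARED: other mappings of the same file see the same pages. */
    char *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd); /* the mapping remains valid after the descriptor is closed */
    if (base == MAP_FAILED)
        return NULL;

    *lenp = (size_t)st.st_size;
    return base;
}
```

Once mapped, a seek to any offset is just pointer arithmetic on `base`, which is where the win for TIFF's scattered strip/tile layout comes from.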

Before drawing general conclusions about degraded performance, try the use case where a large sequence of files is accessed in order (each accessed once per traversal) and the total amount of file data is much larger than the memory in your system. In that case, do you still see the same win, or do you now see a penalty which looks very much like what happens when the system runs out of memory ("swapping")?
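One way to set up that comparison is to checksum each file in the traversal through both access paths and time them. This is a stdlib-only sketch of my own (not libtiff's I/O layer) showing the two paths; wrap each in a timer over a file set larger than RAM to reproduce the scenario described above.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Checksum a file with plain read() calls (the non-mapped path). */
static uint64_t sum_via_read(const char *path)
{
    uint64_t sum = 0;
    unsigned char buf[4096];
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum += buf[i];
    close(fd);
    return sum;
}

/* Checksum the same file through a read-only mapping.  Each new page
 * faults once; after that, accesses are ordinary memory reads.  On a
 * data set larger than RAM, page reclaim makes this path behave like
 * swapping. */
static uint64_t sum_via_mmap(const char *path)
{
    uint64_t sum = 0;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return 0;
    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) {
        close(fd);
        return 0;
    }
    unsigned char *p = mmap(NULL, (size_t)st.st_size, PROT_READ,
                            MAP_SHARED, fd, 0);
    close(fd);
    if (p == MAP_FAILED)
        return 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];
    munmap(p, (size_t)st.st_size);
    return sum;
}
```

Both functions must produce the same checksum; only the timing (and the memory pressure they generate) differs.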

My other question is: why are Unix and Windows treated differently in libtiff? Under Unix, the mmap() call allows sharing of a mapping between different file handles, while on Windows this is turned off. I think it would be nice to have on Windows.

I don't know why this is. Memory mapping works fine for reads under Windows, although the maximum contiguous map size is likely to be smaller than on Unix-type systems. It does not work so well for writes, due to Windows missing certain useful POSIX functions like ftruncate(). Many functions which might otherwise be useful do not support large files, or behave in much stupider ways than on Unix-type systems (e.g. they actually write zeros to the filesystem rather than create a hole).
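The hole-vs-zeros point is easy to demonstrate on POSIX: ftruncate() can grow a file instantly without writing any data, and the unwritten range reads back as zeros. A small sketch of my own (the helper name `extend_file` is mine; Win32 would need something along the lines of SetEndOfFile instead):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Extend a file to new_size without writing any data.  On typical Unix
 * filesystems this creates a hole: st_size grows, but no zero-filled
 * blocks are written to disk, and reads of the hole return zeros. */
static int extend_file(const char *path, off_t new_size)
{
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    int rc = ftruncate(fd, new_size);
    close(fd);
    return rc;
}
```

This cheap pre-sizing is exactly what a memory-mapped writer wants before mapping a region for output, which is part of why mapped writes are more awkward where no ftruncate() equivalent is at hand.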

Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/