|TIFF and LibTiff Mail List Archive|
LibTiff Mailing List
2016.01.27 19:55 "Re: [Tiff] OpenMP enabled libtiff", by Bob Friesenhahn
The memory mapping does eliminate the system call and access time overheads associated with seeking, provided that data in the same MMU page (e.g. a 4 KB block) has been touched before. This is why libtiff memory maps files for reading by default under Unix.
Before drawing broad conclusions about degraded performance, try the use case where a large sequence of files is accessed in order (each file accessed once per traversal) and the total amount of file data is much larger than the amount of memory in your system. In that case, do you still see the same win, or do you now see a penalty which looks very much like what happens when the system runs out of memory ("swapping")?
I don't know why this is. Memory mapping works fine for reads under Windows, although the maximum contiguous map size is likely to be smaller than on Unix-type systems. It does not work so well for writes, due to stupid API design and the absence of certain useful POSIX functions like ftruncate(). Many functions which might otherwise be useful do not support large files, or behave in much stupider ways than on Unix-type systems (e.g. actually writing zeros to the filesystem rather than creating a hole).