2002.08.21 16:05 "Re: OT: large memory allocation in Windows", by Bob Friesenhahn
Not all image manipulations lend themselves to this; e.g. convolutions, which use simple pointer math to address neighbouring pixels, would all need to be rewritten. Speed could decrease, but this is the preferred solution direction. Note that the rows do not need to be single pixel lines.
The essence of your remark is: do not keep it all in memory simultaneously.
This is a common solution. All "serious" image processing packages provide a means (e.g. "tiled" memory) to allow only a portion of the image to be brought in at a time.
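To illustrate the "tiled" memory idea, here is a minimal sketch in C of how a tiled layout maps pixel coordinates to tiles, so that an operation only needs the tiles it actually touches to be resident. The tile size and the `locate_pixel` helper are assumptions for illustration, not any particular package's API.

```c
#include <stddef.h>

/* Hypothetical tile geometry: the image is stored as a grid of
 * TILE_W x TILE_H pixel tiles, so only the tiles an operation
 * touches need to be brought into memory at a time. */
#define TILE_W 64
#define TILE_H 64

/* Map an (x, y) pixel coordinate to the index of the tile that
 * holds it and to the pixel's offset within that tile. */
void locate_pixel(size_t x, size_t y, size_t tiles_per_row,
                  size_t *tile_index, size_t *offset_in_tile)
{
    size_t tx = x / TILE_W;
    size_t ty = y / TILE_H;
    *tile_index = ty * tiles_per_row + tx;
    *offset_in_tile = (y % TILE_H) * TILE_W + (x % TILE_W);
}
```

A tile cache on top of this would load, pin, and evict tiles by `tile_index`, which is exactly where such packages spend their complexity.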
> How about a memory mapped file? Could that possibly give you a
> linear "pointer space" of 2GB or maybe even more? Of course the file would
> not be completely mapped at all times, but maybe it would do the trick...
> (Just a crazy idea. I don't know if it would work at all...)
I used this years ago on a Sun1 Sparcstation with 8MB of memory. The C calls
were the "fseek, fread, fwrite" family, but disk is quite slow compared to
online memory. I recall a story of using memory on another Sparc over the
network, as this was faster than local disk access. OK, in those days
everything was faster than a disk :)
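The fseek/fread approach above amounts to fetching one row at a time on demand. A minimal sketch, assuming a headerless 8-bit grayscale file where a row is `row_bytes` bytes (the `read_row` helper is illustrative, not from any particular package):

```c
#include <stdio.h>

/* Read a single row of pixels from a raw image file on demand,
 * rather than holding the whole image in memory.  Assumes a
 * headerless 8-bit grayscale layout, row_bytes bytes per row. */
int read_row(FILE *fp, long row, size_t row_bytes, unsigned char *buf)
{
    if (fseek(fp, row * (long)row_bytes, SEEK_SET) != 0)
        return -1;                      /* seek failed */
    if (fread(buf, 1, row_bytes, fp) != row_bytes)
        return -1;                      /* short read */
    return 0;
}
```

Each call costs a seek plus a read, which is exactly why this was so much slower than keeping pixels in memory.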
ImageMagick (http://www.imagemagick.org) uses memory-mapped files to handle large images. Memory mapping lets you use operations and functions that manipulate memory directly. The MMU manages caching of mapped pages, so operations only cause disk I/O if a page must be paged in or out. Using memory-mapped files is no worse than allowing your program's heap to grow large enough that the system starts paging: the operation of the MMU and disk is the same. The big difference is that you have programmatic control over what gets mapped into the address space and over the size of the address space, whereas with everything on the heap you have no control and the system may thrash.
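The memory-mapping technique described above can be sketched with the POSIX mmap call. This is a generic sketch, not ImageMagick's actual code; real code would parse the image header rather than treat the whole file as pixel data:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a file into the address space so its contents can be
 * addressed with ordinary pointers; the VM system pages data in
 * and out on demand.  Returns the mapped base, or NULL on error;
 * on success *len receives the file size. */
unsigned char *map_image(const char *path, size_t *len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;
    struct stat st;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return NULL;
    }
    *len = (size_t)st.st_size;
    void *p = mmap(NULL, *len, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);  /* the mapping survives closing the descriptor */
    return p == MAP_FAILED ? NULL : (unsigned char *)p;
}
```

Once mapped, pixel math is plain pointer arithmetic over the returned region, which is why convolution-style code needs no rewriting; unmap with `munmap(base, len)` when done.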