2002.08.21 13:47 "Re: OT: large memory allocation in Windows", by Rob van den Tillaart
Some further discussion,
- 64-bit compilation
If I recall correctly, the problem was that no free block of address space was large enough. 64-bit compilation by itself will not help; 64-bit addressable (2^64 bytes) virtual memory would.
- reorganizing the memory requirements of the application so that large contiguous blocks are not necessary, e.g. storing the data of a large image as an array of rows.
Not all image manipulations lend themselves to this; convolutions, for example, use simple pointer arithmetic to address neighbouring pixels, and all those routines would need to be rewritten. Speed could decrease, but this is my favourite solution direction. The rows do not need to be single pixel lines.
The essence of your remark is: do not keep it all in memory simultaneously.
- making sure the application can run forever with the memory that can be allocated at the beginning. For applications working with images, we might allocate the one image we load plus another one for processing purposes, and make sure that we never ever free these and never need to allocate new large blocks.
I like the previous solution much more.
I have also been thinking about the third point you mention as a solution. Does anyone by chance know of an autopointer class out there that helps with memory reorganization?
>> MERGED MAIL
>How about a memory mapped file? Could that possibly give you a
>linear "pointer space" of 2GB or maybe even more? Of course the file would
>not be completely mapped at all times, but maybe it would do the trick...
>(Just a crazy idea. I don't know if it would work at all...).
Used this years ago on a Sun SPARCstation with 8 MB of memory. The C calls were the "fseek, fread, fwrite" family, but disk is quite slow compared to main memory. I recall a story of using memory on another SPARC over the network, as this was faster than local disk access. OK, in those days everything was faster than a disk :)