
2008.08.25 09:37 "Re: [Tiff] creating sparse files......", by John

2008/8/22 Rogier Wolff <R.E.Wolff@bitwizard.nl>:

I'm stitching kind of large panoramas. This results in big intermediate files. On my last run, which took overnight to stitch, I thought 42 Gb of free disk space would be enough. Wrong!

I was a mentor on a Google Summer of Code project to look at this issue. Sadly, the student we selected failed, but this is still an item on the TODO list.

In case anyone is unaware, panotools currently stitches panoramas in three separate stages. First, it analyzes the input images and, for each sub-image, calculates a transform from the input space to the output space. Next, it resamples each input image to a temporary file, writing zero for 'no value', and also writes a mask file indicating which sections of the transformed image are valid. Finally, the set of transformed images is blended together to make the final panorama, using the masks as weights; I think it also corrects radiometric differences at this stage.
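
To make the blend stage concrete, here is a minimal sketch in C of mask-weighted blending for a single output sample, assuming 8-bit samples and 0..255 mask weights. The function name blend_pixel and the flat pixels/masks arrays are my own illustration, not panotools' actual data structures; it just shows the weighted average the masks imply.

    /* Weighted average of one sample across n_images intermediates:
       each image contributes its pixel scaled by its mask value,
       and the total is normalized by the sum of the weights. */
    unsigned char blend_pixel(int n_images,
                              const unsigned char *pixels, /* one sample per image */
                              const unsigned char *masks)  /* 0..255 weight per image */
    {
        unsigned long acc = 0, wsum = 0;
        for (int i = 0; i < n_images; i++) {
            acc  += (unsigned long)pixels[i] * masks[i];
            wsum += masks[i];
        }
        return wsum ? (unsigned char)(acc / wsum) : 0; /* no coverage -> zero */
    }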

The problem here is that, for large panoramas, these intermediate files are very, very inefficient. If you blend 100 images, you will make 100 intermediates, each as large as the final panorama, and consisting almost entirely of zeros (the GSoC project was to combine the last two stages into one using an image processing library which can handle very large images).
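
To put illustrative numbers on that (mine, not figures from the thread): an 8-bit RGB panorama of 40,000 x 20,000 pixels is 40,000 * 20,000 * 3 bytes, about 2.4 GB uncompressed, so 100 intermediates at that size would need roughly 240 GB of scratch space, even though nearly all of it is zeros.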

This sparse-file idea for TIFF is a good one for fixing the immediate problem, but it seems to me that it's not an ideal solution. A better short-term fix, as someone said, is to use compressed TIFFs for the intermediates; a sketch of that follows below. And the proper fix for the problem is to not generate colossal intermediate images in the first place. I would be concerned about a feature going into libtiff that will need maintaining and testing forever, and which rather soon will be unused by anyone, even by the original application.
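
Here is a minimal sketch of that short-term fix with libtiff, assuming an 8-bit RGB, scanline-oriented intermediate; write_intermediate and the all-zero row buffer are my own illustration, not panotools code. Because almost every scanline is zero, deflate shrinks the empty regions to nearly nothing on disk:

    #include <tiffio.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative only: write one mostly-zero RGB intermediate as a
       deflate-compressed TIFF. A real stitcher would fill in the valid
       pixels for each row; here every scanline stays zero, so the
       compressed strips cost almost nothing. */
    int write_intermediate(const char *path, uint32_t width, uint32_t height)
    {
        TIFF *tif = TIFFOpen(path, "w");
        if (!tif)
            return -1;

        TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
        TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
        TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
        TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
        TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
        TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
        TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_ADOBE_DEFLATE);
        TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(tif, 0));

        unsigned char *row = calloc(width, 3); /* one all-zero scanline */
        if (!row) {
            TIFFClose(tif);
            return -1;
        }
        for (uint32_t y = 0; y < height; y++) {
            /* a real stitcher would fill in the valid pixels for row y here */
            if (TIFFWriteScanline(tif, row, y, 0) < 0) {
                free(row);
                TIFFClose(tif);
                return -1;
            }
        }
        free(row);
        TIFFClose(tif);
        return 0;
    }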

John