
2004.08.12 21:58 "Re: [Tiff] From bmp to Tiff", by Joris Van Damme
> I should agree with Bob. I'm working with images which should be resized in the planar space defined by the data format (i.e. floats should be resized as floats). Of course, there can be cases when the color space becomes a valuable factor for the image perception, but it is not the major factor.
Yes, well, even I agree with Bob. I mean, I cannot see any real difference in quality when the same algorithm is applied to RGB or to CIE L*a*b*. The quality and the correctness of the algorithm itself are far more important, for sure.
Nevertheless, here is why I said what I did, and why I still stand by it:
- It is theoretically more correct to do it in L*a*b*. I believe apps like Photoshop lead us to believe that one can, as a general rule, apply any algorithm in any color space. I believe that to be incorrect. There is only one unsharp mask, for example, and converting to CIE L*a*b* is, at least theoretically, an implied pre-processing step, implied by the fact that the algorithm attempts to make sense to human perception (as opposed to making sense to cathode ray tube voltage levels or printer ink dosages).
- When processing 16 bits per channel RGB, which is not so uncommon anymore, it is plain ridiculous to apply such an algorithm in RGB and still hold on to the 16 bit per channel precision. These sixteen bits yield way more precision than the human eye can see in the context of an image. When an app claims to process at such precision, but applies such algorithms in the wrong color space, it is fooling its users. The error against the theory is far greater than the least significant of the 16 bits, even if it is not perceptible.
- Errors accumulate. They don't add, they multiply. Typical image processing involves a lot of subsequent processing steps, and still the image is typically handed out or stored to be processed again. That is why 16 bits per channel (or higher) precision makes sense, despite the fact that human perception does not have such a high resolution.
- The theoretical point of view is more important when talking about algorithms that, as a general rule, process colors that are further apart, like e.g. alpha blending. The fact that the error is not all that significant when talking about downsampling is a mathematical artefact, due to neighbouring pixels having similar colors, statistically speaking. Rather than mentioning the theory only when talking about alpha blending, I think it is more correct to mention the theory where it applies, and next say that it is least important in the case of resampling.
- One last reason why I said what I did is that I grew tired of the simple fact of color space validity being generally completely ignored. One should at least start from theoretical perfection, even if next building algorithms that work efficiently instead. In theory one does not just choose the source color space, nor does one just choose the destination color space. The source color space is determined by the source. The necessary pre-processing color space conversion is determined by the algorithm. The destination color space is determined from that. E.g. when downsampling a 4 bit Y (as in YCbCr) image, theoretically perfect pre-processing conversion should yield floating point L* (as in L*a*b*), and the result is also floating point L*. When downsampling 8 bit per channel CMYK, theoretically perfect pre-processing is to floating point L*a*b*, and the result is again the same. (A rough sketch of such a conversion follows this list.) I think it is important to acknowledge that, instead of a) leaving things up to unknowledgeable users like Photoshop does, or b) pretending to yield 16 bits per channel precision while algorithms are applied to wrong color spaces, yielding invalid and meaningless results such as interpolated or averaged cathode ray tube voltages.
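
Just to make that pre-processing step concrete, here is a minimal C sketch of an 8 bit sRGB to floating point CIE L*a*b* conversion, assuming sRGB primaries and a D65 white point. The matrix and constants are the standard published ones; none of this comes from LibTiff, and a real implementation would of course honor whatever colorimetry tags the source actually carries.

  #include <math.h>

  /* Inverse sRGB transfer curve: 8 bit code value to linear [0,1] */
  static double srgb_to_linear(unsigned char c)
  {
      double v = c / 255.0;
      return (v <= 0.04045) ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
  }

  /* CIE L*a*b* helper f(t), with the standard 216/24389 threshold */
  static double lab_f(double t)
  {
      return (t > 216.0 / 24389.0)
          ? cbrt(t)
          : (24389.0 / 27.0 * t + 16.0) / 116.0;
  }

  /* 8 bit sRGB to floating point L*a*b*, D65 reference white */
  static void srgb_to_lab(unsigned char r8, unsigned char g8, unsigned char b8,
                          double *L, double *a, double *b)
  {
      double r = srgb_to_linear(r8);
      double g = srgb_to_linear(g8);
      double bl = srgb_to_linear(b8);

      /* Linear sRGB to CIE XYZ (D65) */
      double X = 0.4124 * r + 0.3576 * g + 0.1805 * bl;
      double Y = 0.2126 * r + 0.7152 * g + 0.0722 * bl;
      double Z = 0.0193 * r + 0.1192 * g + 0.9505 * bl;

      /* Normalize by the D65 white point, then form L*, a*, b* */
      double fx = lab_f(X / 0.95047);
      double fy = lab_f(Y / 1.0);
      double fz = lab_f(Z / 1.08883);

      *L = 116.0 * fy - 16.0;
      *a = 500.0 * (fx - fy);
      *b = 200.0 * (fy - fz);
  }

Resampling would then operate on the floating point L*, a*, b* planes, and only the final delivery step would convert back to whatever the destination demands.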
> Anyway, your detailed answer should be added to the FAQ and further discussed in the mailing list (I'm sure there are a lot of image processing experts here).
As I mentioned before, I'm interested in building a LibTiff FAQ, completely separate from the TIFF FAQ. I see this LibTiff FAQ not as a single (long) page like the TIFF FAQ, but more as a collection of pages, each page bearing code and explanation for a single short application. Indeed, this question and its answers are, I think, very suitable. Of course, input would be taken from a TIFF file and output would be written to a TIFF file, using LibTiff. That way, it covers using LibTiff as well as documenting a common, real-life, related application.
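
To give an idea of the shape such an entry could take, here is a minimal sketch of the surrounding LibTiff plumbing, assuming a stripped, contiguous source image, with error handling largely omitted and the actual processing left as a stub:

  #include "tiffio.h"

  int copy_with_processing(const char *src, const char *dst)
  {
      TIFF *in = TIFFOpen(src, "r");
      TIFF *out = TIFFOpen(dst, "w");
      uint32 width, height, row;
      uint16 spp, bps, photo;
      tdata_t buf;

      if (!in || !out)
          return 0;

      TIFFGetField(in, TIFFTAG_IMAGEWIDTH, &width);
      TIFFGetField(in, TIFFTAG_IMAGELENGTH, &height);
      TIFFGetFieldDefaulted(in, TIFFTAG_SAMPLESPERPIXEL, &spp);
      TIFFGetFieldDefaulted(in, TIFFTAG_BITSPERSAMPLE, &bps);
      TIFFGetField(in, TIFFTAG_PHOTOMETRIC, &photo);

      /* Mirror the essential layout tags on the destination */
      TIFFSetField(out, TIFFTAG_IMAGEWIDTH, width);
      TIFFSetField(out, TIFFTAG_IMAGELENGTH, height);
      TIFFSetField(out, TIFFTAG_SAMPLESPERPIXEL, spp);
      TIFFSetField(out, TIFFTAG_BITSPERSAMPLE, bps);
      TIFFSetField(out, TIFFTAG_PHOTOMETRIC, photo);
      TIFFSetField(out, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);

      buf = _TIFFmalloc(TIFFScanlineSize(in));
      for (row = 0; row < height; row++) {
          TIFFReadScanline(in, buf, row, 0);
          /* ... process the scanline here ... */
          TIFFWriteScanline(out, buf, row, 0);
      }
      _TIFFfree(buf);
      TIFFClose(in);
      TIFFClose(out);
      return 1;
  }

A real FAQ entry would of course also cover tiled sources, planar configuration, and proper error checking; this is just the skeleton the explanation would hang on.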
The recent question about cropping may be an even more suitable example of such a FAQ entry. It would demonstrate how to detect and make use of tiling or striping in the source image, so as to only decode and process those tiles/strips that contribute to the destination TIFF.
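
A rough sketch of that idea, for a tiled source; copy_tile_part() is a hypothetical helper that would clip the decoded tile against the crop rectangle and copy the overlapping pixels into the destination:

  #include "tiffio.h"

  /* Decode only the tiles that overlap the crop rectangle
     (x0,y0) with size cw x ch; everything else is skipped. */
  void crop_tiled(TIFF *in, uint32 x0, uint32 y0, uint32 cw, uint32 ch)
  {
      uint32 tw, th, x, y;
      tdata_t buf;

      if (!TIFFIsTiled(in))
          return; /* a strip-based source needs the analogous strip walk */

      TIFFGetField(in, TIFFTAG_TILEWIDTH, &tw);
      TIFFGetField(in, TIFFTAG_TILELENGTH, &th);
      buf = _TIFFmalloc(TIFFTileSize(in));

      /* Walk tile origins, starting at the tile grid position
         containing the crop origin, stopping past the crop edge */
      for (y = (y0 / th) * th; y < y0 + ch; y += th) {
          for (x = (x0 / tw) * tw; x < x0 + cw; x += tw) {
              TIFFReadTile(in, buf, x, y, 0, 0);
              /* copy_tile_part(buf, x, y, tw, th, x0, y0, cw, ch); */
          }
      }
      _TIFFfree(buf);
  }

The point being that tiles entirely outside the crop rectangle are never even decoded, which is the whole payoff of respecting the source layout.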
My handicap is that it's been 7 or 8 years since I coded C. I've only been reading it all those years; my coding is done in Delphi ObjectPascal... That, and the fact that I'm still finishing up the Tag Reference, and planning another entry in my site's TIFF section (I'd like to tell you what exactly, but then I'd have to kill you ;-)) before I even start on the LibTiff FAQ... It may take some time. Rest assured, though, that everything in this list is carefully archived... ;-)
An alternative to waiting a while for the LibTiff FAQ to get started is for people here to rise to the occasion and build good entries from threads like this one, writing the C code needed to demonstrate the stuff, carefully making sure to include excellent interfaces to LibTiff (which is, after all, the primary goal). In that case, I'm happy to just contribute hosting and FAQ entry management.
Joris Van Damme
info@awaresystems.be
http://www.awaresystems.be
Download your free TIFF tag viewer for Windows here:
http://www.awaresystems.be/imaging/tiff/astifftagviewer.html