2006.10.07 00:44 "Re: [Tiff] Inverting color space values in a TIFF file", by Joris Van Damme

Richard,

> My reasons for adding it to the tiffcrop tool are to be able to "fix" these inverted scans in an image processing pipeline

That's a perfectly good reason, of course, and as the color space here is single-bit black and white, if I understand you correctly, there is no problem of definition, nor is there a problem of implementation. I can see it makes sense to put it in a TIFF tool when applied to single-bit black and white.

> I was aware that the PHOTOMETRIC_MINISXXX would need to be updated as
> well...

No, you misunderstood, or rather, I didn't make myself clear; I should have inserted the word 'instead' somewhere in the original comment.

0 in an image with Photometric PHOTOMETRIC_MINISWHITE means white. But if you just change that Photometric to PHOTOMETRIC_MINISBLACK and leave the image data intact, that same pixel with value 0 is suddenly black.

0 in an image with Photometric PHOTOMETRIC_MINISWHITE means white. But if you just change that pixel to value 1 and leave the Photometric intact, that same pixel that changed value is suddenly black.

So you have two options in your implementation of inversion. Don't apply them both, because double inversion in this case equals a null operation. Instead, pick the easiest and fastest, which is clearly changing the Photometric only.
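In libtiff terms, flipping the tag amounts to something like the following minimal sketch. The in-place "r+" update and the single-directory assumption are mine, and error handling is kept to a bare minimum; the point is only that the pixel data never needs to be touched:

    #include <stdio.h>
    #include <tiffio.h>

    int main(int argc, char *argv[])
    {
        TIFF *tif;
        uint16 photo;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file.tif\n", argv[0]);
            return 1;
        }
        tif = TIFFOpen(argv[1], "r+");  /* open for in-place update */
        if (!tif)
            return 1;
        if (TIFFGetField(tif, TIFFTAG_PHOTOMETRIC, &photo) &&
            (photo == PHOTOMETRIC_MINISWHITE ||
             photo == PHOTOMETRIC_MINISBLACK)) {
            /* flip the interpretation; the pixel data stays untouched */
            photo = (photo == PHOTOMETRIC_MINISWHITE)
                  ? PHOTOMETRIC_MINISBLACK : PHOTOMETRIC_MINISWHITE;
            TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, photo);
            TIFFRewriteDirectory(tif);  /* rewrite the IFD with the new tag */
        }
        TIFFClose(tif);
        return 0;
    }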

> Can you refer me to a source of documentation on LAB for my future study? I'd at least like to understand it better.

I'm not sure I can help you here. Much of what I know, or think I know, comes from putting many sources together with experience over many years. I find that many of these sources individually take other approaches. Many just fiddle with RGB and do local corrections and gamma compensation and such, from experiment, to get some acceptable results. At the other end of the spectrum, there are people with huge spreadsheets that take considerable time for a single color calculation. My own needs have always been somewhere in between: a solid and especially internally consistent theoretical model and very pleasing results, but on the other hand also practical implementations that are reasonably fast. There are furthermore additional reasons why my thinking in these matters is far from mainstream, and likely not to be trusted; for example, I work from the assumption that a user intuitively understands 'negative' but doesn't see why it should be different when applied in different color spaces, nor should the user need to understand the notion of color space at all, really.

So I think many people will disagree with me on many of my viewpoints on color. Many will totally contest my high regard for Lab in the first place. And I certainly can't give you a short list of good sources for documentation. I am not a color engineer by any standard out there.

But on the other hand I do think many people that see the results of the operations I termed 'Brightness inversion' and 'Negative' will agree that the results are pleasing and the computation fast.

But if you're just looking for info on Lab unrelated to the rest of my ramblings, it's really quite basic. There are two important points to make.

  1. It's basically an experimental attempt to build a color space that is visually uniform, i.e. Euclidean distance in Lab can be taken as a measure of the color difference perceived by an average human viewer. (Accuracy is a point of endless, and I mean endless, debate, but any alternative is computationally many orders more expensive, so that issue is really moot.) This is different from all other color spaces. For instance, RGB reflects cathode ray tube voltages, which is very different from human vision. CMYK reflects ink mixes, which is again very different. This issue is related to 'validity' in the scientific sense: an algorithm that is aimed at human vision but is applied to RGB or CMYK instead is just not a valid implementation. The importance of this issue depends on the nature of the algorithm. For instance, in interpolation between neighbouring pixels, the difference in color between those pixels is usually small, and thus so is the error caused by the invalid application in RGB, so there the issue isn't that important. In other algorithms the calculations often involve very different colors, and the issue becomes much more pressing.
  2. The only other thing that matters is how to calculate... For this, I think I'd best refer you to Bruce Lindbloom's site (http://www.brucelindbloom.com/). There are many places on the web where you can find similar formulae, but some aren't totally correct. I must add, though, that each time I re-implement these formulae I find myself struggling for a little while, and that certainly isn't only related to optimisation. It may be that I'm not very bright, or otherwise it's not completely trivial. There's a small sketch of the forward transform and the Euclidean difference after this list.
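To make both points concrete, here's a minimal sketch of the XYZ-to-Lab forward transform and the CIE76 delta E (plain Euclidean distance in Lab), following the formulae as Lindbloom presents them. The D65 reference white is my assumption; substitute whatever white point your data actually uses:

    #include <math.h>
    #include <stdio.h>

    /* Reference white: D65 is an assumption here; use your own. */
    static const double Xr = 0.95047, Yr = 1.00000, Zr = 1.08883;

    static double lab_f(double t)
    {
        const double eps   = 216.0 / 24389.0;  /* CIE epsilon */
        const double kappa = 24389.0 / 27.0;   /* CIE kappa   */
        return t > eps ? cbrt(t) : (kappa * t + 16.0) / 116.0;
    }

    /* XYZ -> Lab forward transform */
    static void xyz_to_lab(double X, double Y, double Z,
                           double *L, double *a, double *b)
    {
        double fx = lab_f(X / Xr), fy = lab_f(Y / Yr), fz = lab_f(Z / Zr);
        *L = 116.0 * fy - 16.0;
        *a = 500.0 * (fx - fy);
        *b = 200.0 * (fy - fz);
    }

    /* CIE76 color difference: plain Euclidean distance in Lab */
    static double delta_e76(double L1, double a1, double b1,
                            double L2, double a2, double b2)
    {
        double dL = L1 - L2, da = a1 - a2, db = b1 - b2;
        return sqrt(dL * dL + da * da + db * db);
    }

    int main(void)
    {
        double L1, a1, b1, L2, a2, b2;
        xyz_to_lab(0.20, 0.25, 0.30, &L1, &a1, &b1);
        xyz_to_lab(0.22, 0.25, 0.28, &L2, &a2, &b2);
        printf("delta E = %f\n", delta_e76(L1, a1, b1, L2, a2, b2));
        return 0;
    }

(Compile with -lm; the cbrt() in the conditional is exactly where the piecewise definition from Lindbloom's pages kicks in for small ratios.)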

Best regards,

Joris Van Damme
info@awaresystems.be
http://www.awaresystems.be/
Download your free TIFF tag viewer for Windows here:
http://www.awaresystems.be/imaging/tiff/astifftagviewer.html