2000.10.20 18:20 "Re: reading LAB files", by Joris Van Damme
I seem to be having trouble reading LAB files.
I have a LAB file where Photoshop shows the first pixel as (55,22,-10).
When I read it into an unsigned char array with ReadScanline, I get (140, 22, 246), which maps to (-116, 22, -10) when typecast to signed chars.
So, what's up with the -116? Does libtiff do something special to LAB files? I grep'd the code, but didn't seem to find anything.
Lab TIFF files are encoded using:
L: 0..FF -> 0..100
a,b: 0..FF -> -128..127
So, you need to divide L by 2.55 and subtract 128 from the a,b parts to get float values.
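If the layout is as described, the decode is just a scale and an offset. A minimal sketch in C, assuming 8-bit samples (note the follow-up below disputes the a,b part):

```c
#include <assert.h>

/* Decode one Lab pixel under the offset interpretation described
 * above: L 0..255 -> 0..100, a and b 0..255 -> -128..127. */
static void lab_decode_offset(const unsigned char px[3],
                              double *L, double *a, double *b)
{
    *L = px[0] / 2.55;    /* 2.55 = 255 / 100 */
    *a = px[1] - 128.0;
    *b = px[2] - 128.0;
}
```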
I'm sorry, but this is not correct for my test files. I subtract 256 from the a,b parts IF their high bit is set, otherwise I leave them alone; I have tested that. Also, the example Michael gives here seems to support my findings.
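For what it's worth, subtracting 256 when the high bit is set is exactly a two's-complement (signed char) reading of the a,b bytes. A sketch of that interpretation, with example bytes of my own choosing:

```c
#include <assert.h>

/* Decode a Lab pixel reading a and b as signed 8-bit values:
 * subtract 256 when the high bit is set, leave them alone otherwise.
 * This is equivalent to casting the byte to signed char. */
static void lab_decode_signed(const unsigned char px[3],
                              double *L, double *a, double *b)
{
    *L = px[0] * 100.0 / 255.0;                   /* L: 0..255 -> 0..100 */
    *a = (px[1] & 0x80) ? px[1] - 256.0 : px[1];
    *b = (px[2] & 0x80) ? px[2] - 256.0 : px[2];
}
```

For example, the bytes (140, 22, 246) decode to about (54.9, 22, -10).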
Is it possible that there are different encodings of TIFF Lab around? Or is it simply your mistake?
Also, TIFF Lab does use D65 as white point (as opposed to D50, widely used in prepress). Note that this encoding is very useful for simpler operations, while converting to float is very time-consuming. If you want the float values to pass to XYZ and then to RGB by float formulae, you will surely find this is a never-ending process.
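The float route via XYZ mentioned here can be sketched with the standard CIE inverse function. The D65 white point numbers (0.95047, 1.0, 1.08883) are the usual 2-degree-observer values, assumed here rather than taken from any TIFF file:

```c
#include <assert.h>

/* Inverse of the CIE f() function used in the L*a*b* definition. */
static double lab_finv(double t)
{
    const double d = 6.0 / 29.0;
    return (t > d) ? t * t * t : 3.0 * d * d * (t - 4.0 / 29.0);
}

/* CIE L*a*b* -> XYZ for a given white point (Xn, Yn, Zn).
 * For D65, 2 degree observer: Xn = 0.95047, Yn = 1.0, Zn = 1.08883. */
static void lab_to_xyz(double L, double a, double b,
                       double Xn, double Yn, double Zn,
                       double *X, double *Y, double *Z)
{
    double fy = (L + 16.0) / 116.0;
    double fx = fy + a / 500.0;
    double fz = fy - b / 200.0;
    *X = Xn * lab_finv(fx);
    *Y = Yn * lab_finv(fy);
    *Z = Zn * lab_finv(fz);
}
```

From XYZ you would then apply the usual 3x3 matrix and gamma curve for the target RGB space, which is where most of the per-pixel float cost goes.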
The only fast way I've found to convert Lab to RGB and the other way around is using a LUT about 2 megs big. That reduces accuracy to about 6 bits per channel, which is fine for fast results, but I do want 16-bits-per-channel accuracy in the 'perfect' conversion routines. If you know a better way to do either the fast or the perfect conversion, I'd LOVE (!!!) to hear more about it.
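A table of roughly that shape, indexed by 6 bits per Lab channel, might be organized like this. `lab_to_gray` is a hypothetical stand-in for the exact converter the table would really be built from, just to keep the sketch self-contained (a real RGB table would store 3 bytes per entry, 768 KB for 2^18 entries):

```c
#include <assert.h>
#include <stdlib.h>

#define LUT_BITS 6
#define LUT_SIZE (1 << (3 * LUT_BITS))   /* 2^18 = 262144 entries */

/* Hypothetical stand-in for the exact ('perfect') conversion the
 * table is built from; here just a toy weighted average. */
static unsigned char lab_to_gray(unsigned char L, unsigned char a,
                                 unsigned char b)
{
    return (unsigned char)((L * 3 + a + b) / 5);
}

/* Quantize each 8-bit Lab component to LUT_BITS and pack an index. */
static unsigned int lut_index(unsigned char L, unsigned char a,
                              unsigned char b)
{
    return ((unsigned int)(L >> (8 - LUT_BITS)) << (2 * LUT_BITS)) |
           ((unsigned int)(a >> (8 - LUT_BITS)) << LUT_BITS) |
            (unsigned int)(b >> (8 - LUT_BITS));
}

/* Build the table once by running the exact conversion on the
 * centre of every quantization cell. */
static unsigned char *lut_build(void)
{
    unsigned char *lut = malloc(LUT_SIZE);
    unsigned int i;
    if (!lut) return NULL;
    for (i = 0; i < LUT_SIZE; i++) {
        unsigned char L = (unsigned char)((((i >> (2 * LUT_BITS)) & 63) << 2) | 2);
        unsigned char a = (unsigned char)((((i >> LUT_BITS) & 63) << 2) | 2);
        unsigned char b = (unsigned char)(((i & 63) << 2) | 2);
        lut[i] = lab_to_gray(L, a, b);
    }
    return lut;
}
```

Per pixel the lookup is then just `lut[lut_index(L, a, b)]`: two shifts and a load instead of the float math, at the cost of the roughly 6-bit accuracy mentioned above.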