2001.10.05 22:22 "16-bit ICC L*a*b* ?", by Dwight Kelly

2001.10.25 21:53 "Re: 16-bit ICC L*a*b* ?", by Joris Van Damme

8-bit Lab TIFF samples are encoded using:

L: 0x00..0xFF -> 0..100
a,b: 0x00..0xFF -> -128..127

To decode to float, you need to divide L by 2.55. For the a and b parts, subtract 128 if greater than 127.

I'm sorry, but this must be a typo or something. If you subtract 128 when the value is greater than 127, you end up with two ways to encode the range 0 to 127, and no way to encode negative values.
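In other words, the a and b bytes would have to be ordinary two's-complement signed values: subtract 256, not 128, when the raw byte is above 127. A minimal C sketch of that reading (the function name is mine):

```c
#include <assert.h>

/* Decode one 24-bit TIFF Lab pixel to floating-point CIE Lab, reading
   a and b as two's-complement signed bytes: raw values above 127 have
   256 subtracted, which yields the full -128..127 range. */
static void lab8_to_float(unsigned char L, unsigned char a, unsigned char b,
                          double *fL, double *fa, double *fb)
{
    *fL = L / 2.55;                    /* 0..255 -> 0..100 */
    *fa = (a < 128) ? a : a - 256;     /* 0..255 -> -128..127 */
    *fb = (b < 128) ? b : b - 256;
}
```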

This is the way I currently interpret 24-bit Lab in TIFF. Integer source values in 24-bit TIFF Lab are al, aa, ab; floating-point destination values in CIE L*a*b* are bl, ba, bb; no floating-point calculation optimization, for clarity.

procedure XPVtifflab24ToVlabf(const al,aa,ab: Integer; var bl,ba,bb: Cft);
begin
  bl:=al/2.55; { 0..255 -> 0..100 }
  if aa<128 then ba:=aa else ba:=aa-256; { two's complement: 0..255 -> -128..127 }
  if ab<128 then bb:=ab else bb:=ab-256;
end;

I don't know if this is correct; I can only judge it with my eyes, so it is very possible that the constants here should actually be one more or one less or something. Your comments would be much appreciated.

The 16-bit form uses fixed-point 7.8 for a and b. This is a rare case where simple shifting (<< 8 and >> 8) can be used to convert between the 8- and 16-bit representations.

Since the conversion given here is only relative to the 24-bit TIFF Lab form, I'm not able to interpret it with certainty either. Based on what you say here about shifting, one single test image, and my eyes as final judge of correctness, this seems to come very close (again, conversion to floating-point CIE L*a*b* given here for clarity, and the same naming conventions):

procedure XPVtifflab48iToVlabf(const al,aa,ab: Integer; var bl,ba,bb: Cft);
begin
  bl:=al/652.8; { 2.55*256: 0..65280 -> 0..100, consistent with << 8 }
  if aa<32768 then ba:=aa/256 else ba:=(aa-65536)/256; { signed 7.8 fixed point }
  if ab<32768 then bb:=ab/256 else bb:=(ab-65536)/256;
end;

Is this correct? I have only my eyes to judge, and would therefore much appreciate your knowledgeable comments.
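For what it's worth, that reading can also be sketched in C. The function names, and the assumption that 16-bit L spans 0..65280 (so that the << 8 relation from the 8-bit form holds for L as well as for a and b), are mine, not anything confirmed by the spec quote above:

```c
#include <assert.h>

/* Widening an 8-bit a or b sample to the 16-bit 7.8 fixed-point form is a
   left shift by 8; narrowing back is a right shift by 8. Two's complement
   is preserved because the sign bit stays in the top bit. */
static unsigned short ab8_to_ab16(unsigned char v)  { return (unsigned short)(v << 8); }
static unsigned char  ab16_to_ab8(unsigned short v) { return (unsigned char)(v >> 8); }

/* Hypothetical decode of one 48-bit pixel to floating-point CIE Lab.
   The L scaling to 0..65280 and the subtract-65536 rule are assumptions
   carried over from the 8-bit case via the shifting relation. */
static void lab16_to_float(unsigned short L, unsigned short a, unsigned short b,
                           double *fL, double *fa, double *fb)
{
    *fL = L / 652.8;                              /* 2.55*256: 0..65280 -> 0..100 */
    *fa = ((a < 32768) ? a : a - 65536) / 256.0;  /* signed 7.8 fixed point */
    *fb = ((b < 32768) ? b : b - 65536) / 256.0;
}
```

A quick consistency check under these assumptions is that decoding a widened 8-bit sample gives the same a and b values as the 8-bit decode.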