2007.04.16 11:45 "[Tiff] TIFFReadRGBAImage and palette images", by mikk

Hello all,

I have a TIFF image with PhotometricInterpretation = Palette, BitsPerSample = 8, SamplesPerPixel = 1.

I load the image into a 32-bit pixel buffer with TIFFReadRGBAImage. Before loading the image I build an inverse color lookup table from the palette, where the table keys are 32-bit color values built from the R, G, B components reduced to 8 bits, and the table values are palette indices. For the reduction I use the formula v8Bit = ((v16Bit) * 255) / ((1L<<16)-1).
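
Roughly, the table is built like this (a simplified sketch; the variable names, ncolors, and the 0x00RRGGBB key packing are just for illustration, not taken from libtiff):

#include <stdint.h>

/* 16-bit palette component -> 8-bit, same formula as above */
#define CVT8(v16) ((uint8_t)(((v16) * 255L) / ((1L << 16) - 1)))

typedef struct {
    uint32_t key;   /* packed 0x00RRGGBB built from the 8-bit components */
    int      index; /* palette index that produced this key */
} InvEntry;

/* Build one inverse entry per palette slot. */
static void build_inverse(const uint16_t *redcmap, const uint16_t *greencmap,
                          const uint16_t *bluecmap, int ncolors,
                          InvEntry *table)
{
    int i;
    for (i = 0; i < ncolors; i++) {
        uint8_t r = CVT8(redcmap[i]);
        uint8_t g = CVT8(greencmap[i]);
        uint8_t b = CVT8(bluecmap[i]);
        table[i].key   = ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
        table[i].index = i;
    }
}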

After editing without changing the color space I want to save the image from the pixel buffer. For performance reasons I'm trying to avoid searching for the nearest palette index by minimal Euclidean distance (and there should be no need, because the colors have not changed). So I iterate through the 32-bit pixel buffer, read each color value and look up the palette index in the inverse lookup table to save it into a strip buffer. Unfortunately, the lookup keyed on the pixel's color components does not return a valid palette index. It seems that I use a different conversion of the 16-bit palette to build the lookup keys than TIFFReadRGBAImage uses to represent pixels.
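
The write-back step looks roughly like this (again simplified; the linear search stands in for my actual lookup structure, and TIFFGetR/TIFFGetG/TIFFGetB are the component macros from tiffio.h):

#include <stddef.h>
#include <stdint.h>
#include <tiffio.h>

typedef struct { uint32_t key; int index; } InvEntry; /* as in the sketch above */

static int find_index(const InvEntry *table, int ncolors, uint32_t key)
{
    int i;
    for (i = 0; i < ncolors; i++)
        if (table[i].key == key)
            return table[i].index;
    return -1; /* this is what I keep getting */
}

static void write_indices(const uint32_t *raster, size_t npixels,
                          const InvEntry *table, int ncolors, uint8_t *strip)
{
    size_t p;
    for (p = 0; p < npixels; p++) {
        /* pack the 8-bit components the same way the table keys were packed */
        uint32_t key = ((uint32_t)TIFFGetR(raster[p]) << 16) |
                       ((uint32_t)TIFFGetG(raster[p]) << 8)  |
                        (uint32_t)TIFFGetB(raster[p]);
        strip[p] = (uint8_t)find_index(table, ncolors, key); /* fails to match */
    }
}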

After a short investigation I found that the "translation" of the 16-bit palette is done in tif_getimage.c.

What transformation from 16-bit palette entries to 8-bit palette entries is done when TIFFReadRGBAImage is called?

In tif_getimage.c (libtiff 3.8.2) I can see:

static void
cvtcmap(TIFFRGBAImage* img)
{
    uint16* r = img->redcmap;
    uint16* g = img->greencmap;
    uint16* b = img->bluecmap;
    long i;

    for (i = (1L<<img->bitspersample)-1; i >= 0; i--) {
#define CVT(x)      ((uint16)((x)>>8)) // <---------- !!! LOOK HERE !!!
        r[i] = CVT(r[i]);
        g[i] = CVT(g[i]);
        b[i] = CVT(b[i]);
#undef CVT
    }
}

Does this mean that the palette color components are converted by plain bit shifting when TIFFReadRGBAImage is called? Is this intentional? I've seen discussions on this mailing list concluding that the conversion should not be done by bit shifting because of how it distributes the rounding error. For example, the pal2rgb tool uses the same conversion as mine to translate the palette from 16 bit to 8 bit:

#define    CVT(x)        (((x) * 255) / ((1L<<16)-1))
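
To make the difference concrete, here is a small standalone check (nothing libtiff-specific) that counts how often the two conversions round the same 16-bit value to a different 8-bit value, which is exactly what breaks an exact-match lookup:

#include <stdio.h>

int main(void)
{
    long x, mismatches = 0;
    for (x = 0; x <= 0xFFFF; x++) {
        unsigned shifted = (unsigned)(x >> 8);                       /* tif_getimage.c */
        unsigned scaled  = (unsigned)((x * 255) / ((1L << 16) - 1)); /* pal2rgb */
        if (shifted != scaled) {
            if (mismatches == 0)
                printf("first mismatch at 0x%04lX: >>8 -> %u, *255/65535 -> %u\n",
                       x, shifted, scaled);
            mismatches++;
        }
    }
    printf("%ld of 65536 values convert differently\n", mismatches);
    return 0;
}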

What should I make of this?

Kind regards,

mikk