2001.03.28 14:50 "Bit depth", by Andrew Jarvis

2001.03.29 18:23 "Re: Bit depth", by Andreas R. Kleinert

Max value of 11 bits: 2047 = 0x7FF. If you apply a left shift of 5, you get 65504 = 0xFFE0, while you should obtain the maximum 16 bit value: 0xFFFF.

This is a frequent mistake when converting from 8 to 16 bits: many implementations I've seen simply shift left by 8, while they should multiply by 257.
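For the plain 8 to 16 bit case, a minimal sketch (the variable names are my own):

    in8     = 255;                  // max. 8 bit value
    wrong16 = in8 << 8;             // 0xFF00 - full white comes out slightly too dark
    out16   = in8 * 257;            // 0xFFFF, since 257 = 0x101
    out16   = (in8 << 8) | in8;     // 0xFFFF as well - multiplying by 0x101 just
                                    // replicates the byte into both halves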

For the 11 bit case, one could also do:

    in11  = 2047;                     // max. 11 bit value
    out16 = (in11 << 5) | (in11 >> 6);

This is equal to 2047*32 + 2047/64 and gives 65535 as well.
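If you prefer the mathematically exact rescaling over the cheap bit replication, multiply by the ratio of the two maxima and round (just a sketch, assuming at least 32 bit unsigned intermediate arithmetic):

    in11  = 2047;                                                    // max. 11 bit value
    out16 = (unsigned short)(((unsigned long)in11 * 65535UL + 1023UL) / 2047UL);

For 2047 both forms give 0xFFFF and for 0 both give 0; the shift-and-or form is simply cheaper.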

The shift-and-or trick is not just useful in this particular case, but also when converting e.g. 4 bit per gun colors into 8 bit per gun colors (for example in 12 bit truecolor TIFFs or in old 4 bit colormapped palettes). In this case it would be:

    in4  = 0x0F;            // 4:8 bits, max. 15
    out8 = in4 << 4 | in4;

Thus, in the 12 bit truecolor case, I would do the following:

    in_red4   = (in12 >> 8) & 0x0F;   // upper  4 bits
    in_green4 = (in12 >> 4) & 0x0F;   // middle 4 bits
    in_blue4  =  in12       & 0x0F;   // lower  4 bits

    out_red8   = in_red4   << 4 | in_red4;
    out_green8 = in_green4 << 4 | in_green4;
    out_blue8  = in_blue4  << 4 | in_blue4;

...which gives a linear upscaling of the color components of the original 12 bit "truecolor" image.

After that, scale up to 16 bit per gun, if really necessary. Or merge both into one step, if unavoidable :)
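Merging both steps for a single 4 bit gun could look like this (again just a sketch):

    in4   = 0x0F;                                     // 4 bit gun value, max. 15
    out16 = in4 << 12 | in4 << 8 | in4 << 4 | in4;    // replicate the nibble four times,
                                                      // same as in4 * 0x1111

Since 65535/15 = 4369 = 0x1111, this nibble replication is the exact linear rescaling, not just an approximation.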

If I'm not mistaken, 12 bit TIFFs are actually valid (I don't remember whether they contain triplets of byte nibbles or 16 bit values with only 12 bits used).

But why not just convert the 10/12 bit values and store them as real 16/24 bit ones? Most readers won't be able to handle anything except 1, 4, 8, 15/16, or 24 bits.

Andreas_Kleinert@t-online.de  | http://www.ar-kleinert.de             |
Freelance Consultant & Writer | Software Engineering                  |
 *** PerSuaSiVe SoftWorX ***  | x86 Win/Linux, 68k/PPC Amiga and more |