2013.04.16 09:55 "[Tiff] How 16bits-RGBA image pixels are interlaced ?", by Rémy_Abergel

2013.04.17 18:18 "Re: [Tiff] How 16bits-RGBA image pixels are interlaced ?", by Chris Cox

Yes, using the strip and tile methods is preferable, but means that you will have to manage some of the strip and tile logic outside those calls (which should be easy if you are familiar with file formats or image processing).
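[To make the advice above concrete, here is a rough sketch of strip-based reading with libtiff. The file name "image.tif" is a placeholder, and the sketch assumes a striped (not tiled) 16-bit contiguous RGBA file with minimal error handling; none of these specifics come from the thread.]

```c
#include <stdio.h>
#include <stdint.h>
#include <tiffio.h>

int main(void)
{
    TIFF *tif = TIFFOpen("image.tif", "r");   /* hypothetical input file */
    if (!tif) return 1;

    uint32_t width, height, rows_per_strip;
    uint16_t bits_per_sample, samples_per_pixel, planar_config;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &width);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);
    TIFFGetField(tif, TIFFTAG_BITSPERSAMPLE, &bits_per_sample);
    TIFFGetField(tif, TIFFTAG_SAMPLESPERPIXEL, &samples_per_pixel);
    TIFFGetField(tif, TIFFTAG_PLANARCONFIG, &planar_config);
    TIFFGetFieldDefaulted(tif, TIFFTAG_ROWSPERSTRIP, &rows_per_strip);

    tdata_t buf = _TIFFmalloc(TIFFStripSize(tif));
    for (tstrip_t strip = 0; strip < TIFFNumberOfStrips(tif); strip++) {
        /* TIFFReadEncodedStrip decompresses the strip and byte-swaps
           16-bit samples to native order, so buf can be read as uint16_t. */
        tsize_t n = TIFFReadEncodedStrip(tif, strip, buf, (tsize_t)-1);
        if (n < 0) break;
        uint16_t *samples = (uint16_t *)buf;
        /* With samples_per_pixel == 4 and PLANARCONFIG_CONTIG,
           samples[0..3] are R,G,B,A of the strip's first pixel. */
        (void)samples;
    }
    _TIFFfree(buf);
    TIFFClose(tif);
    return 0;
}
```

The caller has to track which rows each strip covers (strip index times rows_per_strip) — that is the "strip and tile logic outside those calls" mentioned above.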


On 4/17/13 7:48 AM, "Rémy Abergel" <remy.abergel@parisdescartes.fr> wrote:

 Hello Chris,

Thank you for your answer — indeed, I was confusing libtiff with the TIFF format itself.

I have been reading more about the TIFF format, and I now understand your comment.

Until now I have been working with libtiff (http://www.remotesensing.org/libtiff/libtiff.html); I don't want to use a higher-level API because I don't want to introduce dependencies into my code.

It seems that the way to read all kinds of TIFF images is to use the TIFFReadEncodedStrip and TIFFReadEncodedTile functions, because TIFFReadRGBAImage is limited to images with 8 bits per sample.

Does that seem correct to you?

Thank you again for your comments,


On 16/04/2013 19:41, Chris Cox wrote:

TIFF supports interleaved and planar storage of pixel values.

What you are describing is just one high level API for accessing the values in the TIFF file.

Extracting the values from a uint32 would be byte order dependent (varies between CPUs).

The TIFF specification does spell out how 16 bit/channel values are stored in the file format.

But how they are retrieved from LibTIFF depends on which API you are using.


On 4/16/13 2:55 AM, "Rémy Abergel" <remy.abergel@parisdescartes.fr> wrote:

I'm trying to deal with 16-bit per channel RGBA TIFF images in C, and I could not find much information about 16-bit images in the specification.

In the case of an 8-bit per channel RGBA image, I understand that a pixel is stored as a uint32 and can be deinterlaced by splitting the 32 bits into four groups (R, G, B, A) of 8 bits each.

So to deal with 8-bit per channel RGBA images, I do the following (see also the enclosed source code):

  1. I store the image data in a uint32 array (using TIFFReadRGBAImageOriented) that I call data_tiff
  2. I deinterlace pixels using the following commands: (uint8) TIFFGetR(*data_tiff), (uint8) TIFFGetG(*data_tiff), (uint8) TIFFGetB(*data_tiff) & (uint8) TIFFGetA(*data_tiff)

In the case of a 16-bit per channel RGBA image, could you tell me how I can deinterlace the pixels?

If I could retrieve the image data as a uint64 array, then I could do the following:

#define    TIFF16GetR(abgr)    ((abgr) & 0xffff)
#define    TIFF16GetG(abgr)    (((abgr) >> 16) & 0xffff)
#define    TIFF16GetB(abgr)    (((abgr) >> 32) & 0xffff)
#define    TIFF16GetA(abgr)    (((abgr) >> 48) & 0xffff)

  1. I read the image data as a uint64 array
  2. I deinterlace pixels using (uint16) TIFF16GetR(*data_tiff), (uint16) TIFF16GetG(*data_tiff), (uint16) TIFF16GetB(*data_tiff) & (uint16) TIFF16GetA(*data_tiff)

but it seems that the data is not natively stored as a uint64 array, so I wonder how 16-bit per channel images are interlaced into a uint32 pixel array.

I'm also facing difficulties dealing with 16-bit grayscale images in the same way (using TIFFReadRGBAImageOriented to get the image data and trying to convert each pixel into a uint16).

More generally, do you have any documentation about 16-bit grayscale and color images?