2005.09.12 17:12 "[Tiff] extracting 16-bit grayscale from photoshop", by Hoan Chau

2005.09.12 19:12 "Re: [Tiff] extracting 16-bit grayscale from photoshop", by Chris Cox

Inside Photoshop, 16-bit values (as explained in your manual) are 0..32768. In TIFF, 16-bit values are 0..65535.

And your first RGB readout in Photoshop was set to 8-bit, which converts the 16-bit value to an 8-bit value.

0x145e = 5214 [0..65535] = 2607 [0..32768] = 20 [0..255]

3726 [0..32768] = 7452 [0..65535]

29 [0..255] = 7453 [0..65535]

So, not only are you looking at the numbers in different scales, but you also made a mistake in measuring the pixel value inside Photoshop.


At 10:12 AM -0700 9/12/05, Hoan Chau wrote:

I have a 16-bit grayscale image that, when read by TIFFReadScanline, yields 0x14 = 20 for the first byte of the first pixel and 0x5e = 94 for the second byte. Another program called PolyView (used just to check my sanity) also reports 20 for the first byte. However, when I go to view the image in Photoshop, completely different information is displayed: pixel value = 29 = r = g = b. Furthermore, if I view it as a 16-bit value, it is 3,726.

Thus, I don't understand how the data can be interpreted one way in Photoshop and another way by TIFFReadScanline.

Is there a tag/mask that Photoshop uses to convert each two bytes per pixel to its 16-bit value (e.g. 0x145e to 3,726)?