2006.10.03 13:09 "Re: [Tiff] Status of ISO JBIG and CIELAB JPEG support", by Joris Van Damme
I agree, and it's partly a result of what I ran into while trying to add some general support for Lab/JPEG/TIFF files: dealing with the specs, the "state of the art" of libtiff 3.7.3 (I don't know if things have changed since), the tiff tools (mainly tiffcp), Photoshop, and... my own users' requests. ;-)
I think we have some communication problems going on... Let me again attempt to be clear on one thing at least: most of these files are perfectly OK. In fact, it's a surprise to see such good work and eye for detail in an area where the producer of the files needs to put a lot of specs and tools together, and that is still vague to some degree. For example, when subsampling mixes with Lab: some people say that subsampling simply doesn't apply to Lab, but then there's RFC 2301 (IIRC) saying otherwise, and some other people have written ITULab files with subsampling that we need to be able to at least read... So you can't escape reality with an easy decision; this is a mess no matter what.
From what I remember (I did this work in 06/2005), one problem, for example, was relying on the default [2,2] subsampling (when you take the "wide view" of subsampling).
NB: Photoshop handles JPEG/CIELab (with progressive JPEG in TIFF?), and when saving an uncompressed CIELab file it does not write a [1,1] Subsampling tag...
As far as this problem is concerned, the way my decoder handles this is the following:
- if colorspace is YCbCr
    - if Subsampling tag is not present -> assume TIFF 6.0 default [2,2] applies
    - else -> assume Subsampling tag is correct
- else if colorspace is CIE, ICC or ITU Lab
    - if Subsampling tag is not present -> assume writer thinks subsampling doesn't apply, i.e. assume [1,1]
    - else -> assume writer thinks subsampling applies, i.e. that this tag is correct
- else -> assume no subsampling, regardless of tag presence
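The decision list above can be sketched in C as follows. This is an illustration, not libtiff API: the function name and enum are made up for the sketch, though the numeric photometric values match the TIFF/TIFF-FX assignments.

```c
/* Illustrative constants mirroring TIFF PhotometricInterpretation values. */
enum { PHOTO_YCBCR = 6, PHOTO_CIELAB = 8, PHOTO_ICCLAB = 9, PHOTO_ITULAB = 10 };

/* Decide the effective chroma subsampling [h,v] from the photometric
 * interpretation and whether a YCbCrSubSampling tag is present.
 * Hypothetical helper, mirroring the decision list above. */
static void effective_subsampling(int photometric, int tag_present,
                                  int tag_h, int tag_v, int *h, int *v)
{
    if (photometric == PHOTO_YCBCR) {
        if (!tag_present) { *h = 2; *v = 2; }          /* TIFF 6.0 default [2,2] */
        else { *h = tag_h; *v = tag_v; }               /* assume tag is correct */
    } else if (photometric == PHOTO_CIELAB ||
               photometric == PHOTO_ICCLAB ||
               photometric == PHOTO_ITULAB) {
        if (!tag_present) { *h = 1; *v = 1; }          /* writer: doesn't apply */
        else { *h = tag_h; *v = tag_v; }               /* writer: applies */
    } else {
        *h = 1; *v = 1;  /* no subsampling, regardless of tag presence */
    }
}
```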
Next, if the compression scheme can support deriving the subsampled values from the compressed data, as for instance JPEG and OJPEG compression can, I allow this to correct the values. This last bit is expensive - well, not really, but it's expensive overhead compared to simply reading a tag. The good news is that it's not really that necessary: the above scheme without this last bit will yield correct values (as in, the writer intention that corresponds to the actual data in there) 99% of the time. With this last bit in place, I arrive at something like 99.9%. (All percentages are a figure of speech to give you some idea - I'm not able to measure, since it's impossible to guarantee that any set of test files is representative of the whole of the files out there.)
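As a rough illustration of what "deriving the subsampled values from the compressed data" can look like for JPEG: the sampling factors are stated in the SOFn frame header of the codestream. The sketch below scans for the first frame header and reads the luma component's factors. The function name is hypothetical; real code (libtiff's JPEG codec goes through libjpeg) would need far more validation than this.

```c
#include <stddef.h>

/* Scan a JPEG codestream for the first SOFn frame header and read the
 * sampling factors of component 0 (normally luma). Returns 0 on success,
 * -1 if no frame header is found. Assumes a well-formed stream; skips
 * most error handling. Hypothetical helper for illustration only. */
static int jpeg_luma_sampling(const unsigned char *p, size_t n, int *h, int *v)
{
    size_t i = 2;                                   /* skip SOI (FFD8) */
    while (i + 3 < n) {
        if (p[i] != 0xFF) { i++; continue; }
        unsigned char m = p[i + 1];
        if (m == 0xFF) { i++; continue; }           /* fill byte */
        if (m == 0xD8 || (m >= 0xD0 && m <= 0xD7)) { i += 2; continue; }
        size_t len = ((size_t)p[i + 2] << 8) | p[i + 3];
        if (m >= 0xC0 && m <= 0xCF && m != 0xC4 && m != 0xC8 && m != 0xCC) {
            /* SOFn payload: precision(1) height(2) width(2) ncomp(1),
             * then per component: id(1) sampling(1) qtable(1). */
            if (i + 11 >= n) return -1;
            unsigned char s = p[i + 11];            /* sampling byte, comp 0 */
            *h = s >> 4;                            /* horizontal factor */
            *v = s & 0x0F;                          /* vertical factor */
            return 0;
        }
        i += 2 + len;                               /* skip marker segment */
    }
    return -1;
}
```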
I've not yet made a definite choice in the encoder, though. So I for one am hoping someone with authority tries to clean up this confusion in a proper manner. (For example, by allowing encoders to either disregard or apply subsampling to CIE, ICC, and ITU Lab, at will, by making the default for this tag be [1,1] as far as these Photometrics are concerned. It's messy to have the default depend on the value of another tag, but it may very well be the only way out, and it certainly is consistent with the files already out there.)