TIFF and LibTiff Mail List Archive


1997.01.30 23:32 "write subset", by Thomas Loecherbach
1997.01.31 07:30 "Re: write subset", by Kyriakos Georgiou
1997.01.31 07:51 "Re: write subset", by Rainer Wiesenfarth
1997.01.31 13:39 "Re: write subset", by Kyriakos Georgiou
1997.02.03 09:16 "gigamem", by Dr. Klaus Bartz
1997.01.31 16:07 "Re: write subset", by Sam Leffler
1997.01.31 16:13 "Re: write subset", by Gary Burgess
1997.01.31 17:01 "Re: write subset", by Dr. Klaus Bartz

1997.01.31 17:01 "Re: write subset", by Dr. Klaus Bartz

Hello Rainer,

First, for a better understanding of my comments, a short introduction of my relation to this mailing list.

I do not use libtiff for my everyday work; some years ago I wrote my own library for bilevel images.

I use the libtiff utilities as a reference: if someone sends me an image which my library cannot read, I run it through the libtiff utilities and tell the sender "it is not a TIFF file as specified in Revision 6.0" (or I fix my bug).

But in the future, I think, I will use it, because I cannot handle color images. My "working base" is a half-decompressed image line (a sequence of four-byte addresses of color change points, referenced to the line start), which is good for transforming black-and-white images but not usable for color or dithered images. So I will have to change something, but to what? There are other libraries for which I would have to pay some (or more) money, and their support is not the best (or they move my mail into wc00). Then I am left flying blind in that binary.

Why do the members of this list always assume that an image would fit completely in main memory? What if you use images with a minimum size of 70MB (grayscale) or 210MB (RGB) (like we do)?

Not always completely... we have strips and tiles, and we can compress the images.

The limit of TIFF images is given by the 32-bit offsets used. This allows images of at least 2GB in size.

2GB: (un)compressed data and IFDs.

My synthetic "crash image" (bilevel, CCITT T.6) has a virtual bitmap size of 50 gigapixels; my biggest real scan has 396 megapixels (do you have a real black-and-white image bigger than that? Please send it to me). They can be displayed and transformed with 32 MB of RAM at a process size of about 35 MB (HP-UX). On Windows 3.1, via a network path, only about 7 gigapixels are possible. All of this with reading into memory (not the whole image at once, only all tiles of one line), decoding, transformation, encoding, and writing to file.

With 7 gigapixels it is slow, I know.
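The arithmetic behind these numbers can be sketched quickly (my own illustration, not libtiff code):

```c
#include <stdint.h>

/* Back-of-the-envelope sketch of the size limits discussed above.
   TIFF offsets are unsigned 32-bit values, so a file can span 4 GiB;
   readers doing signed 32-bit arithmetic are only safe up to 2 GiB. */

/* Largest file addressable with unsigned 32-bit offsets, in bytes. */
uint64_t tiff_file_limit(void) {
    return (uint64_t)UINT32_MAX + 1;  /* 4 GiB */
}

/* Pixels that fit in `bytes` of uncompressed data at a given bit
   depth.  At 1 bit/pixel, 2 GiB already holds ~17 gigapixels; CCITT
   T.6 compression pushes the virtual bitmap far beyond that. */
uint64_t pixels_in(uint64_t bytes, unsigned bits_per_pixel) {
    return bytes * 8 / bits_per_pixel;
}
```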

It is a bad approach to copy the whole file when applying changes.


Sometimes I dream of better handling:

Direct access to the binary data... no, that is only possible for raw data, isn't it?... a synthetic re-entry table for the CCITT T.6 code... nice... or... overwrite tiles which are shorter after processing, append tiles that grew to the end of the file, and change only the entry in the byte offset table referenced by tag 324 (TileOffsets)... why not...
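The overwrite-or-append idea can be sketched with an in-memory stand-in for the file (my own illustration; a real implementation would patch the TileOffsets/TileByteCounts entries in the IFD on disk):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* In-memory stand-in for a TIFF file: tile data plus the two IFD
   arrays that locate it (tag 324 TileOffsets, tag 325 TileByteCounts). */
typedef struct {
    unsigned char *data;
    uint32_t size;
    uint32_t offsets[16];     /* TileOffsets    (tag 324) */
    uint32_t bytecounts[16];  /* TileByteCounts (tag 325) */
} TiffSim;

/* Replace tile `idx`: overwrite in place if the re-encoded tile got
   shorter (or stayed equal), otherwise append it to the end of the
   file and patch only this tile's offset entry. */
void update_tile(TiffSim *t, int idx, const unsigned char *enc, uint32_t len) {
    if (len <= t->bytecounts[idx]) {
        memcpy(t->data + t->offsets[idx], enc, len); /* fits: overwrite */
    } else {
        t->data = realloc(t->data, t->size + len);   /* grew: append */
        memcpy(t->data + t->size, enc, len);
        t->offsets[idx] = t->size;                   /* patch tag 324 entry */
        t->size += len;
    }
    t->bytecounts[idx] = len;                        /* patch tag 325 entry */
}
```

The shrunken case leaves a dead gap in the file, which is exactly where the free-list idea below comes in.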

A free list... yes, that is it... what about tags 288 and 289 (FreeOffsets and FreeByteCounts)? Why does Rev. 6.0 say nothing about them (while Rev. 5.0 said "no longer recommended")? Binary block copy... yes...
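A minimal first-fit free list over file regions, as tags 288 (FreeOffsets) and 289 (FreeByteCounts) would record them, might look like this (my own sketch; nothing in libtiff maintains these tags):

```c
#include <stdint.h>

/* Free regions of the file, mirroring FreeOffsets (tag 288) and
   FreeByteCounts (tag 289). */
typedef struct { uint32_t off[8]; uint32_t len[8]; int n; } FreeList;

/* Record a region freed when a tile shrank or moved. */
void fl_free(FreeList *fl, uint32_t off, uint32_t len) {
    fl->off[fl->n] = off;
    fl->len[fl->n] = len;
    fl->n++;
}

/* First-fit allocation: reuse a freed region instead of growing the
   file.  Returns the file offset, or 0 if nothing fits (0 is never a
   valid data offset, since the TIFF header occupies the file start). */
uint32_t fl_alloc(FreeList *fl, uint32_t len) {
    for (int i = 0; i < fl->n; i++) {
        if (fl->len[i] >= len) {
            uint32_t off = fl->off[i];
            fl->off[i] += len;       /* shrink the entry...          */
            fl->len[i] -= len;       /* ...keeping any remainder free */
            return off;
        }
    }
    return 0;
}
```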

How to implement it? libtiff has a common interface to internal and external decoders and encoders. Does libtiff know whether random access to the binary data is possible or not? Or should all built-in codecs support this (hello Mr. Gailly :))? And how to handle errors from the differing behavior?

And so on.

If you want to update this kind of images, you have to 'program around' libtiff. In case you use uncompressed images, you can modify images by getting the TIFF fields StripOffsets or TileOffsets and do the reading/writing by hand.

This approach is not very handy, but - as far as I know - the only solution when you want to use libtiff.
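For uncompressed strip data, the "reading/writing by hand" part is pure arithmetic once you have the offset table (in real code you would fetch it with TIFFGetField(tif, TIFFTAG_STRIPOFFSETS, &offsets)). A sketch of the address calculation, assuming 8-bit grayscale with one sample per pixel:

```c
#include <stdint.h>

/* Where does pixel (row, col) of an uncompressed, 8-bit grayscale,
   strip-organized TIFF live in the file?  Given the StripOffsets
   array, the answer is simple arithmetic -- which is exactly why
   this by-hand approach works for uncompressed data only. */
uint64_t pixel_file_offset(const uint32_t *strip_offsets,
                           uint32_t rows_per_strip,
                           uint32_t image_width,
                           uint32_t row, uint32_t col) {
    uint32_t strip        = row / rows_per_strip;
    uint32_t row_in_strip = row % rows_per_strip;
    return (uint64_t)strip_offsets[strip]
         + (uint64_t)row_in_strip * image_width + col;
}
```

With a compressed codec there is no such formula: a changed pixel can change the length of everything after it in the strip.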

OK. That is a way for one expression of this "totally wishy-washy" spec. And then you want to archive the file. Really uncompressed? If yes, why use this complicated file format at all? A little file/page header of your own and then the raw data. If not, you must convert the complete image.

Why not get a strip or tile (or all tiles of a line) from the decoder, modify it, and give the changed strip or tile to an encoder which compresses the data?

A tile or a "tile line" should then fit into main memory, not the complete image.
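The working set of such a "tile line" pass can be estimated with a small helper (my own sketch; the geometry values in the example are assumptions, not from the images above):

```c
#include <stdint.h>

/* Peak buffer needed to process a tiled image one tile row at a time
   (decode, transform, re-encode), instead of holding the whole
   raster in memory. */
uint64_t tile_row_bytes(uint64_t image_width, uint32_t tile_width,
                        uint32_t tile_height, uint32_t bytes_per_pixel) {
    /* number of tiles across one row, rounding the last partial tile up */
    uint64_t tiles_across = (image_width + tile_width - 1) / tile_width;
    return tiles_across * (uint64_t)tile_width * tile_height * bytes_per_pixel;
}
```

Even for an absurdly wide image the row buffer stays bounded by the width times the tile height, independent of the image height.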

BTW, I would be very happy if Sam would introduce random access to libtiff. But I also know that this is not that easy (simply think of compressed images).

I would be too, and I am very interested in the code. For now I am happy that there is an alternative to LZW (where is the draft of Revision 7.0 which defines zip as public?). And I would be very happy if the patent problem of JBIG would evaporate (nice for bilevel and 4-bit grayscale, but hard to implement).

Greetings Klaus

Dr. Klaus Bartz                      
COI GmbH / Abt. KOU3    Industriestr. 1-3    D-91072 Herzogenaurach
Tel: +49 (9132) 82-3433                        Fax: +49-9132-824959