1993.08.17 18:19 "Re: byte swapping 16- and 32-bit data", by Sam Leffler
I believe that I'm the one that brought up the problem.
Yes Dan, but I was trying to allow you to remain anonymous :-).
I think the reason that this has not come up before is that there are currently very few people who do both of:
- use bitspersample greater than 8
- transfer files between big-endian (sgi, mac, sun) and little-endian (pc, dec) machines.
I predict that there will be many more people doing both of these in the future.
Yes, what with DEC Alpha.
Our products ship on macs, pcs and 6 different workstations.
We have tried the above and found that it is a problem.
As Sam stated, the library currently handles this problem transparently for uncompressed data, but not for compressed data.
This is clearly not right; it is inconsistent.
What Sam is suggesting is removing the byte-order independence from the uncompressed code.
I think this is the wrong fix.
Adding byte-order independence to all compression modes incurs NO extra overhead in the normal case. The extra expense of byte swapping is only paid when the image file has been transferred to an opposite byte-order machine, such as mac to pc.
I see little problem in asking application writers who want to deal with >8-bit data to wrap their calls to read data in a short byte-swapping check.
For one thing, 90% of applications will be developed and tested on a single platform, and their authors will get it wrong.
We've got dozens of programs that read tiff files. 99% of the time, they read files that were written on the same platform as they are being read on.
I don't want to spread machine-dependent code through all of those applications to handle the last 1%. It makes much more sense, and is much more "orthogonal", to have the library handle the data the same way it handles the tags: in a machine-independent way.
I would vote for making all the compression routines handle >8 bits per sample data the way that the uncompressed routines currently do.
I don't really have the time to make all the compression routines do the byte swapping efficiently. If I put byte swapping in the library, it will go after the decoding is done, which will mean an extra pass over the data. I can optimize this extra pass by rolling the byte swap into a copy operation in certain cases, but not all. Many consumers of the data will then pay the expense of the extra pass when they could have rolled it into some other operation, or perhaps avoided it entirely (Craig Hockenberry pointed out that the X window system, for example, has protocol support for specifying the byte ordering of data, which could be used to completely avoid any byte swapping when going to the display).
I've gone back and forth many times about how much support to put in the library for byte swapping, color space conversion, bit packing and unpacking, etc. My decisions in the past have always been driven by a desire to make a library that developers would want to use and not avoid. There is a lot of functionality that is not in the library, but which belongs in a higher-level library layered on top. The TIFFReadRGBAImage function is included in the library not because I expect people to use it but because I expect them to rip it out and modify it to create such a library.
I understand the argument about folks not getting this right because they don't test cross-platform portability. I'm very concerned about that--it's one of the main reasons that I wrote the library in the first place.