- 2004.10.01 13:34 "Re: [Tiff] quad-tile", by Frank Warmerdam
- 2004.10.01 13:49 "Re: [Tiff] quad-tile", by Joris Van Damme
- 2004.10.01 13:57 "Re: [Tiff] quad-tile", by Frank Warmerdam
- 2004.10.01 14:17 "Re: [Tiff] quad-tile", by Joris Van Damme
- 2004.10.01 14:24 "Re: [Tiff] quad-tile", by Frank Warmerdam
- 2004.10.01 14:56 "Re: [Tiff] quad-tile", by Bob Friesenhahn
- 2004.10.02 14:16 "Re: [Tiff] quad-tile", by Andrey Kiselev
- 2004.10.03 16:01 "Re: [Tiff] quad-tile", by Frank Warmerdam
2004.10.02 15:37 "Re: [Tiff] quad-tile", by Andrey Kiselev
Hi Joris,
The test suite I'm working on right now doesn't use the external image set. All test images are generated by the test cases from internal data arrays and compared against those arrays after the files are read back. Any help will be greatly appreciated.
I'm not completely sure I understand. Do you mean a sort of closed-circuit testing, like this:
- for all encoding schemes/parameters:
  - generate a test image
  - encode it (to a temp file) with that encoding scheme/parameters
  - decode it (from the temp file)
  - compare it to the original test image
  - if they match, the library is OK for this encoding scheme/parameters
Exactly, except for one thing: my test code is very basic for now, and the word "all" is not yet applicable. :-) But even such a small test suite has already helped me find several bugs/oddities in the library.
I guess you probably are, because I can see how that could be an extremely nice automated test tool in the automake schemes you're concerned with. Am I right?
You are absolutely right: there is built-in support for such tests in the autotools environment. We can run the tests after every change to the library code to ensure that nothing is broken.
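The built-in support mentioned here is automake's `TESTS` mechanism. A minimal sketch of what such a `Makefile.am` fragment could look like (the program and file names are illustrative, not libtiff's actual layout):

```makefile
# Sketch of automake's built-in test support.
# check_PROGRAMS are built only for 'make check'; everything listed
# in TESTS is then run, and a non-zero exit status fails the check.
# Names here are illustrative, not libtiff's own.
check_PROGRAMS = tiff_roundtrip
tiff_roundtrip_SOURCES = tiff_roundtrip.c
TESTS = $(check_PROGRAMS)
```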
If so, I think we might best be able to help each other by being somewhat complementary. The thing I am most concerned with is that very first line in this scheme: the enumeration of all the issues/schemes/parameters/whatever that set TIFF files apart, and that make some files work with some decoders but not others. I'd like to enumerate them, think through all the issues, document them, link them with the tag pages and vice versa, etc.
When I say "enumeration", I am, as always, thinking of raw data in some proprietary scheme, managed by code written on the fly, from which my code can assemble and maintain the HTML presentation of that data. Perhaps we ought to look at ways to integrate our schemes, having the data drive your closed-circuit loop. That would yield:
- very extensive closed-circuit libtiff testing
- test image suite generation by libtiff
- the HTML index pages into this test image suite, cross-referenced with the tag pages etc.
Hmm... I don't think I understand in detail what you mean. Do you mean integrating the internal libtiff tests with the external data sets?
Two minor complications I foresee:
- Closed-circuit testing is a tiny bit problematic when lossy storage is involved. I'm not just thinking of JPEG; even mere YCbCr subsampled images, whatever the compression scheme, are lossy. This small problem can perhaps be solved by having the data include tolerance levels for the closed-circuit comparison?
Yes, that is a problem I haven't thought about much. A lot of code has to be written before this problem becomes relevant, and I'm sure a good idea will come to us at some point.
- I'm also planning stuff that LibTiff can't handle, most notably different bit depths and different data types per channel. This small problem can perhaps be solved by having the data include a "skip in LibTiff generation/closed-circuit testing" flag?
Well, that is not stuff for the test suite; it should go to Bugzilla. ;-)
Andrey
Andrey V. Kiselev
Home phone: +7 812 5274898 ICQ# 26871517