2018.04.09 07:29 "[Tiff] fuzzing libtiff with google's oss-fuzz", by Paul Kehrer

2018.04.09 13:44 "Re: [Tiff] fuzzing libtiff with google's oss-fuzz", by Bob Friesenhahn

Thanks for the link, that context is very helpful.

As proposed in that thread, allowing definition of a runtime global limit (the path tools like ImageMagick and GraphicsMagick have chosen) would be a reasonable way forward on the memory issue. This would allow consumers to define their maximum memory consumption a priori, and attempted allocations in excess of that limit would fail in a defined manner. I would also be okay with having FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION define a "sane" memory allocation limit solely for the purposes of fuzzing (oss-fuzz constrains memory consumption to 2GiB).
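A minimal sketch of what such a runtime global limit could look like. All of the names here are hypothetical for illustration; none of this is existing libtiff API, and real code would also have to account for frees and reallocations:

```c
/* Hypothetical sketch of a runtime global allocation limit.
 * Function and variable names are illustrative only. */
#include <stdlib.h>
#include <stdint.h>

static uint64_t alloc_limit = 0;   /* 0 means "no limit configured" */
static uint64_t alloc_in_use = 0;  /* running total of live allocations */

/* The consumer sets this once, a priori (e.g. 2 GiB under oss-fuzz). */
void set_memory_limit(uint64_t bytes)
{
    alloc_limit = bytes;
}

/* Allocation wrapper: fail in a defined way instead of letting the
 * process be killed by the OS or the fuzzer's OOM detector. */
void *limited_malloc(size_t size)
{
    if (alloc_limit != 0 && alloc_in_use + size > alloc_limit)
        return NULL;               /* defined error path */
    void *p = malloc(size);
    if (p != NULL)
        alloc_in_use += size;
    return p;
}
```

The caller then sees an ordinary allocation failure and can report it through the library's normal error path.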

All that said, the current fuzzer works around most OOM scenarios so it's not a critical item for integration at this time.

While Google offers a reward to oss-fuzz integrators, the reward does not necessarily go toward the people doing the work to solve problems.

Oss-fuzz is very good at finding problems. In just two months, oss-fuzz created 148 bug reports against GraphicsMagick (a non-funded project developed by volunteers with unrelated full-time jobs at high-pressure startups) and we have fixed 146 of those issues.

Being a maintainer on the receiving end of oss-fuzz reports means being severely hammered, with the resulting loss of most of one's personal time, since oss-fuzz allows only 90 days before issues are made public. Project maintainers are definitely not in the driver's seat when it comes to oss-fuzz.

While GraphicsMagick does impose some resource limits, they are not very strict. Strictly nailing down resource limits would make the software slower, since it would require accounting for every memory allocation and reallocation. Our approach has been to attempt to validate each large memory allocation against the data actually available. Libtiff would need to do the same.
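As one hypothetical illustration of validating an allocation against the data available: a header field claiming a compressed strip needs more input bytes than the file even contains is bogus, and can be rejected before any large allocation is attempted. The function name and shape here are invented for the example:

```c
/* Hypothetical sketch: sanity-check a header-driven allocation against
 * the size of the file itself before attempting it. Not real libtiff API. */
#include <stdlib.h>
#include <stdint.h>

void *alloc_for_strip(uint64_t claimed_bytes, uint64_t file_size)
{
    /* A compressed strip cannot legitimately claim to occupy more
     * input bytes than the whole file contains; reject an obviously
     * corrupt header instead of attempting a huge allocation. */
    if (claimed_bytes > file_size)
        return NULL;
    return malloc((size_t)claimed_bytes);
}
```

Checks like this cost almost nothing per large allocation, unlike accounting for every small allocation.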

"And yes, these are the customers having 20 GB TIFF files (has happened) and 10 GB strips (will happen soon). There is no "sane" limit."

Large TIFF files must remain supportable without requiring changes to existing client software. Oss-fuzz imposes an arbitrary 2GB memory limit (which includes memory used by the fuzzing library additions) and a 15 second maximum execution time (which includes the additional time required by ASAN/UBSAN). It is very easy to create TIFF files which will trigger oss-fuzz limits.

The most annoying reports from oss-fuzz are reports about use of uninitialized memory, which in our experience usually involve pixel data but are automatically reported by oss-fuzz as severe security issues. It can be very challenging to avoid uninitialized memory (given strips, tiles, and compression ratios) without incurring the overhead of unnecessary memset()s.

Up to now, libtiff has been fuzzed while under the watchful eye of responsible client software like GDAL, which helps defend it against wayward files.

Recently, Even Rouault has been making most of the libtiff security fixes. The rest of us have not been very active.

If sufficient libtiff maintainer time/energy is not immediately available, then enrolling in oss-fuzz will result in a great many issues being reported against libtiff and exposed to public view (along with the files that trigger them) before they are fixed. This would be harmful to users. There has to be enough volunteer maintainer time/energy to get issues resolved and into a libtiff release within 90 days. In fact, once a problem is fixed in the Git repository, the issue is made public after just 30 days, so releases need to be frequent enough that a fixed release exists before the issues become public.

Oss-fuzz is a good thing but it is wise to know what one is getting into.

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/