2018.04.09 07:29 "[Tiff] fuzzing libtiff with google's oss-fuzz", by Paul Kehrer

2018.04.09 08:19 "Re: [Tiff] fuzzing libtiff with google's oss-fuzz", by Paul Kehrer

Thanks for the link, that context is very helpful.

As proposed in that thread, supporting a runtime global allocation limit (the path tools like ImageMagick and GraphicsMagick have chosen) would be a reasonable way forward on the memory issue. Consumers could define their maximum memory consumption a priori, and attempted allocations in excess of that limit would fail in a defined manner. I would also be okay with having FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION define a "sane" memory allocation limit solely for the purposes of fuzzing (oss-fuzz constrains memory consumption to 2 GiB).
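For illustration, here is a minimal sketch of what such a consumer-defined cap could look like: a checked allocation wrapper with a runtime limit, falling back to a fuzzing-only default when FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION is defined. The names (`tiff_set_max_alloc`, `tiff_checked_malloc`) are hypothetical and not part of libtiff's API; the 2 GiB default mirrors the oss-fuzz constraint mentioned above.

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical global cap; 0 means "no limit" (current libtiff behavior). */
static size_t tiff_max_alloc = 0;

#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
/* Under a fuzzing build, default to oss-fuzz's 2 GiB budget. */
#define TIFF_DEFAULT_FUZZ_CAP ((size_t)2 << 30)
#endif

/* Consumers would call this once at startup to set their budget. */
void tiff_set_max_alloc(size_t limit)
{
    tiff_max_alloc = limit;
}

/* Drop-in replacement for raw malloc calls: fail in a defined way
 * (return NULL) instead of attempting a huge allocation. */
void *tiff_checked_malloc(size_t size)
{
    size_t cap = tiff_max_alloc;
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
    if (cap == 0)
        cap = TIFF_DEFAULT_FUZZ_CAP;
#endif
    if (cap != 0 && size > cap)
        return NULL;   /* caller sees an ordinary allocation failure */
    return malloc(size);
}
```

Internal allocation paths would then route through the checked wrapper, so oversized requests surface as normal allocation failures that existing error handling already copes with.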

All that said, the current fuzzer works around most OOM scenarios, so this is not a critical item for integration at this time.

-Paul

On April 9, 2018 at 4:06:16 PM, Nicolas RUFF (nicolas.ruff@gmail.com) wrote:

libtiff would definitely benefit from an oss-fuzz integration. Please also note that Google offers up to $20,000 for successful integrations: https://opensource.googleblog.com/2017/05/oss-fuzz-five-months-later-and.html

I worked on this last year, and hit the same memory allocation issue. You can read the full thread here: https://www.asmail.be/msg0055569571.html

I don't think we reached a definitive conclusion at that time. The thread ended with those famous last words:

"And yes, these are the customers having 20 GB TIFF files (has happened) and 10 GB strips (will happen soon). There is no "sane" limit."

I kind of disagree (libtiff would crash on 32-bit systems while trying to malloc(10GB)), but I ended up fuzzing libtiff privately.

Retrospectively, I think it might make sense to leverage FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION to fail gracefully on excessively large allocations, only while being fuzzed. https://llvm.org/docs/LibFuzzer.html#fuzzer-friendly-build-mode

Let me know if I can be of any help.

Regards,
Nicolas RUFF

2018-04-09 9:29 GMT+02:00 Paul Kehrer <paul.l.kehrer@gmail.com>:

I've been experimenting with fuzzing libtiff recently and was wondering if there is interest in integrating libtiff into Google's OSS-Fuzz (https://github.com/google/oss-fuzz)? OSS-Fuzz is a public infrastructure for continuous fuzzing. The system automatically fuzzes the targets you define, aggregates duplicate bug reports, and files issues with minimal reproducers and stack traces so that project developers can easily verify and fix them. libtiff appears to be getting some fuzzing by proxy via other projects, but I'd be happy to manage the integration process and submit PRs so that you can receive reports directly and get better coverage.

The primary challenge at the moment is controlling libtiff's memory allocations (it looks like several previous OSS-Fuzz-related fixes have attempted to address this in various code paths?). It is very easy to craft files that trigger extremely large mallocs, and I've been unable to find a library-level way to constrain that. However, I've modified my existing fuzzer to throw out most test cases that would trigger OOMs for now (at least all the ones I was able to generate locally). Ideally, in the longer term, those workarounds would be removed as libtiff gets better at avoiding large allocations on invalid files.

Additionally, at the moment the fuzzer I wrote only uses TIFFReadRGBAImage -- do people have suggestions for other functions that might be worthwhile to fuzz? You can see the current diff adding libtiff to the oss-fuzz repo (this includes building libz and libjpeg-turbo as well as compiling the fuzzer) here:

> https://github.com/google/oss-fuzz/compare/master...reaperhulk:libtiff?expand=1

If there's interest I just need to know who to put as the primary project email contact and any other people you want to grant access. Those email addresses will need to be associated with a Google account so that you can log into oss-fuzz.com to view the reports.

-Paul Kehrer (reaperhulk)