
Glossary

Collection of terms and definitions commonly encountered in fuzzing.

Fuzzer/Fuzzing Engine

A piece of software that generates inputs to feed into the system under test (SUT) via a Fuzz Test. A coverage-guided fuzzer additionally uses instrumentation to gather runtime signals, such as code coverage, during execution and steers test case generation toward maximizing code coverage.

Fuzz Test/Fuzz Target

A test harness that invokes the system under test with fuzzer-generated input. Usually, it's a function that takes fuzzer-generated inputs as arguments (e.g., a byte buffer and corresponding size for libFuzzer targets) and calls functions from the system under test.
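As an illustration, a minimal libFuzzer-style fuzz test might look like the sketch below. The function under test, `ParseGreeting`, is a hypothetical stand-in for the system under test:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical function from the system under test.
bool ParseGreeting(const std::string &input) {
  // Accepts inputs that start with "hello".
  return input.rfind("hello", 0) == 0;
}

// The fuzz test: the fuzzing engine calls this entry point with
// generated inputs (a byte buffer and its size).
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  std::string input(reinterpret_cast<const char *>(data), size);
  ParseGreeting(input);  // feed the fuzzer-generated input to the SUT
  return 0;              // non-zero return values are reserved by libFuzzer
}
```

When built with `clang++ -fsanitize=fuzzer`, libFuzzer supplies the `main` function and repeatedly invokes this entry point.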

Fuzzing

Fuzzing is the process of using a fuzzing engine to generate arbitrary inputs and feed them to the system under test via the Fuzz Test. The behavior of the system under test is observed for each input to detect unexpected or undesired results such as crashes.
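The core loop can be sketched as follows. This is a deliberately naive illustration, not how a real engine works: `SutHandles` is a hypothetical wrapper around the system under test, and real engines add coverage feedback, corpus management, and crash handling:

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Hypothetical wrapper around the SUT: returns false on undesired behavior.
bool SutHandles(const std::vector<uint8_t> &input);

// Naive fuzzing loop: mutate a random byte, run the input against the
// system under test, and record every input that provokes a failure.
std::vector<std::vector<uint8_t>> NaiveFuzz(std::vector<uint8_t> input,
                                            int iterations, unsigned seed) {
  std::mt19937 rng(seed);
  std::vector<std::vector<uint8_t>> failures;
  for (int i = 0; i < iterations; ++i) {
    // Flip random bits in one random byte of the input.
    input[rng() % input.size()] ^= static_cast<uint8_t>(rng());
    if (!SutHandles(input)) failures.push_back(input);
  }
  return failures;
}

// Toy SUT: misbehaves whenever the first byte is 0xFF.
bool SutHandles(const std::vector<uint8_t> &input) {
  return input.empty() || input[0] != 0xFF;
}
```

A coverage-guided engine replaces the blind mutation above with feedback-driven mutation of interesting corpus entries.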

Corpus

A saved set of inputs that can be used by the Fuzz Test to trigger execution of code paths in the system under test.

Seed corpus

A small corpus that is usually provided by the user. A seed corpus can help the fuzzing engine reach new code paths in the system under test more quickly. The fuzzing engine can then mutate these initial seeds to discover further code paths.

See https://llvm.org/docs/LibFuzzer.html#corpus.

Generated corpus

The generated corpus is the set of inputs that triggered unique code paths during fuzzing. The fuzzing engine saves this corpus and reuses it in subsequent fuzzing runs.

Replayer

A replayer runs a Fuzz Test using only inputs saved from previously triggered findings. This differs from a normal fuzzing run in that the fuzzing engine neither generates new inputs nor draws inputs from the regular corpus. This is useful for debugging Fuzz Tests and for regression testing of previously identified findings.
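A replayer can be sketched as a small driver that feeds each saved finding file to the fuzz test entry point. This is only an illustrative sketch under the assumption of a libFuzzer-style entry point, not the actual replayer shipped with any tool:

```cpp
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Fuzz test entry point (stand-in for the target being replayed).
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size);

// Run the fuzz test once for each saved finding file. No new inputs are
// generated and the regular corpus is not consulted.
int ReplayFindings(const std::vector<std::string> &finding_files) {
  int executed = 0;
  for (const std::string &path : finding_files) {
    std::ifstream in(path, std::ios::binary);
    if (!in) continue;  // skip unreadable files
    std::vector<uint8_t> bytes((std::istreambuf_iterator<char>(in)),
                               std::istreambuf_iterator<char>());
    LLVMFuzzerTestOneInput(bytes.data(), bytes.size());
    ++executed;
  }
  return executed;  // number of findings replayed
}

// Minimal stand-in fuzz test so the sketch is self-contained.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *, size_t) { return 0; }
```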

Sanitizer

Sanitizers are tools that add instrumentation to the system under test at compile time and link a runtime library into it. This instrumentation acts as a run-time bug detector for issues such as buffer overflows, signed integer overflows, and uninitialized memory reads. Please refer to the Google documentation on sanitizers for more information.
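For example, the function below (an illustrative, hypothetical example of the bug class) copies its input into a fixed-size stack buffer without a bounds check. Compiled normally, an oversized input silently corrupts memory; compiled with `-fsanitize=address`, AddressSanitizer aborts with a stack-buffer-overflow report at the moment of the bad write:

```cpp
#include <cstddef>
#include <cstring>
#include <string>

// Illustrative bug: no bounds check before copying into `buf`.
// With input.size() > 8 this overflows the stack buffer; under
// AddressSanitizer the overflow is reported at run time.
size_t CopyUnchecked(const std::string &input) {
  char buf[8];
  std::memcpy(buf, input.data(), input.size());  // no bounds check!
  (void)buf;
  return input.size();  // bytes copied
}
```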

Finding

A Finding describes a bug or vulnerability found by the fuzzer. CI Fuzz and CI Sense provide details on the bug type and include the input that triggered the bug as well as the resulting stack trace.

Coverage

Coverage describes the code reached (and therefore executed) during an application/fuzzing run. It can be measured in different categories, for example:

  • Lines
  • Functions
  • Branches
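The difference between these categories can be seen on a small, purely illustrative function: a single test input can cover every line while still leaving a branch unexercised:

```cpp
// Calling AbsValue(-3) alone executes every line (the condition and the
// true-branch assignment share one line), so line and function coverage
// reach 100% -- yet the false branch of `v < 0` is never taken, so
// branch coverage is only 50%.
int AbsValue(int v) {
  int r = v;
  if (v < 0) r = -v;  // condition and true-branch statement on one line
  return r;
}
```

This is why branch coverage is generally a stricter metric than line coverage.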