A collection of terms and definitions commonly encountered in fuzzing.
Fuzzer (Fuzzing Engine)
A piece of software that generates inputs to feed into the system under test (SUT) via the fuzz tests. A coverage-guided fuzzer additionally leverages instrumentation to gather rich runtime signals, including code coverage, during execution and uses them to optimize test case generation toward maximizing code coverage.
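The coverage-guided feedback loop can be sketched as follows. Everything here is a simplified stand-in, not a real engine's API: `run_instrumented` fakes a coverage signal from the input's size, and `mutate` only flips bits or appends bytes.

```cpp
#include <cstdint>
#include <cstdlib>
#include <set>
#include <vector>

using Input = std::vector<uint8_t>;

// Stand-in for executing the instrumented SUT: returns a coarse
// "coverage signal" derived from the input (a real engine tracks
// edge counters inserted by compile-time instrumentation).
size_t run_instrumented(const Input &in) { return in.size() % 8; }

// Mutate a copy of an existing input: append a random byte or flip a bit.
Input mutate(Input in) {
  if (std::rand() % 2)
    in.push_back(static_cast<uint8_t>(std::rand()));
  else if (!in.empty())
    in[std::rand() % in.size()] ^= 1u << (std::rand() % 8);
  return in;
}

// Core loop: keep only mutants that trigger coverage not seen before.
size_t fuzz_loop(int iterations) {
  std::vector<Input> corpus = {{'a'}};
  std::set<size_t> seen = {run_instrumented(corpus[0])};
  for (int i = 0; i < iterations; ++i) {
    Input candidate = mutate(corpus[std::rand() % corpus.size()]);
    if (seen.insert(run_instrumented(candidate)).second)
      corpus.push_back(candidate);  // new coverage: add to corpus
  }
  return corpus.size();
}
```

The key design point this illustrates: inputs are only retained when they produce a runtime signal the engine hasn't observed yet, which is what makes the search "coverage-guided" rather than purely random.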
Fuzz Test (Fuzz Target)
A test harness that invokes the system under test with fuzzer-generated input. Usually, it's a function that takes fuzzer-generated inputs as arguments (e.g., a byte buffer and corresponding size for libFuzzer targets) and calls functions from the system under test.
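For illustration, a minimal libFuzzer-style fuzz test might look like this; `parse_header` is a hypothetical function under test, not part of any real codebase:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical function from the system under test.
bool parse_header(const uint8_t *data, size_t size) {
  return size >= 4 && std::memcmp(data, "FUZZ", 4) == 0;
}

// libFuzzer entry point: invoked once per fuzzer-generated input.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_header(data, size);
  return 0;  // values other than 0 and -1 are reserved by libFuzzer
}
```

Built with `clang++ -fsanitize=fuzzer`, the resulting binary contains the engine's main loop, which repeatedly calls this function with generated byte buffers.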
Fuzzing
Fuzzing is the act of executing the fuzz test against the system under test via the fuzzing engine. More broadly, fuzzing is the process of using a fuzzing engine to generate arbitrary inputs that are fed to the system under test via the fuzz test. The behavior of the system under test is observed for each input to detect unexpected or undesired results such as crashes.
Corpus
A saved set of inputs that can be used by the fuzz test to trigger execution of code paths in the system under test.
Seed Corpus
A small corpus that's usually provided by the user. A seed corpus helps the fuzzing engine reach new code paths in the system under test more quickly. The fuzzing engine can then mutate these initial seeds to discover further code paths. See https://llvm.org/docs/LibFuzzer.html#corpus.
Generated Corpus
The generated corpus is the set of inputs that triggered unique code paths during fuzzing. The fuzzing engine saves this corpus and reuses it in subsequent fuzzing runs.
Replayer
A replayer runs a fuzz target using only inputs saved from previously triggered findings. This differs from a normal fuzzing run: the fuzzing engine is not used to generate novel inputs, and the standard corpus is not used as a source of inputs. This is useful for debugging fuzz tests and for regression testing of previously identified findings.
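Conceptually, a replayer is just a loop over saved finding files. The sketch below assumes a trivial stand-in fuzz target and hypothetical file names; no engine or corpus is involved:

```cpp
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Trivial stand-in for the real fuzz target.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  (void)data;
  (void)size;
  return 0;
}

// Feed each saved finding to the fuzz target exactly once, with no
// mutation and no corpus: a crash here means the bug has regressed.
int replay(const std::vector<std::string> &finding_files) {
  for (const auto &path : finding_files) {
    std::ifstream in(path, std::ios::binary);
    std::vector<uint8_t> input((std::istreambuf_iterator<char>(in)),
                               std::istreambuf_iterator<char>());
    LLVMFuzzerTestOneInput(input.data(), input.size());
  }
  return 0;
}
```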
Sanitizers
Sanitizers are tools that add instrumentation to the system under test by being linked to the SUT at compile time. This instrumentation acts as a run-time bug detector for issues such as buffer overflows, signed integer overflows, and uninitialized memory reads. Please refer to the Google documentation on sanitizers for more information.
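As an example of the class of bug a sanitizer turns into a visible crash, consider this hypothetical off-by-one read. Compiled with AddressSanitizer (`-fsanitize=address`), the out-of-bounds access is reported immediately instead of passing silently:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical buggy function: the loop reads one byte past the end of
// the buffer (i <= size instead of i < size). Without a sanitizer this
// often goes unnoticed; under AddressSanitizer it aborts with a
// heap-buffer-overflow report as soon as a fuzzer input reaches it.
int checksum_buggy(const uint8_t *data, size_t size) {
  int sum = 0;
  for (size_t i = 0; i <= size; ++i)  // BUG: off-by-one
    sum += data[i];
  return sum;
}

// Corrected version for comparison.
int checksum_fixed(const uint8_t *data, size_t size) {
  int sum = 0;
  for (size_t i = 0; i < size; ++i)
    sum += data[i];
  return sum;
}
```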
Finding
A finding describes a bug or vulnerability found by the fuzzer. CIFuzz and CI Sense provide details on the bug type and include the input that triggered the bug as well as the generated stack trace.
Metrics
During a fuzzing run, cifuzz provides various metrics to inform the user of fuzzing progress and coverage.
Executions per second
The average number of times per second the fuzz target has been executed by the fuzzing engine.
Paths
While the fuzzer feeds generated inputs to a fuzz test, it progressively explores the code under test. This is alternatively known as "edge coverage." This metric captures the different kinds of progress the fuzzer can make, such as:
- reaching new lines of code
- executing a loop body a different number of times
- executing certain lines of code a different number of times (e.g., recursion)
While an increase in this number can indicate that the fuzzer is still making progress, the metric isn't meaningful to compare across different fuzz tests, because the edges reachable by individual fuzz tests can vary widely from test to test.
For the libFuzzer engine, the "paths" metric coincides with the engine's "ft" (feature) count.
Last new path
The time that has passed since the last increase of the "paths" metric.
If this indicator keeps increasing, it's likely that the fuzzer isn't making progress anymore. In this case, use cifuzz coverage to get an idea of where the fuzzer got stuck.
Coverage
Coverage describes the code reached (and therefore executed) during an application or fuzzing run. It can be measured in different categories, for example: