Once fuzzing has started, the dashboard provides live monitoring of the run:
Each fuzzing run is assigned a random name (amazing_clarke in this example) to make it easier to associate findings with specific runs.
From left to right, the dashboard displays three different metrics.
The leftmost metric shows the total coverage achieved by executing the current corpus, expressed as the number of code blocks, edges, and additional signals reached. Our fuzz engines use several metrics to evaluate code coverage: edge coverage, edge counters, value profiles, indirect caller/callee pairs, equal bytes, and more.
The graph in the middle displays performance over time. Fuzzers start out fast, with many executions per second. As the random test inputs grow larger over time, each execution takes longer, which lowers throughput for long-running fuzz tests. A sudden drop in performance can also indicate bugs such as endless loops or memory exhaustion.