TY - GEN
T1 - Snappy: Efficient Fuzzing with Adaptive and Mutable Snapshots
AU - Geretto, E.
AU - Giuffrida, C.
AU - Bos, H.
AU - Van der Kouwe, E.
PY - 2022/12
Y1 - 2022/12
AB - Modern coverage-oriented fuzzers play a crucial role in vulnerability finding. While much research focuses on improving the core fuzzing techniques, some fundamental speed bottlenecks, such as the redundant computations incurred by re-executing the target for every input, remain. Prior solutions mitigate the impact of redundant computations by instead fuzzing a program snapshot, such as the one placed by a fork server at the program entry point or generalizations for annotated APIs, drivers, networked servers, etc. Such snapshots are static and, as such, cannot adapt to the characteristics of the target and the input, missing opportunities to further reduce redundancy and improve fuzzing speed. In this paper, we present Snappy, a new approach to speed up fuzzing by aggressively pruning redundant computations with adaptive and mutable snapshots. The key ideas are to: (i) push the snapshot as deep in the target execution as possible and also end its execution as early as possible, according to how the target processes the relevant input data (adaptive placement); (ii) for each identified placement, cache snapshots across different inputs by patching the snapshot just-in-time with the relevant input data (mutable restore). We propose a generic design applicable to both branch-agnostic and branch-guided input mutation operators and demonstrate Snappy on top of Angora (supporting both classes of operators). Our evaluation shows that, while general, Snappy scores gains even compared to a fork server with hand-optimized static placement such as in FuzzBench, for instance obtaining up to ≈ 1.8x speedups across benchmarks.
UR - http://www.scopus.com/inward/record.url?scp=85144067057&partnerID=8YFLogxK
U2 - 10.1145/3564625.3564639
DO - 10.1145/3564625.3564639
M3 - Conference contribution
T3 - ACM International Conference Proceeding Series
SP - 375
EP - 387
BT - ACSAC '22: Proceedings of the 38th Annual Computer Security Applications Conference
PB - Association for Computing Machinery
T2 - 38th Annual Computer Security Applications Conference, ACSAC 2022
Y2 - 5 December 2022 through 9 December 2022
ER -