Semantic Fuzzing with Zest
Programs expecting structured inputs often consist of both a syntactic
analysis stage, which parses raw input, and a semantic analysis stage, which
conducts checks on the parsed input and executes the core logic of the program.
Generator-based testing tools in the lineage of QuickCheck are a promising way
to generate random syntactically valid test inputs for these programs. We
present Zest, a technique which automatically guides QuickCheck-like
random-input generators to better explore the semantic analysis stage of test
programs. Zest converts random-input generators into deterministic parametric
generators. We present the key insight that mutations in the untyped parameter
domain map to structural mutations in the input domain. Zest leverages program
feedback in the form of code coverage and input validity to perform
feedback-directed parameter search. We evaluate Zest against AFL and QuickCheck
on five Java programs: Maven, Ant, BCEL, Closure, and Rhino. Zest covers
1.03x-2.81x as many branches within the benchmarks' semantic analysis stages as
baseline techniques. Further, we find 10 new bugs in the semantic analysis
stages of these benchmarks. Zest is the most effective technique in finding
these bugs reliably and quickly, requiring at most 10 minutes on average to
find each bug.
Comment: To appear in Proceedings of the 28th ACM SIGSOFT International
Symposium on Software Testing and Analysis (ISSTA'19)
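
To make the parametric-generator idea concrete, here is a minimal Java
sketch (our own illustration, not Zest's actual API): every random choice
the generator makes is drawn from an untyped parameter sequence, so the
generator is deterministic, and mutating a single parameter produces a
structural mutation of the generated input while preserving syntactic
validity.

    class ZestParamSketch {
        // Pseudo-random source backed by an untyped parameter sequence.
        static class ParamSource {
            private final int[] params;
            private int cursor = 0;
            ParamSource(int[] params) { this.params = params; }
            int next(int bound) {
                // Consume the next parameter; wrap around when exhausted.
                int p = params[cursor++ % params.length];
                return Math.floorMod(p, bound);
            }
        }

        // Toy generator for arithmetic expressions: every branching decision
        // is driven by the parameter sequence instead of a java.util.Random,
        // so the same parameters always yield the same (valid) input.
        static String genExpr(ParamSource src, int depth) {
            if (depth == 0 || src.next(3) == 0) {
                return Integer.toString(src.next(10));      // leaf: a literal
            }
            String op = (src.next(2) == 0) ? "+" : "*";     // internal node
            return "(" + genExpr(src, depth - 1) + op
                       + genExpr(src, depth - 1) + ")";
        }

        public static void main(String[] args) {
            int[] params = {7, 1, 4, 2, 9, 3, 8, 5};
            System.out.println(genExpr(new ParamSource(params), 3));
            // Mutating one untyped parameter is a *structural* mutation of
            // the input: setting params[0] to 0 makes the first next(3) call
            // return 0, collapsing the whole tree to a single leaf.
            params[0] = 0;
            System.out.println(genExpr(new ParamSource(params), 3));
        }
    }

Coverage and validity feedback then guide the search over these parameter
sequences, as the abstract describes.
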
The Progress, Challenges, and Perspectives of Directed Greybox Fuzzing
Most greybox fuzzing tools are coverage-guided as code coverage is strongly
correlated with bug coverage. However, since most covered code contains no
bugs, blindly extending code coverage is inefficient, especially for corner
cases. Unlike coverage-guided greybox fuzzers, which extend code coverage in an
undirected manner, a directed greybox fuzzer allocates most of its time to
reaching specific targets (e.g., bug-prone zones) without wasting
resources stressing unrelated parts. Thus, directed greybox fuzzing (DGF) is
particularly suitable for scenarios such as patch testing, bug reproduction,
and specialist bug hunting. This paper studies DGF from a broader view, which
takes into account not only the location-directed type that targets specific
code parts, but also the behaviour-directed type that aims to expose abnormal
program behaviours. Herein, the first in-depth study of DGF is presented, based
on an investigation of 32 state-of-the-art fuzzers (78% published after 2019)
that are closely related to DGF. A thorough assessment of the collected
tools is conducted so as to systematise recent progress in this field. Finally,
the paper summarises the challenges and provides perspectives for future
research.
Comment: 16 pages, 4 figures
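
As an illustration of the directed idea, the following Java sketch shows a
distance-guided power schedule in the spirit of tools such as AFLGo; the
names and the averaging metric are our own simplifications, not any
particular fuzzer's implementation. Seeds whose executions pass close to the
target receive most of the mutation energy, while distant seeds keep a
trickle so exploration never stalls.

    import java.util.*;

    class DirectedScheduleSketch {
        // Hypothetical: static distance of each basic block to the target
        // site, precomputed from the call graph and control-flow graphs.
        static Map<Integer, Double> blockDistance = new HashMap<>();

        // Seed distance = mean distance of the blocks its execution covered.
        static double seedDistance(Set<Integer> coveredBlocks) {
            return coveredBlocks.stream()
                    .mapToDouble(b -> blockDistance.getOrDefault(b, 1_000.0))
                    .average().orElse(1_000.0);
        }

        // Mutation energy is inverse to the normalised distance: the closest
        // seed gets maxEnergy, the farthest keeps a minimum of 1 so it is
        // never starved entirely.
        static int energy(double dist, double dMin, double dMax, int maxEnergy) {
            if (dMax == dMin) return maxEnergy;
            double norm = (dist - dMin) / (dMax - dMin);   // 0 = closest seed
            return (int) Math.max(1, Math.round(maxEnergy * (1.0 - norm)));
        }

        public static void main(String[] args) {
            blockDistance.put(1, 10.0);                  // far from the target
            blockDistance.put(2, 2.0);                   // near the target
            double dA = seedDistance(Set.of(1, 2));      // covers both blocks
            double dB = seedDistance(Set.of(1));         // covers only the far one
            System.out.println(energy(dA, dA, dB, 100)); // -> 100 (prioritised)
            System.out.println(energy(dB, dA, dB, 100)); // -> 1 (deprioritised)
        }
    }
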
June: A Type Testability Transformation for Improved ATG Performance
Strings are universal containers: they are flexible to use, abundant in code,
and difficult to test. String-controlled programs are programs that make
branching decisions based on string input. Automatically generating valid test
inputs for these programs, considering only character sequences rather than any
underlying string-encoded structure, can be prohibitively expensive. We present
June, a tool that enables Java developers to expose any latent string structure
to test generation tools. June is an annotation-driven testability
transformation and an extensible library, JuneLib, of structured string
definitions. The core JuneLib definitions are empirically derived and provide
templates for all structured strings in our test set. June takes lightly
annotated source code and injects code that permits an automated test generator
(ATG) to focus on the creation of mutable substrings inside a structured
string. Using June costs the developer little: an average of 2.1 annotations
per string-controlled class. June uses standard Java build tools and therefore
deploys seamlessly within a Java project. By feeding string structure
information to an ATG tool, June dramatically reduces wasted effort; it
effortlessly covers branches that would otherwise be extremely difficult, or
impossible, to cover. This reduction in waste both increases coverage and
speeds its attainment. EvoSuite, for example, achieves the same coverage on
June-ed classes in 1 minute, on average, as it does in 9 minutes on the
un-June-ed classes. These gains increase over time. On our corpus, June-ing a
program compresses 24 hours of execution time into ca. 2 hours. We show that
many ATG tools can reuse the same June-ed code: a few June annotations, a
one-off cost, benefit many different testing regimes.
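
Since June's concrete annotation names are not given in this abstract, the
following Java sketch is purely hypothetical: it illustrates the
annotation-driven idea with an invented @StructuredString annotation and a
hand-written adapter of the kind a June-like transformation might inject, so
that an ATG searches over a few small integers instead of guessing a
well-formed date string.

    import java.lang.annotation.*;

    // Hypothetical annotation (June's real annotations may differ): marks a
    // parameter as a structured string drawn from a library of definitions.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.PARAMETER)
    @interface StructuredString { String value(); }    // e.g. "isoDate"

    class OrderParser {
        // String-controlled code: branches on string input, so an ATG that
        // generates arbitrary character sequences rarely gets past parsing.
        boolean isWeekendOrder(@StructuredString("isoDate") String date) {
            String[] parts = date.split("-");          // expects "YYYY-MM-DD"
            if (parts.length != 3) throw new IllegalArgumentException(date);
            int day = Integer.parseInt(parts[2]);
            return day % 7 < 2;                        // toy branching logic
        }
    }

    // The kind of adapter a June-like transformation might inject: the ATG
    // supplies three small integers, and the structured string is rebuilt
    // from them, so every generated input is already well-formed.
    class OrderParserTestAdapter {
        static boolean isWeekendOrder(int year, int month, int day) {
            String date = String.format("%04d-%02d-%02d", year, month, day);
            return new OrderParser().isWeekendOrder(date);
        }
    }

Either way, the ATG's search space shrinks from all character sequences to
just the mutable parts of the structured string.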