Differentially Testing Soundness and Precision of Program Analyzers
In recent decades, numerous program analyzers have been developed by both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues to both designers and users of program analyzers.
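As a rough illustration of the differential-testing idea described above, the sketch below compares analyzer verdicts on generated benchmarks whose ground truth is known by construction. The analyzer commands and their output format are hypothetical placeholders, not the paper's actual setup.

```python
# Minimal sketch of differentially testing program analyzers; the analyzer
# binaries ("analyzer-a", "analyzer-b") and output strings are hypothetical.
import subprocess

def verdict(analyzer_cmd, program_path):
    """Run an analyzer and normalize its verdict to 'safe' or 'unsafe'."""
    out = subprocess.run(analyzer_cmd + [program_path],
                         capture_output=True, text=True).stdout
    return "safe" if "no assertion violation" in out else "unsafe"

def differential_check(program_path, ground_truth, analyzers):
    """Compare each analyzer's verdict against the known ground truth.

    For automatically generated benchmarks the ground truth (whether the
    embedded assertion can actually fail) is known by construction:
      - 'safe' verdict on an unsafe program -> soundness issue
      - 'unsafe' verdict on a safe program  -> precision issue
    """
    issues = []
    for name, cmd in analyzers.items():
        v = verdict(cmd, program_path)
        if v == "safe" and ground_truth == "unsafe":
            issues.append((name, "soundness"))
        elif v == "unsafe" and ground_truth == "safe":
            issues.append((name, "precision"))
    return issues

# Usage (hypothetical analyzer binaries):
# issues = differential_check("bench_0001.c", "safe",
#                             {"A": ["analyzer-a"], "B": ["analyzer-b"]})
```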
Performance regression testing of concurrent classes
Developers of thread-safe classes struggle with two opposing goals. The class must be correct, which requires synchronizing concurrent accesses, and the class should provide reasonable performance, which is difficult to realize in the presence of unnecessary synchronization. Validating the performance of a thread-safe class is challenging because it requires diverse workloads that use the class, because existing performance analysis techniques focus on individual bottleneck methods, and because reliably measuring the performance of concurrent executions is difficult. This paper presents SpeedGun, an automatic performance regression testing technique for thread-safe classes. The key idea is to generate multi-threaded performance tests and to compare two versions of a class with each other. The analysis notifies developers when changing a thread-safe class significantly influences the performance of clients of this class. An evaluation with 113 pairs of classes from popular Java projects shows that the analysis effectively identifies 13 performance differences, including performance regressions that the respective developers were not aware of.
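The sketch below illustrates the comparison idea behind a tool like SpeedGun: run the same generated multi-threaded workload against two versions of a class and flag reliable slowdowns. SpeedGun targets Java thread-safe classes; this Python toy version and the two Counter variants are illustrative assumptions only.

```python
# Toy version-to-version performance comparison for a thread-safe class.
import threading
import time
import statistics

class CoarseCounter:                      # "old" version: one lock per increment
    def __init__(self):
        self.lock = threading.Lock()
        self.value = 0
    def increment(self):
        with self.lock:
            self.value += 1

class PaddedCounter(CoarseCounter):       # "new" version: extra work under the lock
    def increment(self):
        with self.lock:
            for _ in range(20):           # stands in for added synchronization cost
                pass
            self.value += 1

def run_workload(cls, threads=4, ops=20_000):
    """Time a generated multi-threaded workload against one class version."""
    obj = cls()
    def worker():
        for _ in range(ops):
            obj.increment()
    ts = [threading.Thread(target=worker) for _ in range(threads)]
    start = time.perf_counter()
    for t in ts: t.start()
    for t in ts: t.join()
    return time.perf_counter() - start

def regression_detected(old_cls, new_cls, repetitions=5, threshold=1.2):
    """Flag a regression if the new version is reliably slower than the old,
    using medians over repeated runs to dampen measurement noise."""
    old = statistics.median(run_workload(old_cls) for _ in range(repetitions))
    new = statistics.median(run_workload(new_cls) for _ in range(repetitions))
    return new / old > threshold

print(regression_detected(CoarseCounter, PaddedCounter))
```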
IntRepair: Informed Repairing of Integer Overflows
Integer overflows have threatened software applications for decades. Thus, in this paper, we propose a novel technique to provide automatic repairs of integer overflows in C source code. Our technique, based on static symbolic execution, fuses detection, repair generation, and validation. This technique is implemented in a prototype named IntRepair. We applied IntRepair to 2,052 C programs (approx. 1 million lines of code) contained in SAMATE's Juliet test suite and 50 synthesized programs that range up to 20 KLOC. Our experimental results show that IntRepair is able to effectively detect integer overflows and successfully repair them, while increasing the source code size (LOC) and binary size (KB) by only around 1% each. Further, we present the results of a user study with 30 participants which shows that IntRepair repairs are more than 10x more efficient than manually generated code repairs.

Comment: Accepted for publication at the IEEE TSE journal. arXiv admin note: text overlap with arXiv:1710.0372
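To make the symbolic-execution step concrete, the sketch below detects whether a 32-bit signed addition can overflow by asking an SMT solver for a witness, which is the kind of check a repair tool like IntRepair builds on. It assumes the z3-solver Python bindings; IntRepair itself operates on C source code and is not shown here.

```python
# Symbolic detection of a 32-bit signed addition overflow with Z3.
from z3 import BitVec, SignExt, Solver, sat

x, y = BitVec("x", 32), BitVec("y", 32)

# Overflow occurs when the 32-bit sum disagrees with the mathematically
# exact sum computed in 64 bits.
s = Solver()
s.add(SignExt(32, x) + SignExt(32, y) != SignExt(32, x + y))

if s.check() == sat:
    m = s.model()
    # A concrete witness, e.g. x = INT_MAX, y = 1. A generated repair would
    # guard the addition with a pre-condition check before it executes.
    print("overflow possible for x =", m[x], "y =", m[y])
```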
Contract-driven data structure repair: a novel approach for error recovery
Software systems are now pervasive throughout our world. The reliability of these systems is an urgent necessity. A large degree of research effort on increasing software reliability is dedicated to requirements, architecture, design, implementation, and testing, activities that are performed before system deployment. While such approaches have become substantially more advanced, software remains buggy and failures remain expensive. We take a radically different approach to reliability from previous approaches, namely contract-driven data structure repair for runtime error recovery, where erroneous executions of deployed software are corrected on-the-fly using rich behavioral contracts. Our key insight is to transform the software contract, which gives a high-level description of the expected behavior, into an efficient implementation which repairs the erroneous data structures in the program state upon an error. To improve the efficiency, scalability, and effectiveness of repair, in addition to rich behavioral contracts, we leverage the current erroneous state, the dynamic behavior of the program, as well as repair history and abstraction. A core technical problem our approach to repair addresses is the construction of structurally complex data that satisfy desired properties. We present a novel structure generation technique based on dynamic programming, a classic optimization approach, to utilize the recursive nature of the structures. We use our technique for constraint-based testing. It provides better scalability than previous work. We applied it to test widely-used web browsers and found some known and unknown bugs. Our use of dynamic programming in structure generation opens a new future direction to tackle the scalability problem of data structure repair. This research advances our ability to develop correct programs. For programs that already have contracts, error recovery using our approach can come at a low cost. The same contracts can be used for systematically testing code before deployment using existing as well as our new techniques. Thus, we enable a novel unification of software verification and error recovery.
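The sketch below illustrates the contract-driven repair idea in miniature: a repOk() contract describes the expected invariant, and on violation a repair routine mutates the current (erroneous) state until the contract holds, instead of aborting. The sorted singly linked list and the repair strategy are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Toy contract-driven repair for a sorted singly linked list.
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

class SortedList:
    def __init__(self):
        self.head = None

    def rep_ok(self):
        """Contract: the list is acyclic and sorted in non-decreasing order."""
        seen, n = set(), self.head
        while n is not None:
            if id(n) in seen:                              # cycle detected
                return False
            if n.next is not None and n.next.value < n.value:
                return False                               # out of order
            seen.add(id(n))
            n = n.next
        return True

    def repair(self):
        """On contract violation, rebuild from the reachable (possibly
        corrupted) state: collect values, break cycles, and re-link them in
        sorted order, reusing the erroneous state rather than failing."""
        values, seen, n = [], set(), self.head
        while n is not None and id(n) not in seen:
            seen.add(id(n))
            values.append(n.value)
            n = n.next
        self.head = None
        for v in sorted(values, reverse=True):
            self.head = Node(v, self.head)
        assert self.rep_ok()

# Usage: corrupt the list, then recover at runtime instead of crashing.
lst = SortedList()
lst.head = Node(3, Node(1, Node(2)))      # violates sortedness
if not lst.rep_ok():
    lst.repair()
```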
ACETest: Automated Constraint Extraction for Testing Deep Learning Operators
Deep learning (DL) applications are prevalent nowadays as they can help with multiple tasks. DL libraries are essential for building DL applications. Furthermore, DL operators are important building blocks of DL libraries, computing multi-dimensional data (tensors). Therefore, bugs in DL operators can have great impact. Testing is a practical approach for detecting bugs in DL operators. In order to test DL operators effectively, it is essential that the test cases pass the input validity checks and are able to reach the core function logic of the operators. Hence, extracting the input validation constraints is required for generating high-quality test cases. Existing techniques rely on either human effort or documentation of DL library APIs to extract the constraints. They cannot extract complex constraints, and the extracted constraints may differ from the actual code implementation.
To address this challenge, we propose ACETest, a technique to automatically extract input validation constraints from the code in order to build valid yet diverse test cases which can effectively unveil bugs in the core function logic of DL operators. For this purpose, ACETest can automatically identify the input validation code in DL operators, extract the related constraints, and generate test cases according to the constraints. The experimental results on popular DL libraries, TensorFlow and PyTorch, demonstrate that ACETest can extract constraints with higher quality than state-of-the-art (SOTA) techniques. Moreover, ACETest is capable of extracting 96.4% more constraints and detecting 1.95 to 55 times more bugs than SOTA techniques. In total, we have used ACETest to detect 108 previously unknown bugs in TensorFlow and PyTorch, with 87 of them confirmed by the developers. Lastly, five of the bugs were assigned CVE IDs due to their security impact.

Comment: Accepted by ISSTA 202
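The sketch below shows constraint-guided test generation for one DL operator, in the spirit of ACETest. The conv2d validity constraints are written by hand here for illustration; ACETest's contribution is extracting such constraints automatically from the operator's input-validation code. Assumes PyTorch is installed.

```python
# Generate inputs that satisfy conv2d's validity constraints so each test
# passes input validation and exercises the operator's core logic.
import random
import torch
import torch.nn.functional as F

def sample_conv2d_inputs():
    """Sample shapes satisfying conv2d's constraints:
      - input is (N, C, H, W), weight is (OutC, C, kH, kW)
      - channel counts must match, and the kernel must fit in the input."""
    n, c, out_c = random.randint(1, 4), random.randint(1, 8), random.randint(1, 8)
    h, w = random.randint(1, 16), random.randint(1, 16)
    kh, kw = random.randint(1, h), random.randint(1, w)   # kernel <= input
    return torch.randn(n, c, h, w), torch.randn(out_c, c, kh, kw)

for _ in range(100):
    inp, weight = sample_conv2d_inputs()
    try:
        out = F.conv2d(inp, weight)
        # Sanity-check the output shape against the documented formula
        # (stride 1, no padding): out = in - kernel + 1.
        assert out.shape[2] == inp.shape[2] - weight.shape[2] + 1
        assert out.shape[3] == inp.shape[3] - weight.shape[3] + 1
    except RuntimeError as e:
        # A failure on inputs that satisfy the validity constraints points
        # at a bug in the operator's core function logic.
        print("potential bug:", e)
```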