
    Infrared: A Meta Bug Detector

    Full text link
    The recent breakthroughs in deep learning methods have sparked a wave of interest in learning-based bug detectors. Compared to traditional static analysis tools, these bug detectors are learned directly from data and are thus easier to create. On the other hand, they are difficult to train, requiring a large amount of data that is not readily available. In this paper, we propose a new approach, called meta bug detection, which offers three crucial advantages over existing learning-based bug detectors: it is bug-type generic (i.e., capable of catching types of bugs that are entirely unobserved during training), self-explainable (i.e., capable of explaining its own predictions without any external interpretability method), and sample efficient (i.e., requiring substantially less training data than standard bug detectors). Our extensive evaluation shows that our meta bug detector (MBD) is effective in catching a variety of bugs, including null pointer dereferences, array index out-of-bounds errors, file handle leaks, and even data races in concurrent programs; in the process, MBD also significantly outperforms several noteworthy baselines, including Facebook Infer, a prominent static analysis tool, and FICS, the latest anomaly detection method.

    Evaluating static analysis defect warnings on production software

    Full text link

    Demand-Driven Alias Analysis for C

    Full text link
    This paper presents a demand-driven, flow-insensitive analysis algorithm for answering may-alias queries. We formulate the computation of alias queries as a CFL-reachability problem and use this formulation to derive a demand-driven analysis algorithm. The analysis uses a worklist algorithm that gradually explores the program structure and stops as soon as enough evidence is gathered to answer the query. Unlike existing techniques, our approach does not require building or intersecting points-to sets. Experiments show that our technique is effective at answering alias queries accurately and efficiently in a demand-driven fashion. For a set of alias queries from the SPEC2000 benchmarks, our analysis accurately answers 97% of the queries in less than 1 millisecond per query. Compared to a demand-driven points-to analysis that constructs and intersects points-to sets on the fly, our alias analysis is more than twice as fast.
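    As a rough illustration of the demand-driven idea (a sketch only, not the paper's CFL-reachability formulation, which additionally matches pointer dereferences), the code below answers a may-alias query with a worklist that lazily explores copy and address-of edges and stops as soon as the two query variables are seen to share a pointed-to location. The edge encoding and variable names are illustrative assumptions.

```python
from collections import deque

def may_alias(p, q, assign, addr_of):
    """Demand-driven may-alias query (simplified sketch).

    assign[x]  -> set of variables copied into x (x = y)
    addr_of[x] -> set of variables whose address flows into x (x = &v)

    Lazily explores the value-flow graph from both query variables
    and stops as soon as a shared pointed-to location is found.
    """
    pts = {p: set(), q: set()}          # points-to facts found so far
    seen = {p: {p}, q: {q}}             # visited variables per query root
    work = deque([(p, p), (q, q)])      # (root, current variable)

    while work:
        root, v = work.popleft()
        for loc in addr_of.get(v, ()):
            pts[root].add(loc)
            if pts[p] & pts[q]:         # early exit: enough evidence gathered
                return True
        for src in assign.get(v, ()):
            if src not in seen[root]:
                seen[root].add(src)
                work.append((root, src))
    return bool(pts[p] & pts[q])

# Example program: p = &x; r = p; q = &y
assign = {"r": {"p"}}
addr_of = {"p": {"x"}, "q": {"y"}}
print(may_alias("r", "p", assign, addr_of))  # True  (both may point to x)
print(may_alias("r", "q", assign, addr_of))  # False
```

    The early exit mirrors the paper's key property: the traversal stops once the answer is determined, instead of first building complete points-to sets and intersecting them.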

    Improving Quality of Software with Foreign Function Interfaces using Static Analysis

    Get PDF
    A Foreign Function Interface (FFI) is a mechanism that allows software written in one host programming language to directly use another, foreign programming language by invoking function calls across language boundaries. Today's software development often utilizes FFIs to reuse software components. Examples of such systems are the Java Development Kit (JDK), the Android mobile OS, and Python packages in the Fedora Linux operating system. The use of FFIs, however, requires extreme care and can introduce undesired side effects that degrade software quality. In this thesis, we aim to improve several quality aspects of software composed with FFIs by applying static analysis. The thesis investigates several particular characteristics of FFIs and studies software bugs caused by the misuse of FFIs. We choose two FFIs, the Java Native Interface (JNI) and the Python/C interface, as the main subjects of this dissertation. To reduce software security vulnerabilities introduced by the JNI, we first propose definitions of new patterns of bugs caused by improper exception handling between Java and C. We then present the design and implementation of a bug-finding system to uncover these bugs. To ensure software safety and reliability in multithreaded environments, we present a novel and efficient system that ensures atomicity in the JNI. Finally, to improve software performance and reliability, we design and develop a framework for finding errors in memory management in programs written with the Python/C interface. The framework is built by applying affine abstraction and affine analysis of the reference counts of Python objects. This dissertation offers a comprehensive study of FFIs and software composed with FFIs. The research findings make several contributions to the study of static analysis and to the improvement of software quality.
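    To make the reference-counting contribution concrete, here is a minimal, hypothetical sketch of the kind of bookkeeping such an analysis performs: each CPython API call is modeled by the (affine) change it makes to an object's reference count, and a path whose net effect disagrees with the usual convention (+1 for a returned new reference, 0 otherwise) is flagged. The effect table is simplified for illustration and is not the dissertation's actual model.

```python
# Simplified per-call reference-count effects on one object.
# Real CPython APIs vary; these entries are common conventions.
REF_EFFECT = {
    "Py_INCREF": +1,
    "Py_DECREF": -1,
    "PyList_Append": 0,    # borrows its argument
    "PyTuple_SetItem": -1, # steals a reference
}

def check_path(calls, returned=False):
    """calls: list of API names applied to one object along one path."""
    net = sum(REF_EFFECT.get(c, 0) for c in calls)
    expected = 1 if returned else 0
    if net > expected:
        return "potential memory leak (refcount too high)"
    if net < expected:
        return "potential premature free (refcount too low)"
    return "ok"

print(check_path(["Py_INCREF", "PyTuple_SetItem"]))         # ok
print(check_path(["Py_INCREF", "Py_INCREF", "Py_DECREF"]))  # leak
```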

    Structural Analysis: Shape Information via Points-To Computation

    Full text link
    This paper introduces a new hybrid memory analysis, Structural Analysis, which combines an expressive shape-analysis-style abstract domain with efficient and simple points-to-style transfer functions. Using data from empirical studies on runtime heap structures and the programmatic idioms used in modern object-oriented languages, we construct a heap analysis with the following characteristics: (1) it can express a rich set of structural, shape, and sharing properties which are not provided by a classic points-to analysis and which are useful for optimization and error detection applications; (2) it uses efficient, weakly-updating, set-based transfer functions which enable the analysis to be more robust and scalable than a shape analysis; and (3) it can be used as the basis for a scalable interprocedural analysis that produces precise results in practice. The analysis has been implemented for .NET bytecode, and using this implementation we evaluate both the runtime cost and the precision of the results on a number of well-known benchmarks and real-world programs. Our experimental evaluations show that the domain defined in this paper is capable of precisely expressing the majority of the connectivity, shape, and sharing properties that occur in practice and that, despite the use of weak updates, the static analysis is able to precisely approximate the ideal results. The analysis is capable of analyzing large real-world programs (over 30K bytecodes) in less than 65 seconds and using less than 130MB of memory. In summary, this work presents a new type of memory analysis that advances the state of the art with respect to expressive power, precision, and scalability, and it represents a new area of study on the relationships between and combination of concepts from shape and points-to analyses.
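    A minimal sketch (an assumed simplification, not the paper's implementation) of the weakly-updating, set-based transfer functions described above: field stores merge into the abstract heap rather than overwriting it, which keeps the analysis sound without the bookkeeping a strong update would require.

```python
from collections import defaultdict

pts = defaultdict(set)    # variable -> abstract heap nodes
heap = defaultdict(set)   # (node, field) -> abstract heap nodes

def new_object(x, site):
    """x = new C()  -- one abstract node per allocation site."""
    pts[x] = {site}

def store(x, f, y):
    """x.f = y  -- weak update: accumulate, never overwrite."""
    for n in pts[x]:
        heap[(n, f)] |= pts[y]

def load(x, y, f):
    """x = y.f  -- union over everything y may point to."""
    pts[x] = set().union(*(heap[(n, f)] for n in pts[y])) if pts[y] else set()

new_object("a", "A@1")
new_object("b", "B@2")
store("a", "next", "b")
load("c", "a", "next")
print(pts["c"])   # {'B@2'}
```

    The trade-off the paper measures is visible here: a weak update can only grow the points-to sets, so precision may be lost, but the transfer function is a cheap set union rather than a case analysis over heap configurations.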

    Diagnosys: Automatic Generation of a Debugging Interface to the Linux Kernel

    Get PDF
    Best Paper award. The Linux kernel does not export a stable, well-defined kernel interface, complicating the development of kernel-level services, such as device drivers and file systems. While there does exist a set of functions that are exported to external modules, this set of functions frequently changes, and the functions have implicit, ill-documented preconditions. No specific debugging support is provided. We present Diagnosys, an approach to automatically constructing a debugging interface for the Linux kernel. First, a designated kernel maintainer uses Diagnosys to identify constraints on the use of the exported functions. Based on this information, developers of kernel services can then use Diagnosys to generate a debugging interface specialized to their code. When a service including this interface is tested, it records information about potential problems. This information is preserved following a kernel crash or hang. Our experiments show that the generated debugging interface provides useful log information and incurs a low performance penalty.
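    The following sketch suggests what the wrapper-generation step could look like: given an exported function and an inferred precondition, emit a logging wrapper that a kernel service calls in place of the original. The logging primitive diagnosys_log and the generator itself are hypothetical names for illustration; only kmalloc is a real kernel function.

```python
def make_wrapper(ret, name, params, precondition, message):
    """Emit C source for a precondition-checking debug wrapper."""
    sig = ", ".join(params)
    # Recover bare argument names from declarations like "size_t size".
    args = ", ".join(p.split()[-1].lstrip("*") for p in params)
    return (
        f"{ret} debug_{name}({sig})\n"
        "{\n"
        f"    if (!({precondition}))\n"
        f'        diagnosys_log("{name}: {message}");  /* survives a crash */\n'
        f"    return {name}({args});\n"
        "}\n"
    )

print(make_wrapper(
    ret="void *",
    name="kmalloc",
    params=["size_t size", "gfp_t flags"],
    precondition="size > 0",
    message="called with size == 0",
))
```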

    Automatic Static Bug Detection for Machine Learning Libraries: Are We There Yet?

    Full text link
    Automatic detection of software bugs is a critical task in software security. Many static tools that can help detect bugs have been proposed. However, these static bug detectors are mainly evaluated on general software projects, which calls into question their practical effectiveness and usefulness for machine learning libraries. In this paper, we address this question by analyzing five popular and widely used static bug detectors, i.e., Flawfinder, RATS, Cppcheck, Facebook Infer, and the Clang static analyzer, on a curated dataset of software bugs gathered from four popular machine learning libraries, namely Mlpack, MXNet, PyTorch, and TensorFlow, with a total of 410 known bugs. Our research provides a categorization of these tools' capabilities to better understand their strengths and weaknesses for detecting software bugs in machine learning libraries. Overall, our study shows that the static bug detectors find only a negligible fraction of all bugs, 6 out of 410 (about 1.5%), with Flawfinder and RATS being the most effective static checkers for finding software bugs in machine learning libraries. Based on our observations, we further identify and discuss opportunities to make the tools more effective and practical.
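    A small sketch of the tallying behind such numbers (the bug locations and warning format below are illustrative assumptions, not the study's actual data): match each tool's warnings against the curated bug set and report how many known bugs it detects.

```python
# Known bugs and warnings are keyed by (library, file, line);
# these particular entries are made up for illustration.
known_bugs = {
    ("mxnet", "src/io.cc", 142),
    ("pytorch", "aten/conv.cpp", 88),
}

def detection_rate(warnings):
    """warnings: set of (library, file, line) locations a tool flagged."""
    hits = known_bugs & warnings
    return len(hits), len(hits) / len(known_bugs)

flawfinder_warnings = {("mxnet", "src/io.cc", 142), ("mxnet", "src/other.cc", 7)}
print(detection_rate(flawfinder_warnings))  # (1, 0.5)
```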

    STANSE: Bug-finding Framework for C Programs

    Get PDF
    A regular paper accepted at the MEMICS 2011 workshop. The paper deals with static analysis and describes a bug-finding framework and tool called Stanse.

    Static Analysis in Practice

    Get PDF
    Static analysis tools search software looking for defects that may cause an application to deviate from its intended behavior. These include defects that compute incorrect values, cause runtime exceptions or crashes, expose applications to security vulnerabilities, or lead to performance degradation. In an ideal world, the analysis would precisely identify all possible defects. In reality, it is not always possible to infer the intent of a software component or code fragment, and static analysis tools sometimes output spurious warnings or miss important bugs. As a result, tool makers and researchers focus on developing heuristics and techniques to improve speed and accuracy. But, in practice, speed and accuracy are not sufficient to maximize the value received by software makers using static analysis. Software engineering teams need to make static analysis an effective part of their regular process. In this dissertation, I examine the ways static analysis is used in practice by commercial and open source users. I observe that effectiveness is hampered not only by false warnings but also by true defects that do not affect software behavior in practice. Indeed, mature production systems are often littered with true defects that do not prevent them from functioning, mostly correctly. To understand why this occurs, note that developers inadvertently create both important and unimportant defects when they write software, but most quality assurance activities are directed at finding the important ones. By the time the system is mature, there may still be a few consequential defects that can be found by static analysis, but they are drowned out by the many true but low-impact defects that were never fixed. An exception to this rule is certain classes of subtle security, performance, or concurrency defects that are hard to detect without static analysis. Software teams can use static analysis to find defects very early in the process, when they are cheapest to fix, and in so doing increase the effectiveness of later quality assurance activities. But this effort comes with costs that must be managed to ensure static analysis is worthwhile. The cost effectiveness of static analysis also depends on the nature of the defect being sought, the nature of the application, the infrastructure supporting the tools, and the policies governing their use. Through this research, I interact with real users through surveys, interviews, lab studies, and community-wide reviews to discover their perspectives and experiences, and to understand the costs and challenges incurred when adopting static analysis tools. I also analyze the defects found in real systems and make observations about which ones are fixed, why some seemingly serious defects persist, and what considerations static analysis tools and software teams should make to increase effectiveness. Ultimately, my interaction with real users confirms that static analysis is well received and useful in practice, but the right environment is needed to maximize its return on investment.