
    Targeted Greybox Fuzzing with Static Lookahead Analysis

    Automatic test generation typically aims to generate inputs that explore new paths in the program under test in order to find bugs. Existing work has, therefore, focused on guiding the exploration toward program parts that are more likely to contain bugs by using an offline static analysis. In this paper, we introduce a novel technique for targeted greybox fuzzing using an online static analysis that guides the fuzzer toward a set of target locations, for instance, located in recently modified parts of the program. This is achieved by first semantically analyzing each program path that is explored by an input in the fuzzer's test suite. The results of this analysis are then used to control the fuzzer's specialized power schedule, which determines how often to fuzz inputs from the test suite. We implemented our technique by extending a state-of-the-art, industrial fuzzer for Ethereum smart contracts and evaluated its effectiveness on 27 real-world benchmarks. Using an online analysis is particularly suitable for the domain of smart contracts since it does not require any code instrumentation, which would change the contracts' semantics. Our experiments show that targeted fuzzing significantly outperforms standard greybox fuzzing in reaching 83% of the challenging target locations, with median speed-ups of up to 14x.
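    As a rough illustration of the power-schedule idea described above, the sketch below gives more fuzzing energy to seeds whose explored paths come closer to a target location. The Seed class, the distance field, and assign_energy are hypothetical names, and the linear scaling is only one plausible choice; the paper's actual schedule and analysis are not reproduced here.

```python
# Minimal sketch of a distance-guided power schedule (illustrative only).
from dataclasses import dataclass

@dataclass
class Seed:
    data: bytes
    # Smallest estimated distance from the path exercised by this seed
    # to any target location, as reported by the online path analysis.
    distance: float = float("inf")

def assign_energy(seed: Seed, suite: list[Seed],
                  min_energy: int = 1, max_energy: int = 64) -> int:
    """Give more fuzzing iterations to seeds whose paths come closer to the
    targets; seeds whose paths cannot reach any target get minimal energy."""
    finite = [s.distance for s in suite if s.distance != float("inf")]
    if seed.distance == float("inf") or not finite:
        return min_energy
    lo, hi = min(finite), max(finite)
    if hi == lo:
        return max_energy
    # Linear scaling: closest seed -> max_energy, farthest -> min_energy.
    score = (hi - seed.distance) / (hi - lo)
    return int(min_energy + score * (max_energy - min_energy))

suite = [Seed(b"A", 3.0), Seed(b"B", 12.0), Seed(b"C")]
print([assign_energy(s, suite) for s in suite])   # e.g. [64, 1, 1]
```

    A fuzzer loop would call assign_energy each time it picks a seed from the test suite and mutate that seed the returned number of times.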

    QChecker: Detecting Bugs in Quantum Programs via Static Analysis

    Static analysis is the process of analyzing software code without executing the software. It can help find bugs and potential problems that would otherwise only appear at runtime. Although many static analysis tools have been developed for classical software, due to the nature of quantum programs, these existing tools are unsuitable for analyzing quantum programs. This paper presents QChecker, a static analysis tool that supports finding bugs in quantum programs written in Qiskit. QChecker consists of two main modules: a module for extracting program information based on the abstract syntax tree (AST), and a module for detecting bugs based on patterns. We evaluate the performance of QChecker using the Bugs4Q benchmark. The evaluation results show that QChecker can effectively detect various bugs in quantum programs. Comment: this paper will appear in the proceedings of the 4th International Workshop on Quantum Software Engineering (Q-SE 2023) on May 14, 2023.
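    The sketch below illustrates the kind of AST-based pattern check the abstract describes, using Python's ast module; the rule shown (calling measure() on a circuit declared without classical bits) is a hypothetical example and is not taken from QChecker's actual detector set.

```python
# Illustrative AST-based bug-pattern check for a Qiskit program (not QChecker).
import ast

CODE = """
from qiskit import QuantumCircuit
qc = QuantumCircuit(2)      # qubits only, no classical bits
qc.h(0)
qc.measure(0, 0)            # bug: no classical register to hold the result
"""

class MeasureWithoutClbits(ast.NodeVisitor):
    def __init__(self):
        self.no_clbit_circuits = set()
        self.warnings = []

    def visit_Assign(self, node):
        call = node.value
        # Record circuits constructed with fewer than two size arguments.
        if (isinstance(call, ast.Call) and isinstance(call.func, ast.Name)
                and call.func.id == "QuantumCircuit" and len(call.args) < 2):
            for tgt in node.targets:
                if isinstance(tgt, ast.Name):
                    self.no_clbit_circuits.add(tgt.id)
        self.generic_visit(node)

    def visit_Call(self, node):
        f = node.func
        # Flag measure() calls on circuits recorded above.
        if (isinstance(f, ast.Attribute) and f.attr == "measure"
                and isinstance(f.value, ast.Name)
                and f.value.id in self.no_clbit_circuits):
            self.warnings.append(f"line {node.lineno}: measure() on a "
                                 f"circuit declared without classical bits")
        self.generic_visit(node)

checker = MeasureWithoutClbits()
checker.visit(ast.parse(CODE))
print("\n".join(checker.warnings))
```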

    Diagnose network failures via data-plane analysis

    Diagnosing problems in networks is a time-consuming and error-prone process. Previous tools to assist operators primarily focus on analyzing control plane configuration. Configuration analysis is limited in that it cannot find bugs in router software, and is harder to generalize across protocols since it must model complex configuration languages and dynamic protocol behavior. This paper studies an alternative approach: diagnosing problems through static analysis of the data plane. This approach can catch bugs that are invisible at the level of configuration files, and simplifies unified analysis of a network across many protocols and implementations. We present Anteater, a tool for checking invariants in the data plane. Anteater translates high-level network invariants into boolean satisfiability problems, checks them against network state using a SAT solver, and reports counterexamples when violations are found. Applied to a large campus network, Anteater revealed 23 bugs, including forwarding loops and stale ACL rules, with only five false positives. Nine of these faults are being fixed by campus network operators.
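    A toy version of the encoding idea is sketched below, using boolean variables for destination-address bits and the z3 solver in place of a plain SAT solver; the topology, forwarding rule, and ACL are invented for illustration and do not come from the Anteater paper.

```python
# Toy data-plane invariant check as a satisfiability query (illustrative).
# Requires the z3-solver package.
from z3 import Bools, And, Not, Solver, sat

# Four header bits of the destination address, as boolean variables.
d3, d2, d1, d0 = Bools("d3 d2 d1 d0")

# Forwarding rule on link A->B: forward destinations matching 10**.
a_to_b = And(d3, Not(d2))
# ACL on link B->C: drop destinations matching 101*, permit everything else.
b_to_c = Not(And(d3, Not(d2), d1))

# Invariant: no packet should travel A -> B -> C.  A violation is a header
# that satisfies every per-hop condition along the path.
s = Solver()
s.add(And(a_to_b, b_to_c))

if s.check() == sat:
    print("invariant violated, counterexample packet:", s.model())
else:
    print("invariant holds: no packet can reach C from A")
```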

    Simple and Effective Static Analysis to Find Bugs

    Much research in recent years has focused on using static analysis to find bugs in software. Many new approaches employing sophisticated program analysis techniques---inter-procedural, context-sensitive, or path-sensitive---have been developed. However, comparatively little work has been done on determining what bugs can be found using simple analysis techniques. We have found that simple static analysis techniques are effective at finding hundreds or thousands of serious software defects in several large commercial software applications. In our research, we have attempted to characterize the bugs that can be found in production software using simple analysis techniques. Examples of simple analysis techniques include inspection of class hierarchies and method signatures, sequential scanning of program instructions using a state machine recognizer, intra-procedural dataflow analysis, and flow-insensitive whole program analysis. To determine what bugs may be found using these techniques, we performed bug-driven research. Starting from examples of real bugs, we tried to develop simple analysis techniques to find similar bugs. Using this approach, we found a large number of serious bugs in production applications and libraries with a relatively low percentage of false positives. The types of bugs our analysis uncovered in production code include null pointer dereferences, infinite recursive loops, data races, deadlocks, and missed thread notifications. One product of this work is a static analysis tool called FindBugs, which analyzes Java programs at the bytecode level. We have distributed FindBugs under an open-source license, and it has been widely adopted by commercial organizations and open-source projects. FindBugs has been downloaded more than 110,000 times since its initial release. Our work makes several contributions to the field. First, we have cataloged many commonly-occurring bug patterns, described effective ways of finding occurrences of those patterns automatically, and classified common reasons why these bugs occur. Second, we have measured the accuracy of our bug detectors on production software and student programming projects, validating the effectiveness of the underlying static analysis techniques. Finally, we have described techniques for determining when static analysis warnings are added or removed as software evolves.
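    The following sketch shows, in simplified form, what a sequential scan with a state-machine recognizer can look like; the instruction format and the single null-dereference rule are made up for illustration and are not FindBugs code.

```python
# Illustrative state-machine scan over a simplified instruction stream:
# flag a method call made directly on a value that was just loaded as null.
instructions = [
    ("ALOAD", "conn"),
    ("ACONST_NULL", None),        # push null onto the stack
    ("INVOKEVIRTUAL", "close"),   # bug: calling a method on the null value
    ("RETURN", None),
]

def scan_for_null_dereference(instrs):
    warnings = []
    state = "START"               # START -> NULL_ON_STACK -> report
    for idx, (op, arg) in enumerate(instrs):
        if op == "ACONST_NULL":
            state = "NULL_ON_STACK"
        elif state == "NULL_ON_STACK" and op.startswith("INVOKE"):
            warnings.append(f"instr {idx}: method '{arg}' invoked on null")
            state = "START"
        else:
            state = "START"
    return warnings

print(scan_for_null_dereference(instructions))
```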

    Improving Software Quality Attributes of PS Using Stylecop

    Software product quality improvement is a desired attribute, and a strenuous effort is required to achieve it. Static Code Analysis (SCA) is used to find coding bugs, and StyleCop is an SCA tool from Microsoft. In this paper, the use of SCA and StyleCop is discussed, and a comparison of software testing, software metrics, and source code analysis is given. The product PS, designed by author1 and author2, is introduced. The PS application is analyzed using the static analysis tool StyleCop, and the recommendations from the analysis are then applied to PS. The results show improved software quality attributes.

    WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code

    Eliminating OS bugs is essential to ensuring the reliability of infrastructures ranging from embedded systems to servers. Several tools based on static analysis have been proposed for finding bugs in OS code. They have, however, emphasized scalability over usability, making it difficult to focus the tools on specific kinds of bugs and to relate the results to patterns in the source code. We propose a declarative approach to bug finding in Linux OS code using a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer expresses specifications for bug finding using a syntax close to that of ordinary C code. The key advantage of our approach is that search specifications can be easily tailored, to eliminate false positives or catch more bugs. We present three case studies that have allowed us to find hundreds of potential bugs.
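    The sketch below imitates the flavor of such a control-flow search, flagging an allocation that is used before any NULL check; it is written in Python with ad-hoc regular expressions rather than the paper's C-like specification language, and the rule and the code fragment are invented for illustration.

```python
# Rough illustration of a declarative "allocation not followed by a NULL
# check" search over C source (not the WYSIWIB/Coccinelle language).
import re

C_SOURCE = """
buf = kmalloc(len, GFP_KERNEL);
memcpy(buf, src, len);          /* potential bug: buf is never checked */
"""

# A rule pairs the pattern that introduces a value with the patterns that
# must (check) or must not (use) follow before the value is used.
RULE = {
    "alloc": re.compile(r"(\w+)\s*=\s*kmalloc\("),
    "check": lambda var: re.compile(rf"if\s*\(\s*!?\s*{var}\b"),
    "use":   lambda var: re.compile(rf"\b{var}\b"),
}

def find_unchecked_allocs(source):
    warnings, pending = [], {}          # var -> line where it was allocated
    for lineno, line in enumerate(source.splitlines(), 1):
        m = RULE["alloc"].search(line)
        if m:
            pending[m.group(1)] = lineno
            continue
        for var, alloc_line in list(pending.items()):
            if RULE["check"](var).search(line):
                del pending[var]        # a NULL check was found
            elif RULE["use"](var).search(line):
                warnings.append(f"line {lineno}: '{var}' (allocated at line "
                                f"{alloc_line}) used without a NULL check")
                del pending[var]
    return warnings

print("\n".join(find_unchecked_allocs(C_SOURCE)))
```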

    How Effective are Smart Contract Analysis Tools? Evaluating Smart Contract Static Analysis Tools Using Bug Injection

    Security attacks targeting smart contracts have been on the rise, leading to financial loss and erosion of trust. Therefore, it is important to enable developers to discover security vulnerabilities in smart contracts before deployment. A number of static analysis tools have been developed for finding security bugs in smart contracts. However, despite the numerous bug-finding tools, there is no systematic approach to evaluate the proposed tools and gauge their effectiveness. This paper proposes SolidiFI, an automated and systematic approach for evaluating smart contract static analysis tools. SolidiFI is based on injecting bugs (i.e., code defects) into all potential locations in a smart contract to introduce targeted security vulnerabilities. SolidiFI then checks the generated buggy contract using the static analysis tools, and identifies the bugs that the tools are unable to detect (false negatives) along with the bugs reported as false positives. SolidiFI is used to evaluate six widely-used static analysis tools, namely, Oyente, Securify, Mythril, SmartCheck, Manticore and Slither, using a set of 50 contracts injected with 9369 distinct bugs. It finds several instances of bugs that are not detected by the evaluated tools despite their claims of being able to detect such bugs, and all the tools report many false positives. Comment: ISSTA 2020.
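    A hedged sketch of the injection-and-comparison workflow is given below; the contract, the injected tx.origin snippet, the statement-boundary heuristic, and the faked tool report are all illustrative stand-ins rather than SolidiFI's actual implementation.

```python
# Illustrative bug-injection workflow: plant a known-vulnerable snippet at
# candidate locations, remember where, then count injected bugs a tool missed.

CONTRACT = """pragma solidity ^0.5.0;
contract Wallet {
    address owner;
    function withdraw(uint amount) public {
        msg.sender.transfer(amount);
    }
}"""

BUG_SNIPPET = "require(tx.origin == owner);"   # tx.origin authentication bug

def inject(source, snippet):
    """Insert the snippet after every line that ends a statement (a crude
    approximation of location selection) and return the new source plus the
    line numbers that now hold injected bugs."""
    out, injected = [], []
    for line in source.splitlines():
        out.append(line)
        if line.rstrip().endswith(";"):
            out.append(" " * (len(line) - len(line.lstrip())) + snippet)
            injected.append(len(out))           # 1-based line of the bug
    return "\n".join(out), injected

def false_negatives(injected_lines, reported_lines):
    """Injected bugs that the analysis tool did not flag."""
    return sorted(set(injected_lines) - set(reported_lines))

buggy_source, bug_lines = inject(CONTRACT, BUG_SNIPPET)
# reported_lines would come from parsing the tool's output; faked here.
print("injected at lines:", bug_lines)
print("missed by tool:", false_negatives(bug_lines, reported_lines=[2]))
```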