    Test-case reduction for C compiler bugs

    To report a compiler bug, one must often find a small test case that triggers the bug. The existing approach to automated test-case reduction, delta debugging, works by removing substrings of the original input; the result is a concatenation of substrings that delta cannot remove. We have found this approach less than ideal for reducing C programs because it typically yields test cases that are too large or even invalid (relying on undefined behavior). To obtain small and valid test cases consistently, we designed and implemented three new, domain-specific test-case reducers. The best of these is based on a novel framework in which a generic fixpoint computation invokes modular transformations that perform reduction operations. This reducer produces outputs that are, on average, more than 25 times smaller than those produced by our other reducers or by the existing reducer that is most commonly used by compiler developers. We conclude that effective program reduction requires more than straightforward delta debugging.
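
    The fixpoint-driven framework described above can be illustrated with a small sketch. This is not the authors' implementation; the driver loop, the example pass, and the is_interesting() predicate below are assumptions chosen to show how modular transformations and an external interestingness test (e.g., "still crashes the compiler") interact until no pass makes further progress.

```python
# Hedged sketch of a fixpoint reduction driver; not the paper's reducer.
def reduce_fixpoint(test_case, passes, is_interesting):
    """Shrink test_case until no transformation pass makes progress."""
    changed = True
    while changed:                    # fixpoint: stop after a full sweep with no progress
        changed = False
        for transform in passes:      # each modular pass proposes smaller variants
            for variant in transform(test_case):
                if is_interesting(variant):   # e.g., variant still triggers the compiler bug
                    test_case = variant
                    changed = True
                    break             # rerun the passes on the smaller input
    return test_case

def remove_one_line(text):
    """A deliberately simple example pass: try deleting each line in turn."""
    lines = text.splitlines(keepends=True)
    for i in range(len(lines)):
        yield "".join(lines[:i] + lines[i + 1:])
```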

    Test Cases Selection Based on Source Code Features Extraction

    Extracting valuable information from source code automatically has been the subject of many research papers. Such information can be used for document traceability, concept or feature extraction, and more. In this paper, we used an Information Retrieval (IR) technique, Latent Semantic Indexing (LSI), for the automatic extraction of source code concepts for the purpose of test case reduction. We used and updated the open-source FLAT Eclipse add-on to try several code stemming approaches. The goal is to identify the approach that best extracts code concepts and thereby improves the process of test case selection or reduction.
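
    As a rough illustration of the LSI step, the sketch below builds a term-document matrix from (hypothetical) identifier text extracted per code unit, projects it into a low-dimensional concept space with a truncated SVD, and ranks code units against a query by cosine similarity; tests covering the top-ranked units would be kept in the reduced suite. The example data and the scikit-learn pipeline are assumptions, not the paper's FLAT-based tooling.

```python
# Minimal LSI sketch (assumed setup, not the paper's tooling).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical "documents": identifier/comment text extracted from code units.
code_units = [
    "parse token stream build syntax tree",
    "open socket send receive packet buffer",
    "render widget layout paint screen",
]

vectorizer = TfidfVectorizer()
term_doc = vectorizer.fit_transform(code_units)   # term-document matrix

lsi = TruncatedSVD(n_components=2)                # latent "concept" space
concepts = lsi.fit_transform(term_doc)

# A change request / feature description used to select relevant code units.
query = lsi.transform(vectorizer.transform(["receive network packet"]))
scores = cosine_similarity(query, concepts)[0]
ranking = sorted(enumerate(scores), key=lambda kv: -kv[1])
print(ranking)   # code units most related to the queried concept come first
```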

    Many-core compiler fuzzing

    We address the compiler correctness problem for many-core systems through novel applications of fuzz testing to OpenCL compilers. Focusing on two methods from prior work, random differential testing and testing via equivalence modulo inputs (EMI), we present several strategies for random generation of deterministic, communicating OpenCL kernels, and an injection mechanism that allows EMI testing to be applied to kernels that otherwise exhibit little or no dynamically-dead code. We use these methods to conduct a large, controlled testing campaign with respect to 21 OpenCL (device, compiler) configurations, covering a range of CPU, GPU, accelerator, FPGA and emulator implementations. Our study provides independent validation of claims in prior work related to the effectiveness of random differential testing and EMI testing, proposes novel methods for lifting these techniques to the many-core setting, and reveals a significant number of OpenCL compiler bugs in commercial implementations.
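
    At its core, random differential testing compares the behaviour of one generated test program across several compiler configurations and treats any disagreement as a potential bug. The sketch below shows that idea only; the compiler commands, file names, and configurations are hypothetical placeholders, and the OpenCL-specific kernel generation and EMI injection from the paper are not modelled.

```python
# Hedged sketch of differential testing across compiler configurations.
import subprocess

# Hypothetical (build command, output binary) configurations.
CONFIGS = [
    (["compilerA", "-O2", "testcase.c", "-o", "a.out"], "./a.out"),
    (["compilerB", "-O2", "testcase.c", "-o", "b.out"], "./b.out"),
]

def build_and_run(build_cmd, exe):
    subprocess.run(build_cmd, check=True)                        # compile the generated test
    result = subprocess.run([exe], capture_output=True, text=True)
    return result.stdout

def differential_test():
    outputs = {build_and_run(cmd, exe) for cmd, exe in CONFIGS}
    if len(outputs) > 1:
        print("output mismatch: possible compiler bug, save test case for reduction")
    else:
        print("all configurations agree on this test case")
```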

    SYSFLOW: Efficient Execution Platform for IoT Devices

    Traditional executable delivery models pose challenges for IoT devices with limited storage, necessitating the download of complete executables and their dependencies. Network solutions such as NFS, designed for data files, incur high IO overhead for irregular access patterns. This paper introduces SYSFLOW, a lightweight network-based executable delivery system for IoT. SYSFLOW delivers executables on demand, redirecting local disk IO to the server through optimized network IO. To improve cache hit rates, SYSFLOW employs server-side action-based prefetching, reducing latency by 45.1% to 75.8% compared to native Linux filesystems on SD cards. In wired environments, SYSFLOW's latency is up to 67.7% lower than NFS. In wireless scenarios, SYSFLOW performs at most 22.9% worse than native Linux, remaining broadly comparable, while outperforming NFS by up to 60.7%. Although SYSFLOW's power consumption may be 6.7% higher than that of NFS, it offers overall energy savings thanks to its shorter processing time.
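
    The server-side action-based prefetching can be pictured as a simple next-block predictor: the server records which executable blocks tend to be requested after one another and pushes the likely successors ahead of the device's next request. The sketch below is only an assumed illustration of that idea; SYSFLOW's actual data structures, protocol, and thresholds are not described here.

```python
# Assumed illustration of action-based prefetching; not SYSFLOW's implementation.
from collections import Counter, defaultdict

class Prefetcher:
    def __init__(self):
        self.followers = defaultdict(Counter)   # block id -> histogram of next requests
        self.prev = None

    def record(self, block_id):
        """Update the model with the block the device just requested."""
        if self.prev is not None:
            self.followers[self.prev][block_id] += 1
        self.prev = block_id

    def predict(self, block_id, k=2):
        """Blocks most often requested right after block_id: push these to the device."""
        return [b for b, _ in self.followers[block_id].most_common(k)]
```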

    Chainsaw Before Scalpel: Dependency-Based Pre-processing for Program Reduction

    Program reduction techniques, which aim to minimize the size of a program, have many applications, including software debloating, debugging, and optimization in general. These techniques have therefore been studied extensively for decades. Past work in this area has typically focused on either performing larger, more effective edits (e.g., Delta Debugging, Hierarchical Delta Debugging) or reducing the search space based on a language grammar (e.g., Perses, C-Reduce). Most of these techniques treat minimal output size as the primary goal, with reduction speed only a secondary concern. We propose Chainsaw, a novel approach that improves existing techniques by offloading a subset of the reduction to a pre-processing step. Since Chainsaw does not need to be as thorough as existing reducers, this creates an opportunity to take a new approach that benefits overall end-to-end performance. Our key insight is that, in practical applications, a considerable amount of the input code is not needed, and dependency analysis enables fast and effective identification of this removable code. This dependency analysis is both general, and thus easily applicable to different languages, and inexpensive, and thus amenable to a speedy pre-processing step. Such analysis lets the higher-fidelity techniques previously developed skip a significant quantity of work and produce better results more quickly. We also present a prototype tool based on our approach. Our tool finds unused sections of code by analyzing the dependencies between items in the input text and is straightforward to implement. We leverage existing analysis tooling via the Language Server Protocol to easily identify dependencies. Our initial results are promising and show that our approach is extremely fast and can yield up to a twofold end-to-end speed improvement when used as a pre-processor with existing state-of-the-art techniques.
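
    The dependency-based pre-processing idea reduces to a reachability computation: build a graph of definitions and the items each one references, keep everything reachable from the entry points, and delete the rest before invoking a heavier reducer such as C-Reduce or Perses. The sketch below assumes the dependency graph has already been extracted (e.g., from Language Server Protocol responses, which are not modelled here) and is not the Chainsaw prototype itself.

```python
# Illustrative reachability-based dead-item detection (assumed, not Chainsaw).
def reachable(deps, roots):
    """deps maps each item to the items it references; roots are entry points."""
    keep, work = set(), list(roots)
    while work:
        item = work.pop()
        if item in keep:
            continue
        keep.add(item)
        work.extend(deps.get(item, ()))
    return keep

# Hypothetical example: 'helper2' is never referenced from main, so it can be
# dropped before the main reducer runs.
deps = {"main": ["helper1"], "helper1": ["util"], "helper2": ["util"], "util": []}
removable = set(deps) - reachable(deps, ["main"])
print(removable)   # {'helper2'}
```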

    Cause reduction for quick testing

    In random testing, it is often desirable to produce a "quick test" - an extremely inexpensive test suite that can serve as a frequently applied regression and allow the benefits of random testing to be obtained even in very slow or oversubscribed test environments. Delta debugging is an algorithm that, given a failing test case, produces a smaller test case that also fails and typically executes much more quickly. Delta debugging of random tests can produce effective regression suites for previously detected faults, but such suites often have little power for detecting new faults, and in some cases provide poor code coverage. This paper proposes extending delta debugging by simplifying tests with respect to code coverage, an instance of a generalization of delta debugging we call cause reduction. We show that test suites reduced in this fashion can provide very effective quick tests for real-world programs. For Mozilla's SpiderMonkey JavaScript engine, the reduced suite is more effective for finding software faults, even if its reduced runtime is not considered. The effectiveness of a reduction-based quick test persists through major changes to the software under test.
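
    Cause reduction keeps the shape of delta debugging but swaps the predicate: instead of "the smaller test still fails", a candidate is accepted only if it preserves some other effect, here the code coverage of the original test. The simplified chunk-removal loop below sketches that idea under assumptions; coverage() stands in for real instrumentation, and the paper's actual reducer is not reproduced.

```python
# Hedged sketch of cause reduction with coverage as the preserved effect.
def cause_reduce(test, coverage):
    """Greedy chunk removal that keeps candidates only if coverage is unchanged."""
    target = coverage(test)                 # the effect that must be preserved
    n = 2                                   # number of chunks, refined when stuck
    while n <= len(test):
        chunk = max(1, len(test) // n)
        shrunk, i = False, 0
        while i < len(test):
            candidate = test[:i] + test[i + chunk:]      # drop one chunk
            if candidate and coverage(candidate) == target:
                test, shrunk = candidate, True           # smaller test, same coverage
            else:
                i += chunk                               # chunk is needed; keep it
        n = n if shrunk else n * 2
    return test
```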

    Augmenting American Fuzzy Lop to Increase the Speed of Bug Detection

    Whitebox fuzz testing is a vital part of the software testing process in the software development life cycle (SDLC), used both for bug detection and for security vulnerability checking. However, current tools cannot detect all bugs or cover the entire code under test in a reasonable time. This study explores several whitebox fuzzing techniques and tools currently in use (AFL, SAGE, Driller, etc.), followed by a discussion of their strategies and the challenges facing them. One of the most popular state-of-the-art fuzzers, American Fuzzy Lop (AFL), is discussed in detail, and we put forth modifications proposed to reduce the time it requires when running under QEMU emulation mode. The study found that the AFL fuzzer can be sped up by injecting an intermediary layer of code into the Tiny Code Generator (TCG) that helps translate blocks between the two architectures being used for testing. The modified version of AFL found on average 1.6 more crashes than the basic AFL running in QEMU mode. The study then recommends future research avenues in the form of hybrid techniques to resolve the challenges faced by state-of-the-art fuzzers and to create an optimal fuzzing tool. The motivation behind the study is to optimize the fuzzing process in order to reduce the time taken to perform software testing and produce robust, error-free software products.
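
    The speed-up described above rests on a caching idea: avoid re-translating guest basic blocks on every execution of the target under emulation. The sketch below conveys only that concept in generic form; QEMU's TCG internals and AFL's fork-server mechanics are not modelled, and translate_block() is a hypothetical stand-in for the emulator's translator.

```python
# Conceptual sketch of memoizing block translation (assumed, not QEMU/TCG code).
translation_cache = {}

def get_translated_block(guest_pc, translate_block):
    """Return a translated block, reusing a prior translation when available."""
    block = translation_cache.get(guest_pc)
    if block is None:
        block = translate_block(guest_pc)     # expensive cross-architecture translation
        translation_cache[guest_pc] = block   # reuse on every later execution
    return block
```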