    Verifying multi-threaded software using SMT-based context-bounded model checking

    We describe and evaluate three approaches for model checking multi-threaded software with shared variables and locks, using bounded model checking based on Satisfiability Modulo Theories (SMT) and our modelling of the synchronization primitives of the Pthread library. In the lazy approach, we generate all possible interleavings and call the SMT solver on each of them individually, until we either find a bug or have systematically explored all interleavings. In the schedule recording approach, we encode all possible interleavings into one single formula and then exploit the high speed of the SMT solvers. In the underapproximation and widening approach, we reduce the state space by abstracting the number of interleavings from the proofs of unsatisfiability generated by the SMT solvers. In all three approaches, we bound the number of context switches allowed among threads in order to reduce the number of interleavings explored. We implemented these approaches in ESBMC, our SMT-based bounded model checker for ANSI-C programs. Our experiments show that ESBMC can analyze larger problems and substantially reduce the verification time compared to state-of-the-art techniques that use iterative context-bounding algorithms or counterexample-guided abstraction refinement.
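
    As a concrete illustration of this setting, the sketch below (my own example, not code from the paper) shows the kind of ANSI-C Pthread program such a checker takes as input: two threads update a shared counter under a mutex, and the assertion is the safety property the tool tries to verify across all interleavings up to the context-switch bound.

    ```c
    /* Sketch of a Pthread program a context-bounded bounded model checker
     * could analyze; the assertion states the property to verify. */
    #include <pthread.h>
    #include <assert.h>

    static int counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);   /* Pthread primitive the tool models */
        counter++;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        assert(counter == 2);        /* must hold in every interleaving */
        return 0;
    }
    ```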

    Strong Memory Consistency For Parallel Programming

    Correctly synchronizing multithreaded programs is challenging, and errors can lead to program failures (e.g., atomicity violations). Existing memory consistency models rule out some possible failures, but are limited by depending on subtle programmer-defined locking code and by providing unintuitive semantics for incorrectly synchronized code. Stronger memory consistency models assist programmers by providing easier-to-understand semantics for memory access interleavings in parallel code. This dissertation proposes a new strong memory consistency model based on ordering-free regions (OFRs), which are spans of dynamic instructions between consecutive ordering constructs (e.g., barriers). Atomicity over ordering-free regions is stronger than that of existing strong memory consistency models, with competitive performance. Ordering-free regions also simplify programmer reasoning by limiting the potential for atomicity violations to fewer points in the program’s execution. This dissertation explores both software-only and hardware-supported systems that provide OFR serializability.
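
    The hypothetical Pthreads sketch below (not taken from the dissertation) illustrates the idea: the span between the two barrier waits is an ordering-free region. Under conventional models, the unsynchronized check-then-update of `best` can interleave across threads and lose the larger value; under OFR serializability, the whole region executes atomically, so that atomicity violation cannot occur.

    ```c
    /* Hypothetical sketch: the span between the two barrier waits is an
     * ordering-free region (OFR). Without OFR atomicity, the unsynchronized
     * check-then-update of `best` can interleave so that the larger value
     * is lost; with OFR serializability the region executes atomically. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 2

    static pthread_barrier_t bar;
    static int best = 0;                /* shared: largest value seen so far */

    static void *work(void *arg)
    {
        int mine = (int)(long)arg + 1;

        pthread_barrier_wait(&bar);     /* ordering construct: OFR starts */
        if (mine > best)                /* check ...                      */
            best = mine;                /* ... then act (racy in general) */
        pthread_barrier_wait(&bar);     /* ordering construct: OFR ends   */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        pthread_barrier_init(&bar, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, work, (void *)i);
        for (long i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        printf("best = %d\n", best);    /* expected 2; a lost update yields 1 */
        return 0;
    }
    ```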

    Techniques for Detection, Root Cause Diagnosis, and Classification of In-Production Concurrency Bugs

    Concurrency bugs are at the heart of some of the worst bugs that plague software. They slow down software development because it can take weeks or even months before developers can identify and fix them. In-production detection, root cause diagnosis, and classification of concurrency bugs are challenging, because these activities require heavyweight analyses such as exploring program paths and determining failing program inputs and schedules, none of which are suited to software running in production. This dissertation develops practical techniques for the detection, root cause diagnosis, and classification of concurrency bugs in in-production software. Furthermore, we develop ways for developers to better reason about concurrent programs. This dissertation builds upon the following principles:
    — The approach spans multiple layers of the system stack, because concurrency spans many layers of the system stack.
    — It performs most of the heavyweight analyses in-house and resorts to minimal in-production analysis, moving the heavy lifting to where it is least disruptive.
    — It eschews custom hardware solutions that may be infeasible to implement in the real world.
    Relying on these principles, this dissertation introduces:
    1. Techniques to automatically detect concurrency bugs (data races and atomicity violations) in production by combining in-house static analysis and in-production dynamic analysis.
    2. A technique to automatically identify the root causes of in-production failures, with a particular emphasis on failures caused by concurrency bugs.
    3. A technique that, given a data race, automatically classifies it based on its potential consequences, allowing developers to answer questions such as “can the data race cause a crash or a hang?” or “does the data race have any observable effect?”.
    We build a toolchain that implements all of the aforementioned techniques, and we show that these tools are effective, incur low runtime performance overhead, and have high accuracy and precision.
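
    To make the two bug classes concrete, the C sketch below is an illustrative example of my own (not code from the dissertation): `hits` is incremented by two threads with no synchronization at all (a data race), while `reader` checks and then uses `conf` in two separately locked sections, so `resetter` can free the pointer in between (an atomicity violation).

    ```c
    /* Hypothetical C sketch of the two bug classes: a data race on `hits`
     * and an atomicity violation in the check-then-use of `conf`. */
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    static long hits;                      /* updated with no synchronization */
    static char *conf;                     /* shared configuration string     */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *count(void *arg)          /* data race on `hits` */
    {
        (void)arg;
        for (int i = 0; i < 1000; i++)
            hits++;                        /* unsynchronized read-modify-write */
        return NULL;
    }

    static void *reader(void *arg)         /* atomicity violation */
    {
        (void)arg;
        pthread_mutex_lock(&m);
        int ok = (conf != NULL);           /* check */
        pthread_mutex_unlock(&m);          /* lock dropped between check and use */
        pthread_mutex_lock(&m);
        if (ok)
            (void)strlen(conf);            /* use: conf may be NULL by now (crash) */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    static void *resetter(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&m);
        free(conf);
        conf = NULL;                       /* invalidates the earlier check */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void)
    {
        conf = malloc(16);
        if (!conf) return 1;
        strcpy(conf, "threads=4");
        pthread_t c1, c2, r, x;
        pthread_create(&c1, NULL, count, NULL);
        pthread_create(&c2, NULL, count, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&x, NULL, resetter, NULL);
        pthread_join(c1, NULL); pthread_join(c2, NULL);
        pthread_join(r, NULL);  pthread_join(x, NULL);
        return 0;
    }
    ```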

    Automatic Detection, Validation and Repair of Race Conditions in Interrupt-Driven Embedded Software

    Interrupt-driven programs are widely deployed in safety-critical embedded systems to perform hardware- and resource-dependent data operation tasks. The frequent use of interrupts in these systems can cause race conditions due to interactions between application tasks and interrupt handlers (or between two interrupt handlers). Numerous program analysis and testing techniques have been proposed to detect races in multithreaded programs; little work, however, has addressed race condition problems related to hardware interrupts. In this paper, we present SDRacer, an automated framework that can detect, validate and repair race conditions in interrupt-driven embedded software. It uses a combination of static analysis and symbolic execution to generate input data for exercising the potential races. It then employs virtual platforms to dynamically validate these races by forcing the interrupts to occur at the potential racing points. Finally, it provides repair candidates to eliminate the detected races. We evaluate SDRacer on nine real-world embedded programs written in C. The results show that SDRacer can precisely detect and successfully fix race conditions.
    Comment: This is a draft version of the published paper. Ke Wang provides suggestions for improving the paper and the README of the GitHub repository.
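
    The following embedded-C sketch (hypothetical; the ISR hook and the IRQ-masking functions are platform assumptions, stubbed so it compiles) shows the race class such a framework targets and the kind of repair it can propose: a task and an interrupt handler share a flag and a data word, and without masking interrupts the handler can fire between the task's check and its read.

    ```c
    /* Hypothetical embedded-C sketch; the ISR hook and the IRQ-masking
     * functions are platform assumptions, stubbed so the sketch compiles. */
    #include <stdint.h>
    #include <stdio.h>

    static volatile uint16_t sample;        /* written by the ISR           */
    static volatile uint8_t  sample_ready;  /* set by ISR, cleared by task  */

    static void disable_irq(void) { /* e.g., __disable_irq() on Cortex-M */ }
    static void enable_irq(void)  { /* platform-specific; stub here      */ }

    void adc_isr(void)                      /* interrupt handler            */
    {
        sample = 42;                        /* stand-in for the ADC data register */
        sample_ready = 1;
    }

    uint16_t task_poll_racy(void)
    {
        uint16_t v = 0;
        if (sample_ready) {                 /* check                              */
            v = sample;                     /* the ISR may fire here...           */
            sample_ready = 0;               /* ...and this clear drops its sample */
        }
        return v;
    }

    /* The kind of repair such a framework can propose: make the check and
     * the read/clear atomic with respect to the interrupt. */
    uint16_t task_poll_fixed(void)
    {
        uint16_t v = 0;
        disable_irq();
        if (sample_ready) {
            v = sample;
            sample_ready = 0;
        }
        enable_irq();
        return v;
    }

    int main(void)
    {
        adc_isr();                          /* simulate one interrupt firing */
        printf("%u\n", (unsigned)task_poll_fixed());
        return 0;
    }
    ```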

    Dynamic Analysis of Embedded Software

    Most embedded applications are constructed with multiple threads to handle concurrent events. For optimization and debugging of these programs, dynamic program analysis is widely used to collect execution information while the program is running. Unfortunately, the non-deterministic behavior of multithreaded embedded software makes dynamic analysis difficult. In addition, the instrumentation overhead of gathering execution information may change the execution of a program and lead to distorted analysis results, i.e., the probe effect. This thesis presents a framework that tackles the non-determinism and probe effect incurred in dynamic analysis of embedded software. The thesis consists of three parts. First, we discuss a deterministic replay framework that provides reproducible execution; once a program execution is recorded, software instrumentation can be safely applied during replay without probe effect. Second, we examine the probe effect and propose a simulation-based analysis that detects whether the recording instrumentation changes the original program execution. Lastly, the thesis discusses data race detection algorithms that help remove data races for correctness of the replay and of the simulation-based analysis; the focus is on making detection efficient for C/C++ programs and on increasing its scalability on multi-core machines.
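
    As a rough sketch of the recording side of such a replay framework (an assumption about the general technique, not the thesis's implementation), the code below interposes a wrapper on lock acquisition and appends each acquisition to a log that a replayer could later enforce; accesses that race outside any lock never reach this log, which is why replay correctness depends on data race detection.

    ```c
    /* Sketch of synchronization-order recording for deterministic replay
     * (an assumption about the general technique, simplified): every lock
     * acquisition is appended to a log that a replayer could enforce. */
    #include <pthread.h>
    #include <stdio.h>

    #define LOG_CAP 1024

    struct sync_event { pthread_t tid; void *lock; };

    static struct sync_event sync_log[LOG_CAP];
    static int log_len;
    static pthread_mutex_t log_mu = PTHREAD_MUTEX_INITIALIZER;

    /* Wrapper the recorder would interpose in place of pthread_mutex_lock(). */
    static int recorded_lock(pthread_mutex_t *m)
    {
        int rc = pthread_mutex_lock(m);
        pthread_mutex_lock(&log_mu);
        if (log_len < LOG_CAP) {
            sync_log[log_len].tid  = pthread_self();
            sync_log[log_len].lock = (void *)m;
            log_len++;
        }
        pthread_mutex_unlock(&log_mu);
        return rc;
    }

    static pthread_mutex_t data_mu = PTHREAD_MUTEX_INITIALIZER;
    static int shared;

    static void *worker(void *arg)
    {
        (void)arg;
        recorded_lock(&data_mu);
        shared++;                 /* lock-protected: the log captures its order */
        pthread_mutex_unlock(&data_mu);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("recorded %d lock acquisitions, shared = %d\n", log_len, shared);
        return 0;
    }
    ```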

    Accurately Classifying Data Races with Portend

    Even though most data races are harmless, the harmful ones are at the heart of some of the worst concurrency bugs. Eliminating all data races from programs is impractical (e.g., system performance could suffer severely), yet spotting just the harmful ones is like finding a needle in a haystack: state-of-the-art data race detectors and classifiers suffer from high false positive rates of 37%–84%. We present Portend, a technique and system for automatically triaging suspect data races based on their potential consequences: Could they lead to crashes or hangs? Alter system state? Could their effects be externalized? Or are they harmless? Our proposed technique achieves very high accuracy by efficiently analyzing multiple paths and multiple thread schedules in combination, and by performing symbolic comparison between program states. We ran Portend on several dozen data races from real-world applications, and it correctly classified all of them, with no human effort. It also produced easy-to-understand evidence of the consequences of harmful races, thus proving their harmfulness and making debugging easier. We envision using Portend for testing and debugging, as well as for automatically triaging bug reports.
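
    A minimal example of the "harmful" end of this spectrum (my own illustration, not one of the races studied in the paper): the unsynchronized write that frees and nulls `shared` races with another thread's non-NULL check and dereference, so depending on the schedule the race can cause a crash.

    ```c
    /* Hypothetical example of a race with harmful consequences: the
     * free-and-NULL of `shared` races with the check-then-dereference,
     * so some schedules crash (NULL dereference or use-after-free). */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int *shared;

    static void *consumer(void *arg)
    {
        (void)arg;
        if (shared != NULL)              /* racy read                     */
            printf("%d\n", *shared);     /* may run after the free below  */
        return NULL;
    }

    static void *invalidator(void *arg)
    {
        (void)arg;
        free(shared);
        shared = NULL;                   /* racy write                    */
        return NULL;
    }

    int main(void)
    {
        shared = malloc(sizeof *shared);
        if (!shared) return 1;
        *shared = 7;
        pthread_t c, i;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&i, NULL, invalidator, NULL);
        pthread_join(c, NULL);
        pthread_join(i, NULL);
        return 0;
    }
    ```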

    SABRes: Atomic Object Reads for In-Memory Rack-Scale Computing

    Modern in-memory services rely on large distributed object stores to achieve the high scalability essential to service thousands of requests concurrently. The independent and unpredictable nature of incoming requests results in random accesses to the object store, triggering frequent remote memory accesses. State-of-the-art distributed memory frameworks leverage the one-sided operations offered by RDMA technology to mitigate the traditionally high cost of remote memory access. Unfortunately, the limited semantics of RDMA one-sided operations bound remote memory access atomicity to a single cache block; therefore, atomic remote object access relies on software mechanisms. Emerging highly integrated rack-scale systems that reduce the latency of one-sided operations to a small multiple of DRAM latency expose the overhead of these software mechanisms as a major latency contributor. This technology-triggered paradigm shift calls for new one-sided operations with stronger semantics. We take a step in that direction by proposing SABRes, a new one-sided operation that provides atomic remote object reads in hardware. We then present LightSABRes, a lightweight hardware accelerator for SABRes that removes all atomicity-associated software overheads. Compared to a state-of-the-art software atomicity mechanism, LightSABRes improves the throughput of a microbenchmark atomically accessing 128B-8KB objects from remote memory by 15-97%, and the throughput of a modern in-memory distributed object store by 30-60%.
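
    The sketch below illustrates the kind of software atomicity mechanism whose overhead LightSABRes is designed to remove; it is my own seqlock-style example under stated assumptions, not the paper's design. A writer bumps a per-object version to odd before and to even after an update, and a reader retries when the version is odd or changes, because a plain one-sided read of a multi-cache-block object could otherwise return a torn value. Local memory stands in for the remote object store; a real system would issue RDMA reads instead, and the C11 memory ordering here is simplified for illustration.

    ```c
    /* Seqlock-style sketch of a software atomicity check for multi-cache-
     * block object reads (my own illustration, not the paper's design).
     * Memory ordering and the non-atomic payload copies are simplified. */
    #include <stdatomic.h>
    #include <string.h>
    #include <stdio.h>

    #define OBJ_BYTES 256                    /* spans several cache blocks */

    struct object {
        _Atomic unsigned version;            /* odd while a write is in flight */
        char payload[OBJ_BYTES];
    };

    static void object_write(struct object *o, const char *data)
    {
        unsigned v = atomic_load_explicit(&o->version, memory_order_relaxed);
        atomic_store_explicit(&o->version, v + 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_release);   /* mark write in flight */
        memcpy(o->payload, data, OBJ_BYTES);
        atomic_store_explicit(&o->version, v + 2, memory_order_release);
    }

    static int object_read(struct object *o, char *out)
    {
        for (int tries = 0; tries < 1000; tries++) {
            unsigned v1 = atomic_load_explicit(&o->version, memory_order_acquire);
            if (v1 & 1)
                continue;                            /* write in progress, retry */
            memcpy(out, o->payload, OBJ_BYTES);
            atomic_thread_fence(memory_order_acquire);
            if (atomic_load_explicit(&o->version, memory_order_relaxed) == v1)
                return 0;                            /* consistent snapshot */
        }
        return -1;                                   /* gave up (illustration only) */
    }

    int main(void)
    {
        static struct object obj;
        char in[OBJ_BYTES] = "atomic object read";
        char out[OBJ_BYTES];
        object_write(&obj, in);
        if (object_read(&obj, out) == 0)
            printf("%s\n", out);
        return 0;
    }
    ```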

    Data Races vs. Data Race Bugs: Telling the Difference with Portend

    Even though most data races are harmless, the harmful ones are at the heart of some of the worst concurrency bugs. Alas, spotting just the harmful data races in programs is like finding a needle in a haystack: 76%-90% of the true data races reported by state-of-the-art race detectors turn out to be harmless [45]. We present Portend, a tool that not only detects races but also automatically classifies them based on their potential consequences: Could they lead to crashes or hangs? Could their effects be visible outside the program? Are they harmless? Our proposed technique achieves high accuracy by efficiently analyzing multiple paths and multiple thread schedules in combination, and by performing symbolic comparison between program outputs. We ran Portend on 7 real-world applications: it detected 93 true data races and correctly classified 92 of them, with no human effort. Six of them are harmful races. Portend’s classification accuracy is up to 88% higher than that of existing tools, and it produces easy-to-understand evidence of the consequences of harmful races, thus both proving their harmfulness and making debugging easier. We envision Portend being used for testing and debugging, as well as for automatically triaging bug reports.
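
    As a counterpart to the crash-prone race sketched earlier, the hypothetical example below shows the other end of Portend's spectrum: both threads write the same value to `initialized`, so although the accesses race, every schedule produces the same output and a consequence-based classifier would report the race as having no observable effect.

    ```c
    /* Hypothetical example of a race a consequence-based classifier would
     * call harmless: both threads write the same value, so every schedule
     * produces the same observable result. */
    #include <pthread.h>
    #include <stdio.h>

    static int initialized;           /* set to 1 by whichever thread gets there first */

    static void *init_path(void *arg)
    {
        (void)arg;
        if (!initialized)             /* racy read                             */
            initialized = 1;          /* racy write, but always the same value */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, init_path, NULL);
        pthread_create(&b, NULL, init_path, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("initialized = %d\n", initialized);   /* 1 in every schedule */
        return 0;
    }
    ```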