86 research outputs found

    Specification and Verification of Synchronization with Condition Variables

    Automatic Detection, Validation and Repair of Race Conditions in Interrupt-Driven Embedded Software

    Interrupt-driven programs are widely deployed in safety-critical embedded systems to perform hardware- and resource-dependent data operation tasks. The frequent use of interrupts in these systems can cause race conditions due to interactions between application tasks and interrupt handlers (or between two interrupt handlers). Numerous program analysis and testing techniques have been proposed to detect races in multithreaded programs; little work, however, has addressed race condition problems related to hardware interrupts. In this paper, we present SDRacer, an automated framework that can detect, validate, and repair race conditions in interrupt-driven embedded software. It uses a combination of static analysis and symbolic execution to generate input data for exercising the potential races. It then employs virtual platforms to dynamically validate these races by forcing the interrupts to occur at the potential racing points. Finally, it provides repair candidates to eliminate the detected races. We evaluate SDRacer on nine real-world embedded programs written in C. The results show that SDRacer can precisely detect and successfully fix race conditions.
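
    To make the bug class concrete, the following minimal C sketch shows the kind of task/ISR race SDRacer targets (illustrative only, not taken from the paper; disable_irq and enable_irq are hypothetical platform hooks, stubbed here so the sketch compiles):

        #include <stdint.h>

        #define BUF_SIZE 64

        static volatile uint8_t buf[BUF_SIZE];
        static volatile uint8_t head;            /* shared: task vs. ISR */

        /* Hypothetical interrupt-mask hooks; a real port would map them
         * to the MCU's mask/unmask instructions. */
        static void disable_irq(void) {}
        static void enable_irq(void)  {}

        /* Runs on a hardware interrupt. */
        void uart_rx_isr(uint8_t byte)
        {
            buf[head] = byte;                        /* (1) store slot  */
            head = (uint8_t)((head + 1) % BUF_SIZE); /* (2) bump index  */
        }

        /* Application task: a read-modify-write on the shared index.
         * Without masking, an interrupt arriving between the read and
         * the write loses the ISR's update; bracketing the access with
         * interrupt masking is a typical repair candidate. */
        void task_consume_byte(void)
        {
            disable_irq();
            if (head > 0)
                head = (uint8_t)(head - 1);
            enable_irq();
        }

        int main(void)                           /* host-side smoke test */
        {
            uart_rx_isr(0x41);
            task_consume_byte();
            return 0;
        }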

    A compiler cost model for speculative multithreading chip-multiprocessor architectures

    The Java Memory Model

    After many years, support for multithreading has been integrated into mainstream programming languages. Inclusion of this feature brings with it a need for a clear and direct explanation of how threads interact through memory. Programmers need to be told, simply and clearly, what might happen when their programs execute. Compiler writers need to be able to work their magic without interfering with the promises that are made to programmers. Java's original threading specification, its memory model, was fundamentally flawed. Some language features, like volatile fields, were under-specified: their treatment was so weak as to render them useless. Other features, including fields without access modifiers, were over-specified: the memory model prevents almost all optimizations of code containing these "normal" fields. Finally, some features, like final fields, had no specification at all beyond that of normal fields; no additional guarantees were provided about what will happen when they are used. This work has attempted to remedy these limitations. We provide a clear and concise definition of thread interaction. It is sufficiently simple for programmers to work with, and flexible enough to take advantage of compiler- and processor-level optimizations. We also provide formal and informal techniques for verifying that the model provides this balance. These issues had never been addressed for any programming language: in addressing them for Java, this dissertation provides a framework for all multithreaded languages. The work described in this dissertation has been incorporated into version 5.0 of the Java programming language.
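
    The repaired model gives volatile fields acquire/release publication semantics. Here is a rough C11 analogue of that idiom, with stdatomic standing in for Java's volatile (a sketch of the publication guarantee, not of the Java Memory Model itself):

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        static int payload;                  /* plays a "normal" field  */
        static atomic_bool ready;            /* plays a volatile field  */

        static void *writer(void *arg)
        {
            (void)arg;
            payload = 42;                    /* ordinary write          */
            atomic_store_explicit(&ready, true,
                                  memory_order_release);  /* publish    */
            return NULL;
        }

        static void *reader(void *arg)
        {
            (void)arg;
            while (!atomic_load_explicit(&ready, memory_order_acquire))
                ;                            /* spin until published    */
            /* The acquire load synchronizes with the release store, so
             * the earlier write to payload is guaranteed visible here. */
            printf("payload = %d\n", payload);
            return NULL;
        }

        int main(void)
        {
            pthread_t w, r;
            pthread_create(&r, NULL, reader, NULL);
            pthread_create(&w, NULL, writer, NULL);
            pthread_join(w, NULL);
            pthread_join(r, NULL);
            return 0;
        }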

    Comparative modelling and verification of Pthreads and Dthreads

    The POSIX threads (Pthreads) library is a thread API for C/C++ used to control parallel threads and spawn concurrent process flows. Programming in Pthreads usually suffers from undesirable deadlock, data race, and race condition problems due to the potentially nondeterministic execution behaviors of parallel threads. Dthreads, another multithreading model that re-implements Pthreads, was proposed by Liu et al. for efficient deterministic multithreading. They found that, under specific test cases, Dthreads can effectively prevent data races; however, no comparison test had been made with Pthreads. To perform a formal comparison between Pthreads and Dthreads over deadlocks, data races, and race conditions, in this paper we adopt CSP (communicating sequential processes) as a formal model for specifying part of the API functions in Pthreads and Dthreads, and illustrate the model construction using four classical example programs. By feeding the models into the model checker PAT (process analysis toolkit), we have verified that deadlocks and data races exist in Pthreads but not in Dthreads for the considered programs. We have also found that neither of them can prevent race conditions. Our comparative modelling and verification show that, though Dthreads cannot prevent all deadlock situations (as shown by the verification results of another two example programs), Dthreads is better than Pthreads at eliminating data races and preventing deadlocks. Considering the limited scalability of Dthreads, we have introduced a new programming model to support coarse granularity in bank transfer. Our modelling is also extended to cover the synchronization operations in Liu et al.'s work.
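
    As a concrete illustration of the data races at issue, here is a minimal Pthreads program in the spirit of the classical examples the paper models (a sketch, not one of the paper's four programs): two threads perform an unsynchronized read-modify-write on a shared counter, and a mutex removes the race:

        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg)
        {
            int use_lock = *(int *)arg;
            for (int i = 0; i < 100000; i++) {
                if (use_lock) pthread_mutex_lock(&lock);
                counter++;                 /* racy when use_lock == 0 */
                if (use_lock) pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void)
        {
            int use_lock = 0;          /* flip to 1 to remove the race */
            pthread_t a, b;
            pthread_create(&a, NULL, worker, &use_lock);
            pthread_create(&b, NULL, worker, &use_lock);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("counter = %ld (expected 200000)\n", counter);
            return 0;
        }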

    Test-Driven Development of Concurrent Programs using Concuerror

    This paper advocates the test-driven development of concurrent Erlang programs in order to detect early and eliminate the vast majority of concurrency-related errors that may occur in their execution. To facilitate this task we have developed a tool, called Concuerror, that exhaustively explores process interleavings (possibly up to some preemption bound) and presents detailed interleaving information for any errors that occur. We describe in detail the use of Concuerror on a non-trivial concurrent Erlang program that we develop step by step in a test-driven fashion.
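
    Concuerror itself targets Erlang, but the core idea of systematically enumerating schedules can be sketched in a few lines of C (illustrative only; the preemption-bounding optimization is omitted): two model threads each read and then write a shared counter, and every interleaving is explored by backtracking:

        #include <stdio.h>

        #define STEPS 2                  /* steps per thread: read, write */

        static int counter, local[2], pc[2];  /* shared + per-thread state */
        static int schedule[2 * STEPS];

        static void step(int t)          /* run thread t's next operation  */
        {
            if (pc[t] == 0) local[t] = counter;      /* read              */
            else            counter = local[t] + 1;  /* write back        */
            pc[t]++;
        }

        static void explore(int depth)
        {
            if (depth == 2 * STEPS) {    /* complete schedule: check it   */
                if (counter != 2) {
                    printf("lost update under schedule:");
                    for (int i = 0; i < depth; i++)
                        printf(" T%d", schedule[i]);
                    printf(" (counter = %d)\n", counter);
                }
                return;
            }
            for (int t = 0; t < 2; t++) {
                if (pc[t] == STEPS) continue;        /* thread finished   */
                int sc = counter, sl = local[t], sp = pc[t];
                schedule[depth] = t;
                step(t);
                explore(depth + 1);
                counter = sc; local[t] = sl; pc[t] = sp;  /* backtrack    */
            }
        }

        int main(void)
        {
            explore(0);                  /* prints the 4 buggy schedules  */
            return 0;
        }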

    2008 Abstracts Collection – IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science

    This volume contains the proceedings of the 28th International Conference on the Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2008), organized under the auspices of the Indian Association for Research in Computing Science (IARCS).

    Slicing of Concurrent Programs and its Application to Information Flow Control

    This thesis presents a practical technique for information flow control for concurrent programs with threads and shared-memory communication. The technique guarantees confidentiality of information with respect to a reasonable attacker model and utilizes program dependence graphs (PDGs), a language-independent representation of information flow in a program.
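
    A hypothetical toy example of the flows such a PDG-based analysis tracks (not taken from the thesis): an explicit flow shows up as a data-dependence edge in the PDG, an implicit flow as a control-dependence edge:

        #include <stdio.h>

        int main(void)
        {
            int secret = 7;         /* high (confidential) input          */
            int leak1, leak2;

            leak1 = secret + 1;     /* explicit flow: data-dependence
                                       edge secret -> leak1 in the PDG    */

            if (secret > 0)         /* implicit flow: leak2 is control-   */
                leak2 = 1;          /* dependent on the branch on secret  */
            else
                leak2 = 0;

            /* Both outputs are reachable from `secret` in the PDG, so a
             * PDG-based checker flags both as confidentiality violations. */
            printf("%d %d\n", leak1, leak2);
            return 0;
        }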

    Cooperative auto-tuning of parallel skeletons

    Improving program performance through the use of multiple homogeneous processing elements, or cores, is commonplace. However, these architectures increase the complexity required at the software level. Existing work is focused on optimising programs that run in isolation on these systems, but ignores the fact that, in reality, these systems run multiple parallel programs concurrently, with programs competing for system resources. In order to improve performance in this shared environment, cooperative tuning of multiple, concurrently running parallel programs is required. Moreover, the set of programs running on the system – the system workload – is dynamic and rapidly changing. This makes cooperative tuning a challenge, as it must react rapidly to changes in the system workload. This thesis explores the scope for performance improvement from cooperatively tuning skeleton parallel programs, and techniques that can be used to cooperatively auto-tune parallel programs. Parallel skeletons provide a clear separation between algorithm description and implementation, and provide tuning knobs that the system can use to make high-level changes to a program's implementation. This work is in three parts: (i) how many threads should be allocated to each program running on the system, (ii) on which cores a program's threads should be executed, and (iii) what values should be chosen for high-level parameters of the parallel skeletons. We demonstrate that significant performance improvements are available in each of these areas, compared to the current state-of-the-art.
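
    Two of these knobs, thread count and core placement, can be sketched directly with Pthreads on Linux (illustrative and glibc-specific; a skeleton runtime would set them automatically rather than by hand as done here):

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        static void *work(void *arg)
        {
            printf("worker %ld on cpu %d\n", (long)arg, sched_getcpu());
            return NULL;
        }

        int main(int argc, char **argv)
        {
            /* Knob (i): thread count, here chosen on the command line;
             * an auto-tuner would pick it per program and per workload. */
            int nthreads = argc > 1 ? atoi(argv[1]) : 4;
            long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
            pthread_t *tids = malloc(sizeof(pthread_t) * nthreads);

            for (long i = 0; i < nthreads; i++) {
                /* Knob (ii): core placement; pin worker i to a core. */
                pthread_attr_t attr;
                cpu_set_t set;
                pthread_attr_init(&attr);
                CPU_ZERO(&set);
                CPU_SET((int)(i % ncpus), &set);
                pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
                pthread_create(&tids[i], &attr, work, (void *)i);
                pthread_attr_destroy(&attr);
            }
            for (int i = 0; i < nthreads; i++)
                pthread_join(tids[i], NULL);
            free(tids);
            return 0;
        }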

    Effective fault localization techniques for concurrent software

    Multicore and Internet cloud systems have been widely adopted in recent years and have resulted in the increased development of concurrent programs. However, concurrency bugs are still difficult to test and debug for at least two reasons: concurrent programs have a large interleaving space, and concurrency bugs involve complex interactions among multiple threads. Existing testing solutions for concurrency bugs have focused on exposing concurrency bugs in the large interleaving space, but they often do not provide debugging information for developers to understand the bugs. To address the problem, this thesis proposes techniques that help developers in debugging concurrency bugs, particularly for locating the root causes and for understanding them, and presents a set of empirical user studies that evaluates the techniques. First, this thesis introduces a dynamic fault-localization technique, called Falcon, that locates single-variable concurrency bugs as memory-access patterns. Falcon uses dynamic pattern detection and statistical fault localization to report a ranked list of memory-access patterns for root causes of concurrency bugs. The overall Falcon approach is effective: in an empirical evaluation, we show that Falcon almost always ranks the program fragments corresponding to the root cause of a concurrency bug as "most suspicious". In principle, such a ranking can save a developer's time by allowing him or her to quickly home in on the problematic code, rather than having to sort through many reports. Others have shown that single- and multi-variable bugs cover a high fraction of all concurrency bugs that have been documented in a variety of major open-source packages; thus, being able to detect both is important. Because Falcon is limited to detecting single-variable bugs, we extend the Falcon technique to handle both single-variable and multi-variable bugs, using a unified technique, called Unicorn. Unicorn uses online memory monitoring and offline memory-pattern combination to handle multi-variable concurrency bugs. The overall Unicorn approach is effective in ranking memory-access patterns for single- and multi-variable concurrency bugs. To further assist developers in understanding concurrency bugs, this thesis presents a fault-explanation technique, called Griffin, that provides more context of the root cause than Unicorn. Griffin reconstructs the root cause of concurrency bugs by grouping suspicious memory accesses, finding suspicious method locations, and presenting calling stacks along with the buggy interleavings. By providing this additional context, the overall Griffin approach can give the developer more information at a higher level, allowing him or her to more readily diagnose complex bugs that may cross file or module boundaries. Finally, this thesis presents a set of empirical user studies that investigates the effectiveness of the presented techniques. In particular, the studies compare the effectiveness of a state-of-the-art debugging technique with that of our debugging techniques, Unicorn and Griffin. Among our findings, the user study shows that while the techniques are indistinguishable when the fault is relatively simple, Griffin is most effective for more complex faults. This observation further suggests that there may be a need for a spectrum of tools or interfaces that depend on the complexity of the underlying fault or even the background of the user.
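
    The statistical ranking step can be sketched as follows (illustrative; Jaccard is one common suspiciousness metric from the statistical fault-localization literature, and the formula Falcon actually uses may differ): each observed access pattern is scored by how strongly its occurrence correlates with failing runs:

        #include <stdio.h>

        struct pattern {
            const char *name;  /* e.g. an unserializable access pattern */
            int failed;        /* failing runs exhibiting the pattern   */
            int passed;        /* passing runs exhibiting the pattern   */
        };

        /* Jaccard suspiciousness: failed(p) / (total_failed + passed(p)). */
        static double suspiciousness(struct pattern p, int total_failed)
        {
            return (double)p.failed / (total_failed + p.passed);
        }

        int main(void)
        {
            struct pattern pats[] = {           /* made-up observations */
                { "W1-W2-R1 on balance", 9, 1 },
                { "R1-W2-R1 on head",    4, 6 },
                { "W1-R2 on count",      1, 9 },
            };
            int total_failed = 10;

            /* Score each pattern; a real tool would sort and present a
             * ranked list to the developer. */
            for (int i = 0; i < 3; i++)
                printf("%-22s suspiciousness = %.2f\n",
                       pats[i].name, suspiciousness(pats[i], total_failed));
            return 0;
        }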