9 research outputs found

    Groundwork for the Development of Testing Plans for Concurrent Software

    While multi-threading has become commonplace in many application domains (e.g., embedded systems, digital signal processing (DSP), networks, IP services, and graphics), multi-threaded code often requires complex coordination of threads. As a result, multi-threaded implementations are prone to subtle bugs that are difficult and time-consuming to locate. Moreover, current testing techniques that address multi-threading are generally costly, and their effectiveness is unknown. The development of cost-effective testing plans requires an in-depth study of the nature, frequency, and cost of concurrency errors in the context of real-world applications. The full paper will lay the groundwork for such a study, with the purpose of informing the creation of a parametric cost model for testing multi-threaded software. The current version of the paper provides motivation for the study, an outline of the full paper, and a bibliography of related papers.

    Answer-set programming as a new approach to event-sequence testing

    In many applications, faults are triggered by events that occur in a particular order. Based on the assumption that most bugs are caused by the interaction of a small number of events, Kuhn et al. recently introduced sequence covering arrays (SCAs) as suitable designs for event-sequence testing. In practice, directly applying SCAs for testing is often impaired by additional constraints, and SCAs have to be adapted to fit application-specific needs. Modifying precomputed SCAs to account for problem variations can be problematic, if not impossible, and developing dedicated algorithms is costly. In this paper, we propose answer-set programming (ASP), a well-known knowledge-representation formalism from the area of artificial intelligence based on logic programming, as a declarative paradigm for computing SCAs. Our approach allows complex coverage criteria to be stated concisely and in an elaboration-tolerant way, i.e., small variations of a problem specification require only small modifications of the ASP representation.
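
    The paper's contribution is the ASP encoding itself; as a rough, language-agnostic illustration of the property a strength-t sequence covering array must satisfy, the Java sketch below checks whether a candidate set of test sequences contains every ordered t-tuple of distinct events as a subsequence. The class name, the events, and the example sequences are hypothetical and not taken from the paper.

```java
import java.util.*;

// Illustrative checker (not the paper's ASP encoding): a strength-t sequence
// covering array must contain every permutation of every t distinct events as
// a (not necessarily contiguous) subsequence of at least one test sequence.
public class ScaCheck {

    // True if 'tuple' occurs as a subsequence of 'seq'.
    static boolean isSubsequence(List<String> tuple, List<String> seq) {
        int i = 0;
        for (String e : seq) {
            if (i < tuple.size() && e.equals(tuple.get(i))) i++;
        }
        return i == tuple.size();
    }

    // Enumerate all ordered t-tuples of distinct events and check coverage.
    static boolean covers(List<List<String>> tests, List<String> events, int t) {
        List<List<String>> tuples = new ArrayList<>();
        permute(events, new ArrayList<>(), t, tuples);
        for (List<String> tuple : tuples) {
            boolean hit = tests.stream().anyMatch(s -> isSubsequence(tuple, s));
            if (!hit) {
                System.out.println("uncovered order: " + tuple);
                return false;
            }
        }
        return true;
    }

    static void permute(List<String> events, List<String> prefix, int t,
                        List<List<String>> out) {
        if (prefix.size() == t) { out.add(new ArrayList<>(prefix)); return; }
        for (String e : events) {
            if (!prefix.contains(e)) {
                prefix.add(e);
                permute(events, prefix, t, out);
                prefix.remove(prefix.size() - 1);
            }
        }
    }

    public static void main(String[] args) {
        List<String> events = List.of("a", "b", "c", "d");
        // A permutation and its reverse cover every ordered pair of events,
        // i.e. they already form a strength-2 sequence covering array.
        List<List<String>> tests = List.of(
            List.of("a", "b", "c", "d"),
            List.of("d", "c", "b", "a"));
        System.out.println("2-way coverage: " + covers(tests, events, 2)); // true
    }
}
```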

    Context-Aware Debugging for Concurrent Programs

    Concurrency faults are difficult to reproduce and localize because they usually occur only under specific inputs and thread interleavings. Most existing fault localization techniques focus on sequential programs and fail to identify faulty memory-access patterns across threads, which are usually the root causes of concurrency faults. Moreover, existing techniques for sequential programs cannot be adapted to identify faulty paths in concurrent programs. Concurrency fault localization techniques that analyze passing and failing executions obtained from running a set of test cases to identify faulty access patterns have been proposed, but they rely primarily on statistical analysis. We present Cadeco, a novel approach to fault localization that uses feature-selection techniques from machine learning. Our insight is that the concurrency access patterns obtained from a large volume of coverage data generally constitute high-dimensional data sets, yet existing statistical analysis techniques for fault localization are usually applied to low-dimensional data sets. Each additional failing or passing run provides more diverse information, which can help localize faulty concurrency access patterns in code. The patterns with maximum feature-diversity information point to the most suspicious patterns. We then apply data mining techniques to identify the interleaving patterns that occur most frequently and to report possible faulty paths. We also evaluate the effectiveness of fault localization using test suites generated from different test adequacy criteria. We have evaluated Cadeco on 10 real-world multi-threaded Java applications. Results indicate that Cadeco outperforms state-of-the-art approaches for localizing concurrency faults.
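
    As a simplified illustration of treating access patterns as features and ranking them by how well they separate failing from passing runs, the sketch below scores hypothetical inter-thread access patterns with an information-gain measure over pass/fail counts. The pattern names, the counts, and the choice of score are assumptions for illustration only, not Cadeco's actual feature-selection procedure.

```java
import java.util.*;

// Simplified sketch: rank inter-thread access patterns by how well they
// separate failing from passing runs, in the spirit of feature selection.
// Names, data, and the information-gain score are illustrative only.
public class PatternRanker {

    static double log2(double x) { return Math.log(x) / Math.log(2); }

    static double entropy(double p) {
        if (p <= 0.0 || p >= 1.0) return 0.0;
        return -p * log2(p) - (1 - p) * log2(1 - p);
    }

    // Information gain of a binary feature (pattern observed / not observed)
    // with respect to the pass/fail label.
    static double infoGain(int failWith, int passWith, int failWithout, int passWithout) {
        double total = failWith + passWith + failWithout + passWithout;
        double with = failWith + passWith, without = failWithout + passWithout;
        double base = entropy((failWith + failWithout) / total);
        double cond = (with / total) * entropy(with == 0 ? 0 : failWith / with)
                    + (without / total) * entropy(without == 0 ? 0 : failWithout / without);
        return base - cond;
    }

    public static void main(String[] args) {
        // Hypothetical counts per pattern: failWith, passWith, failWithout, passWithout.
        Map<String, int[]> counts = Map.of(
            "W1(x)-R2(x)-W1(x)", new int[]{ 9, 1, 1, 19 },   // strongly failure-correlated
            "R1(y)-R2(y)",       new int[]{ 5, 10, 5, 10 },  // uninformative
            "W2(z)-W1(z)",       new int[]{ 3, 2, 7, 18 });

        counts.entrySet().stream()
              .sorted((a, b) -> Double.compare(
                  infoGain(b.getValue()[0], b.getValue()[1], b.getValue()[2], b.getValue()[3]),
                  infoGain(a.getValue()[0], a.getValue()[1], a.getValue()[2], a.getValue()[3])))
              .forEach(e -> System.out.printf("%-20s gain=%.3f%n", e.getKey(),
                  infoGain(e.getValue()[0], e.getValue()[1], e.getValue()[2], e.getValue()[3])));
    }
}
```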

    Abstraction and mining of traces to explain concurrency bugs


    Generating Unit Tests for Concurrent Classes

    As computers become more and more powerful, programs are increasingly split up into multiple threads to leverage the power of multi-core CPUs. However, writing correct multi-threaded code is a hard problem, as the programmer has to ensure that all access to shared data is coordinated. Existing automated testing tools for multi-threaded code mainly focus on re-executing existing test cases with different schedules. In this paper, we introduce a novel coverage criterion that enforces concurrent execution of combinations of shared memory access points with different schedules, and present an approach that automatically generates test cases for this coverage criterion. Our CONSUITE prototype demonstrates that this approach can reliably reproduce known concurrency errors, and evaluation on nine complex open source classes revealed three previously unknown data races. Keywords: concurrency coverage; search-based software engineering; unit testing.
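
    The kind of test such a generator aims to produce can be sketched by hand: two threads are released simultaneously toward a pair of shared-memory access points so that an unsynchronized read-modify-write can surface as a lost update. The BoundedCounter class and the driver below are hypothetical illustrations of the idea, not CONSUITE output.

```java
import java.util.concurrent.CountDownLatch;

// Hand-written illustration of a concurrent unit test that drives two threads
// toward a specific pair of shared-memory access points at the same time.
public class ConcurrentPairTest {

    // Class under test with an unsynchronized read-modify-write: a data race.
    static class BoundedCounter {
        private int value = 0;
        void increment() { value = value + 1; }   // shared access points: read + write
        int get() { return value; }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int trial = 0; trial < 1000; trial++) {
            BoundedCounter counter = new BoundedCounter();
            CountDownLatch start = new CountDownLatch(1);  // align the two threads

            Runnable task = () -> {
                try { start.await(); } catch (InterruptedException ignored) { }
                counter.increment();
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start(); t2.start();
            start.countDown();            // release both threads at once
            t1.join(); t2.join();

            if (counter.get() != 2) {     // lost update: both reads saw the same value
                System.out.println("race exposed on trial " + trial + ": value=" + counter.get());
                return;
            }
        }
        System.out.println("race not observed in 1000 trials (interleaving-dependent)");
    }
}
```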

    Debugging and Testing of Concurrent Programs through Efficient Test-Case Generation (効率良いテストケース生成による並行処理プログラムのデバッグとテスト)

    Debugging multi-threaded concurrent programs is more difficult than debugging sequential programs because errors are not always reproducible. Re-executing or instrumenting a concurrent program for tracing might change the execution timing and cause the program to take a different execution path; in other words, the exact timing that caused the error is unknown. To reproduce the error, one needs to execute the concurrent program with the same input values many times as test cases while changing the interleavings, but it is not always feasible to test them all. This dissertation proposes a debugging/testing system that generates all possible executions as test cases based on the limited information obtained from an execution trace, and then detects potential race conditions caused by different schedules and interrupt timings in a concurrent multi-threaded program. There are a number of studies on test-case reduction using partial order reduction, but redundancies remain for the purpose of checking race conditions. The objective is to efficiently reproduce concurrency errors, specifically race conditions, by proposing three methods.

    The first is to reduce the number of interleavings to be tested. This is achieved by reducing redundant test cases and eliminating infeasible ones. The originality of the proposed method is to exploit the nature of branch coverage and utilize data flows from the trace information to identify only those interleavings that affect branch outcomes, whereas existing methods try to identify all the interleavings that may affect shared variables. Since execution paths with the same branch outcomes have equivalent sequences of lock/unlock and read/write operations on shared variables, they can be grouped together in the same “race-equivalent” group (see the sketch following this abstract); to reproduce race conditions, it is sufficient to check only one member of each group. In this way, the proposed method significantly reduces the number of interleavings for testing while remaining capable of detecting the same race conditions. Furthermore, the proposed method extends the existing model of execution traces to identify and avoid generating infeasible interleavings due to dependencies caused by lock/unlock and wait/notify mechanisms. Experimental results suggest that redundant interleavings can be identified and removed, leading to a significant reduction in test cases. We evaluated the proposed method against several concurrent Java programs. For the open source program Apache Commons Pool, the number of test cases is reduced from 23, based on the existing Thread-Pair-Interleaving (TPAIR) method, to only 2 by the proposed method. Moreover, for concurrent programs that contain infinite loops, the proposed method generates only a finite and very small number of test cases, while many existing methods generate an infinite number of test cases.

    The second is to reduce the memory space required for generating test cases. Redundant test cases were still generated by the existing reachability testing method even though there was no need to execute them. We propose a new method that analyzes data dependency to generate only those test cases that might affect the sequences of lock/unlock and read/write operations on shared variables. The experimental results for Apache Commons Pool show that the size of the graph for creating the test cases is reduced from 990 nodes, based on the reachability testing method used in our previous work, to only 4 nodes by the new method.

    The third is to reduce the effort involved in checking race conditions by utilizing previous test results. Existing work requires checking race conditions over the whole execution trace for every new test case. The proposed method identifies only those parts of the execution trace in which the sequence of lock/unlock and read/write operations on shared variables might be affected by a new test case, so race conditions need to be rechecked only for those parts.

    Together, these improvements significantly reduce the effort of exhaustively checking all possible interleavings. The proposed methods provide programmers with information on whether there exist program errors caused by interleavings, the interleaving (path) on which the errors occurred, and the accesses to shared variables with inconsistent locking.
    電気通信大学 (The University of Electro-Communications), 201
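
    As a minimal illustration of the race-equivalence idea described above, the sketch below projects each interleaving onto its lock/unlock and shared read/write events and groups interleavings with identical projections, so only one representative per group needs to be re-examined. The event encoding and the example interleavings are hypothetical, not the dissertation's actual trace model.

```java
import java.util.*;
import java.util.stream.Collectors;

// Minimal sketch of "race-equivalent" grouping: interleavings whose projections
// onto lock/unlock and shared read/write events coincide go into one group, and
// only one representative per group is re-executed when checking for races.
public class RaceEquivalentGroups {

    // Keep only events that can affect race conditions (drop thread-local work).
    static List<String> raceRelevantProjection(List<String> interleaving) {
        return interleaving.stream()
                .filter(e -> {
                    String op = e.substring(e.indexOf(':') + 1);
                    return op.startsWith("lock") || op.startsWith("unlock")
                        || op.startsWith("read") || op.startsWith("write");
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical interleavings; the first two differ only in thread-local work.
        List<List<String>> interleavings = List.of(
            List.of("T1:lock(m)", "T1:read(x)", "T2:local(i)", "T1:unlock(m)"),
            List.of("T2:local(i)", "T1:lock(m)", "T1:read(x)", "T1:unlock(m)"),
            List.of("T2:write(x)", "T1:lock(m)", "T1:read(x)", "T1:unlock(m)"));

        Map<List<String>, List<List<String>>> groups = interleavings.stream()
                .collect(Collectors.groupingBy(RaceEquivalentGroups::raceRelevantProjection));

        // Testing one representative per group is enough to detect the same races.
        groups.forEach((key, members) ->
            System.out.println("group " + key + " -> test " + members.get(0)));
    }
}
```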

    Finding and Tolerating Concurrency Bugs.

    Shared-memory multi-threaded programming is inherently more difficult than single-threaded programming. The main source of complexity is that the threads of an application can interleave in so many different ways. To ensure correctness, a programmer has to test all possible thread interleavings, which, however, is impractical. Many rare thread interleavings remain untested in production systems, and they are the major cause of concurrency bugs. Given this, the dissertation explores two possible ways to tackle concurrency bugs. One is to expose untested interleavings during testing to find concurrency bugs. The other is to avoid untested interleavings during production runs to tolerate concurrency bugs. The key is an efficient and effective way to encode and remember tested interleavings. This dissertation first discusses two hypotheses about concurrency bugs: the small scope hypothesis and the value independent hypothesis. Based on these two hypotheses, it defines a set of interleaving patterns, called interleaving idioms, which are used to encode tested interleavings. The empirical analysis shows that the idiom-based interleaving encoding scheme is able to represent most of the concurrency bugs used in the study. Then, the dissertation discusses an open source testing tool called Maple. It memoizes tested interleavings and actively seeks to expose untested interleavings (a simplified sketch of such memoization follows this abstract). The results show that Maple is able to expose concurrency bugs and expose interleavings faster than conventional testing techniques. Finally, the dissertation discusses two parallel runtime system designs which seek to avoid untested interleavings during production runs to tolerate concurrency bugs. Avoiding untested interleavings significantly improves correctness because most concurrency bugs are caused by untested interleavings. Also, the performance overhead of disallowing untested interleavings is low, as commonly occurring interleavings should already have been exercised in a well-tested program.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99765/1/jieyu_1.pd
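
    As a rough sketch of memoizing tested interleavings, the example below records idiom instances as ordered pairs of static memory-access sites executed by different threads and flags orderings that have not yet been exercised. The representation and the site names are simplified assumptions, not Maple's actual idiom encoding.

```java
import java.util.*;

// Simplified sketch: memoize tested interleavings as idiom instances, here an
// ordered pair of static memory-access sites executed by different threads.
public class InterleavingMemo {

    // An ordered pair of static access sites (e.g., "Account.java:42:W").
    record IdiomInstance(String firstSite, String secondSite) { }

    private final Set<IdiomInstance> tested = new HashSet<>();

    // Record an interleaving observed during testing.
    void memoize(String firstSite, String secondSite) {
        tested.add(new IdiomInstance(firstSite, secondSite));
    }

    // During testing: target untested orders; in production: steer away from them.
    boolean isUntested(String firstSite, String secondSite) {
        return !tested.contains(new IdiomInstance(firstSite, secondSite));
    }

    public static void main(String[] args) {
        InterleavingMemo memo = new InterleavingMemo();
        memo.memoize("Account.java:42:W", "Account.java:57:R");   // observed in a test run

        // The reverse order has not been exercised: a tester would try to force it,
        // while a tolerant runtime would avoid scheduling it.
        System.out.println(memo.isUntested("Account.java:57:R", "Account.java:42:W")); // true
    }
}
```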