4,762 research outputs found

    Cause Clue Clauses: Error Localization using Maximum Satisfiability

    Much effort is spent every day by programmers trying to reduce long, failing execution traces to the cause of the error. We present a new algorithm for error cause localization based on a reduction to the maximum satisfiability problem (MAX-SAT), which asks for the maximum number of clauses of a Boolean formula that can be simultaneously satisfied by an assignment. At an intuitive level, our algorithm takes as input a program and a failing test, and comprises the following three steps. First, using symbolic execution, we encode a trace of a program as a Boolean trace formula which is satisfiable iff the trace is feasible. Second, for a failing program execution (e.g., one that violates an assertion or a post-condition), we construct an unsatisfiable formula by taking the trace formula and additionally asserting that the input is the failing test and that the assertion condition does hold at the end. Third, using MAX-SAT, we find a maximal set of clauses in this formula that can be satisfied together, and output the complement set as a potential cause of the error. We have implemented our algorithm in a tool called bug-assist for C programs. We demonstrate the surprising effectiveness of the tool on a set of benchmark examples with injected faults, and show that in most cases, bug-assist can quickly and precisely isolate the exact few lines of code whose change eliminates the error. We also demonstrate how our algorithm can be modified to automatically suggest fixes for common classes of errors such as off-by-one errors.
    Comment: The pre-alpha version of the tool can be downloaded from http://bugassist.mpi-sws.or
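
    A minimal sketch of the localization step in Python, under simplifying assumptions: per-statement trace constraints stand in for the clauses of the Boolean trace formula, and a brute-force search over a tiny integer domain plays the role of a partial MAX-SAT solver. The two-statement example program, the variable names, and the domain are all hypothetical; bug-assist itself encodes real C traces via symbolic execution and calls an actual MAX-SAT solver.

```python
# Toy illustration of the MAX-SAT localization idea from the abstract:
# hard constraints pin the failing input and the expected post-condition,
# soft constraints encode each statement of the trace, and the complement
# of a maximum satisfiable set of soft constraints is reported as suspect.
from itertools import combinations, product

# Trace of a hypothetical buggy program (intended spec: y == x + 2, z == y * 2):
#   1: y = x + 1        <-- fault: should be x + 2
#   2: z = y * 2
SOFT = {
    1: lambda a: a["y"] == a["x"] + 1,   # statement 1 as a trace constraint
    2: lambda a: a["z"] == a["y"] * 2,   # statement 2 as a trace constraint
}

HARD = [
    lambda a: a["x"] == 0,               # the failing test input
    lambda a: a["z"] == 4,               # assert the expected output DOES hold
]

DOMAIN = range(0, 6)                     # tiny value domain for the sketch
VARS = ("x", "y", "z")

def satisfiable(constraints):
    """Brute-force check: is there an assignment satisfying all constraints?"""
    for values in product(DOMAIN, repeat=len(VARS)):
        a = dict(zip(VARS, values))
        if all(c(a) for c in constraints):
            return True
    return False

def suspects():
    """Drop as few soft constraints as possible; the dropped ones are suspects."""
    stmts = sorted(SOFT)
    for k in range(len(stmts) + 1):              # try smallest complements first
        for dropped in combinations(stmts, k):
            kept = [SOFT[s] for s in stmts if s not in dropped]
            if satisfiable(HARD + kept):
                return dropped
    return tuple(stmts)

print("suspect statement(s):", suspects())       # -> (1,) in this toy example
```

    On this toy trace the hard constraints (failing input x = 0 and asserted output z = 4) conflict with statement 1, so the smallest complement of a satisfiable set is {1}, pointing at exactly the faulty line.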

    Fault Localization in Multi-Threaded C Programs using Bounded Model Checking (extended version)

    Software debugging is a very time-consuming process, and it is even harder for multi-threaded programs because of the non-deterministic behavior of thread-scheduling algorithms. Debugging time can, however, be greatly reduced if automatic fault-localization methods are used. In this study, a new method for fault localization in multi-threaded C programs is proposed. It transforms a multi-threaded program into a corresponding sequential one and then applies a fault-diagnosis method suited to sequential programs in order to localize faults. The code transformation is implemented with transformation rules and with context-switch information from counterexamples, which are typically generated by bounded model checkers. Experimental results show that the proposed method is effective and that sequential fault-localization methods can thereby be extended to multi-threaded programs.
    Comment: extended version of paper published at SBESC'1
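
    A minimal sketch, in Python, of the sequentialization idea: given per-thread statement lists and the context-switch order recovered from a counterexample, the threads are flattened into one sequential statement list that a sequential fault localizer could then analyze. The thread bodies and the schedule below are invented for illustration; the actual method applies its transformation rules to multi-threaded C code and reads the schedule from bounded-model-checking counterexamples.

```python
# Flatten two hypothetical threads into a sequential statement list, following
# a context-switch schedule assumed to come from a model-checker counterexample.
THREADS = {
    "t0": ["lock(m)", "counter = counter + 1", "unlock(m)"],
    "t1": ["counter = counter * 2", "assert(counter >= 1)"],
}

# Context-switch information: run 2 statements of t0, then all of t1, then the
# rest of t0 (a purely hypothetical interleaving).
SCHEDULE = [("t0", 2), ("t1", 2), ("t0", 1)]

def sequentialize(threads, schedule):
    """Flatten the threads into one sequential statement list per the schedule."""
    cursors = {tid: 0 for tid in threads}
    sequential = []
    for tid, count in schedule:
        start = cursors[tid]
        chunk = threads[tid][start:start + count]
        sequential.extend(f"{tid}: {stmt}" for stmt in chunk)
        cursors[tid] = start + count
    return sequential

for line in sequentialize(THREADS, SCHEDULE):
    print(line)
```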

    Semi-automatic fault localization

    One of the most expensive and time-consuming components of the debugging process is locating the errors or faults. To locate faults, developers must identify statements involved in failures and select suspicious statements that might contain faults. In practice, this localization is done by developers in a tedious and manual way, using only a single execution, targeting only one fault, and having a limited perspective into a large search space. The thesis of this research is that fault localization can be partially automated with the use of commonly available dynamic information gathered from test-case executions in a way that is effective, efficient, tolerant of test cases that pass but also execute the fault, and scalable to large programs that potentially contain multiple faults. The overall goal of this research is to develop effective and efficient fault-localization techniques that scale to programs of large size and with multiple faults. There are three principal steps performed to reach this goal: (1) develop practical techniques for locating suspicious regions in a program; (2) develop techniques to partition test suites into smaller, specialized test suites to target specific faults; and (3) evaluate the usefulness and cost of these techniques. In this dissertation, the difficulties and limitations of previous work in the area of fault localization are explored. A technique, called Tarantula, is presented that addresses these difficulties. Empirical evaluation of the Tarantula technique shows that it is efficient and effective for many faults. The evaluation also demonstrates that the Tarantula technique can lose effectiveness as the number of faults increases. To address the loss of effectiveness for programs with multiple faults, supporting techniques have been developed and are presented. The empirical evaluation of these supporting techniques demonstrates that they can enable effective fault localization in the presence of multiple faults. A new mode of debugging, called parallel debugging, is developed, and empirical evidence demonstrates that it can provide savings in terms of both total expense and time to delivery. A prototype visualization is provided to display the fault-localization results as well as to provide a method to interact with and explore those results. Finally, a study on the effects of the composition of test suites on fault localization is presented.
    Ph.D. Committee Chair: Harrold, Mary Jean; Committee Member: Orso, Alessandro; Committee Member: Pande, Santosh; Committee Member: Reiss, Steven; Committee Member: Rugaber, Spence
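
    The dissertation's core ranking technique, Tarantula, scores each statement by contrasting how often it is covered by failing versus passing test cases. A small sketch of the standard Tarantula suspiciousness formula from the literature, suspiciousness(s) = (failed(s)/totalfailed) / (passed(s)/totalpassed + failed(s)/totalfailed), follows; the coverage matrix and test outcomes are invented for illustration, not taken from the dissertation's subject programs.

```python
# Sketch of Tarantula suspiciousness ranking computed from per-test statement
# coverage and pass/fail outcomes.
def tarantula(coverage, outcomes):
    """coverage: {stmt: set of covering test ids}; outcomes: {test id: True if passed}."""
    passed_tests = {t for t, ok in outcomes.items() if ok}
    failed_tests = set(outcomes) - passed_tests
    scores = {}
    for stmt, tests in coverage.items():
        p = len(tests & passed_tests) / max(len(passed_tests), 1)   # passed(s)/totalpassed
        f = len(tests & failed_tests) / max(len(failed_tests), 1)   # failed(s)/totalfailed
        scores[stmt] = 0.0 if p + f == 0 else f / (p + f)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical coverage of three statements by four tests (t4 is the failing one).
coverage = {"s1": {"t1", "t2", "t3", "t4"}, "s2": {"t2", "t4"}, "s3": {"t1", "t3"}}
outcomes = {"t1": True, "t2": True, "t3": True, "t4": False}

for stmt, score in tarantula(coverage, outcomes):
    print(f"{stmt}: suspiciousness = {score:.2f}")
```

    With this data, s2 ranks highest (0.75) because it is covered by the failing test but by only one passing test, while s3, never covered by a failing test, scores 0.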

    Rightsizing LISA

    The LISA science requirements and conceptual design have been fairly stable for over a decade. In the interest of reducing costs, the LISA Project at NASA has looked for simplifications of the architecture, at downsizing of subsystems, and at descopes of the entire mission. This is a natural activity of the formulation phase, and one that is particularly timely in the current NASA budgetary context. There is, and will continue to be, enormous pressure for cost reduction from both ESA and NASA, from reviewers, and from the broader research community. Here, the rationale for the baseline architecture is reviewed, and recent efforts to find simplifications and other reductions that might lead to savings are reported. A few possible simplifications have been found in the LISA baseline architecture. In the interest of exploring cost sensitivity, one moderate and one aggressive descope have been evaluated; the cost savings are modest and the loss of science is not.
    Comment: To be published in Classical and Quantum Gravity; Proceedings of the Seventh International LISA Symposium, Barcelona, Spain, 16-20 Jun. 2008; 10 pages, 1 figure, 3 table

    On practical adequate test suites for integrated test case prioritization and fault localization

    An effective integration between testing and debugging should address how well testing and fault localization can work together productively. In this paper, we report an empirical study on the effectiveness of using adequate test suites for fault localization. We also investigate the integration of test case prioritization and statistical fault localization with a postmortem analysis approach. Our results on 16 test case prioritization techniques and four statistical fault localization techniques show that, although much advancement has been made in the last decade, test adequacy criteria are still insufficient in supporting effective fault localization. We also find that branch-adequate test suites are more likely than statement-adequate test suites to support statistical fault localization effectively.
    © 2011 IEEE. The 11th International Conference on Quality Software (QSIC 2011), Madrid, Spain, 13-14 July 2011. In International Conference on Quality Software Proceedings, 2011, p. 21-3
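
    A brief sketch, under assumed data, of the kind of integration studied here: a greedy "additional branch coverage" prioritization (a standard representative of coverage-based prioritization techniques) orders the test suite, and in a postmortem analysis the prioritized prefix that first exposes a failure would then be handed to a statistical fault localizer such as the Tarantula-style scoring sketched earlier. The branch-coverage sets and test names below are hypothetical.

```python
# Greedy "additional coverage" prioritization: repeatedly pick the test that
# covers the most not-yet-covered branches, resetting once everything is covered.
def additional_coverage_order(branch_cov):
    """branch_cov: {test id: set of covered branches}. Returns a prioritized order."""
    remaining = dict(branch_cov)
    uncovered = set().union(*branch_cov.values())
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
        if not uncovered:                       # reset when everything is covered
            uncovered = set().union(*branch_cov.values())
    return order

branch_cov = {
    "t1": {"b1", "b2"},
    "t2": {"b2", "b3", "b4"},
    "t3": {"b1"},
    "t4": {"b4", "b5"},
}
print(additional_coverage_order(branch_cov))    # -> ['t2', 't1', 't4', 't3']
```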