
    Locating Faults with Program Slicing: An Empirical Analysis

    Statistical fault localization is an easily deployed technique for quickly determining candidate locations for faulty code. If a human programmer has to search for the fault beyond the top candidate locations, though, more traditional techniques of following dependencies along dynamic slices may be better suited. In a large study of 457 bugs (369 single faults and 88 multiple faults) in 46 open-source C programs, we compare the effectiveness of statistical fault localization against dynamic slicing. For single faults, we find that dynamic slicing was eight percentage points more effective than the best-performing statistical debugging formula; for 66% of the bugs, dynamic slicing finds the fault earlier than the best-performing statistical debugging formula. In our evaluation, dynamic slicing is more effective for programs with a single fault, but statistical debugging performs better on multiple faults. Best results, however, are obtained by a hybrid approach: if programmers first examine at most the top five most suspicious locations from statistical debugging and then switch to dynamic slices, they will need to examine, on average, 15% (30 lines) of the code. These findings hold for the 18 most effective statistical debugging formulas, and our results are independent of the number of faults (i.e., single or multiple) and error type (i.e., artificial or real errors).
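    The hybrid strategy described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name and inputs (a ranked list of suspicious lines and a dynamic slice, both as line numbers) are assumptions:

```python
def hybrid_inspection_order(ranked_lines, dynamic_slice, k=5):
    """Order in which a programmer would examine code lines under the
    hybrid approach: first the top-k candidates from statistical fault
    localization, then the remaining lines of the dynamic slice."""
    order = []
    seen = set()
    # Phase 1: top-k most suspicious locations from statistical debugging.
    for line in ranked_lines[:k]:
        if line not in seen:
            seen.add(line)
            order.append(line)
    # Phase 2: fall back to following dependencies along the dynamic slice.
    for line in dynamic_slice:
        if line not in seen:
            seen.add(line)
            order.append(line)
    return order
```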

    Order and disorder in everyday action: the roles of contention scheduling and supervisory attention

    This paper describes the contention scheduling/supervisory attentional system approach to action selection and uses this account to structure a survey of current theories of the control of action. The focus is on how such theories account for the types of error produced by some patients with frontal and/or left temporoparietal damage when attempting everyday tasks. Four issues, concerning both the theories and their accounts of everyday action breakdown, emerge: first, whether multiple control systems, each capable of controlling action in different situations, exist; second, whether different forms of damage at the neural level result in conceptually distinct disorders; third, whether semantic/conceptual knowledge of objects and actions can be dissociated from control mechanisms, and if so, what computational principles govern sequential control; and fourth, whether disorders of everyday action should be attributed to a loss of semantic/conceptual knowledge, a malfunction of control, or some combination of the two.

    Spectrum-Based Fault Localization in Model Transformations

    Model transformations play a cornerstone role in Model-Driven Engineering (MDE), as they provide the essential mechanisms for manipulating and transforming models. The correctness of software built using MDE techniques relies greatly on the correctness of model transformations. However, debugging them is challenging and error-prone, and the situation becomes more critical as the size and complexity of model transformations grow, to the point where manual debugging is no longer possible. Spectrum-Based Fault Localization (SBFL) uses the results of test cases and their corresponding code coverage information to estimate the likelihood of each program component (e.g., statement) being faulty. In this article, we present an approach that applies SBFL to locate faulty rules in model transformations. We evaluate the feasibility and accuracy of the approach by comparing the effectiveness of 18 different state-of-the-art SBFL techniques at locating faults in model transformations. Evaluation results revealed that the best techniques, namely Kulcynski2, Mountford, Ochiai, and Zoltar, lead the debugger to inspect a maximum of three rules to locate the bug in around 74% of the cases. Furthermore, we compare our approach with a static approach for fault localization in model transformations, observing a clear superiority of the proposed SBFL-based method.
    Comisión Interministerial de Ciencia y Tecnología TIN2015-70560-R; Junta de Andalucía P12-TIC-186
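    As a rough illustration of how an SBFL formula ranks components, the Ochiai metric named in this abstract can be computed from a coverage spectrum. The rule names and spectrum values below are hypothetical; only the Ochiai formula itself comes from the SBFL literature:

```python
import math

def ochiai(ef, ep, nf):
    """Ochiai suspiciousness of a component (e.g., a transformation rule):
    ef = failing tests covering it, ep = passing tests covering it,
    nf = failing tests not covering it."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Hypothetical spectrum: (ef, ep, nf) per rule.
spectrum = {"Rule_A": (2, 0, 0), "Rule_B": (1, 3, 1), "Rule_C": (0, 4, 2)}
# Rank rules from most to least suspicious; the debugger inspects them in
# this order until the faulty rule is found.
ranking = sorted(spectrum, key=lambda r: ochiai(*spectrum[r]), reverse=True)
```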

    An Exploratory Study of Field Failures

    Field failures, that is, failures caused by faults that escape the testing phase and lead to failures in the field, are unavoidable. Improving verification and validation activities before deployment can identify and remove many, but not all, faults in a timely manner, and users may still experience a number of annoying problems while using their software systems. This paper investigates the nature of field failures, to understand to what extent further improvements to in-house verification and validation activities can reduce the number of failures in the field, and frames the need for new approaches that operate in the field. We report the results of an analysis of the bug reports of five applications belonging to three different ecosystems, propose a taxonomy of field failures, and discuss the reasons why failures belonging to the identified classes cannot be detected at design time but must be addressed at runtime. We observe that many faults (70%) are intrinsically hard to detect at design time.

    Human Error Management Paying Emphasis on Decision Making and Social Intelligence -Beyond the Framework of Man-Machine Interface Design-

    This paper reviews how latent errors or violations induce serious accidents and proposes a countermeasure framed in terms of decision making and the emotional intelligence (EI) and social intelligence (SI) of an organization and its members. It is clarified that EI and SI play an important role in decision making. Violations occur frequently all over the world, even though we clearly understand that we should not commit them, and a key to preventing them may lie in enhancing both social intelligence and reliability. Constructing a social structure or system that supports organizational efforts to enhance both would be essential. Traditional safety education emphasizes that attitudes or mindsets toward safety can be changed by means of education. In spite of this, accidents and scandals occur frequently and never decrease. These problems must be approached on the basis of a full understanding of social intelligence and the limited rationality of decision making. The social dilemma (we do not necessarily cooperate despite understanding its importance, and we sometimes decide against cooperative behavior; non-cooperation yields a desirable result for an individual, but if all take non-cooperative actions, undesirable results are ultimately induced for all) must be solved in some way, and the transition from a relief (closed) society to a global (reliability) society must be realized as a whole. A new social system, in which cooperative relations can be obtained easily and reliably, must be constructed to support such an approach and prevent violation-based accidents.

    Scalable dynamic information flow tracking and its applications

    We are designing scalable dynamic information flow tracking techniques and employing them to carry out tasks related to debugging (bug location and fault avoidance), security (software attack detection), and data validation (lineage tracing of scientific data). The focus of our ongoing work is on developing online dynamic analysis techniques for long-running multithreaded programs that may be executed on a single core or on multiple cores to exploit thread-level parallelism.
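    The core idea of dynamic information flow (taint) tracking can be sketched at the value level: each value carries a set of source labels, and every operation unions the labels of its operands. This is a minimal illustrative sketch, not the authors' system; the class and label names are hypothetical:

```python
class Tainted:
    """A value paired with the set of information sources it depends on."""

    def __init__(self, value, labels=frozenset()):
        self.value = value
        self.labels = frozenset(labels)

    def __add__(self, other):
        # Propagate taint: the result depends on the sources of both operands.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.labels | other.labels)
        return Tainted(self.value + other, self.labels)

user_input = Tainted(41, {"stdin"})
result = user_input + 1
# result.labels now records that the value depends on "stdin"
```

    A debugger or attack detector can then inspect `result.labels` to see which inputs influenced a suspicious value.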

    Empirical Validation Of Requirement Error Abstraction And Classification: A Multidisciplinary Approach

    Software quality and reliability are a primary concern for successful development organizations. Over the years, researchers have focused on monitoring and controlling quality throughout the software process by helping developers detect as many faults as possible using different fault-based techniques. This thesis analyzes the software quality problem from a different perspective, taking a step back from faults to abstract their fundamental causes. The first step in this direction is developing a process for abstracting errors from faults throughout the software process. I have described the error abstraction process (EAP) and used it to develop an error taxonomy for the requirements stage. This thesis presents the results of a study that uses techniques based on the error abstraction process and investigates its application to requirements documents. The initial results show promise and provide some useful insights. These results are important for our further investigation.