28 research outputs found

    Using hardware performance counters for fault localization

    In this work, we leverage data collected from hardware performance counters as an abstraction mechanism for program executions and use these abstractions to identify likely causes of failures. Our approach can be summarized as follows: hardware counter data is collected from both successful and failed executions; the data collected from the successful executions is used to create normal-behavior models of the program; and deviations from these models observed in failed executions are scored and reported as likely causes of failures. The results of our experiments, conducted on three open-source projects, suggest that the proposed approach can effectively prioritize the space of likely causes of failures, which can in turn improve the turnaround time for defect fixes.
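    The deviation-scoring idea in this abstract lends itself to a compact sketch. The Python below is a minimal illustration under stated assumptions: counter samples are already available as name-to-value dictionaries, and the "normal behavior model" is a simple per-counter mean/standard-deviation summary with z-score ranking. The paper's actual models and scoring may differ.

```python
# Minimal sketch: model passing-run counters, rank deviations in a failing run.
# The mean/stddev model and z-score ranking are illustrative assumptions.
from statistics import mean, stdev

def build_model(passing_runs):
    """Summarize each hardware counter over the passing executions."""
    model = {}
    for c in passing_runs[0]:
        values = [run[c] for run in passing_runs]
        model[c] = (mean(values), stdev(values) or 1.0)  # avoid div-by-zero
    return model

def score_deviations(model, failing_run):
    """Score each counter in a failing run by its deviation from normal."""
    scores = {c: abs(failing_run[c] - mu) / sigma
              for c, (mu, sigma) in model.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: counters sampled from three passing runs and one failing run.
passing = [
    {"branch_misses": 120, "cache_misses": 3000},
    {"branch_misses": 130, "cache_misses": 3100},
    {"branch_misses": 125, "cache_misses": 2950},
]
failing = {"branch_misses": 480, "cache_misses": 3050}

for counter, score in score_deviations(build_model(passing), failing):
    print(counter, round(score, 1))  # highest-scoring counters reported first
```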

    Identifying Patch Correctness in Test-Based Program Repair

    Test-based automatic program repair has attracted a lot of attention in recent years. However, the test suites used in practice are often too weak to guarantee correctness, and existing approaches often generate a large number of incorrect patches. To reduce the number of incorrect patches generated, we propose a novel approach that heuristically determines the correctness of the generated patches. The core idea is to exploit the behavior similarity of test case executions. The passing tests on the original and patched programs are likely to behave similarly, while the failing tests on the original and patched programs are likely to behave differently. Also, if two tests exhibit similar runtime behavior, they are likely to have the same test results. Based on these observations, we generate new test inputs to enhance the test suites and use their behavior similarity to determine patch correctness. Our approach is evaluated on a dataset of 139 patches generated by existing program repair systems, including jGenProg, Nopol, jKali, ACS, and HDRepair. Our approach successfully prevented 56.3% of the incorrect patches from being generated, without blocking any correct patches.
    Comment: ICSE 201
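    The decision rule described in this abstract, similar behavior for passing tests and changed behavior for failing tests, can be illustrated compactly. The sketch below rests on assumptions of my own: a test execution is abstracted as its statement coverage set, and similarity is Jaccard similarity against an arbitrary 0.9 threshold. Neither is claimed to be the paper's exact behavior abstraction or measure.

```python
# Illustrative sketch of the behavior-similarity heuristic for patch triage.
# Coverage-set abstraction, Jaccard measure, and threshold are assumptions.

def jaccard(a, b):
    """Similarity of two coverage sets in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 1.0

def patch_looks_correct(orig_traces, patched_traces, passing, failing,
                        threshold=0.9):
    """orig_traces / patched_traces: test name -> coverage set per version.
    passing / failing: tests that pass / fail on the original program.
    A plausibly correct patch keeps passing tests behaving similarly and
    makes originally failing tests behave differently."""
    for t in passing:
        if jaccard(orig_traces[t], patched_traces[t]) < threshold:
            return False  # a passing test changed behavior too much
    for t in failing:
        if jaccard(orig_traces[t], patched_traces[t]) >= threshold:
            return False  # a failing test barely changed behavior
    return True

# Example: one passing and one failing test, behavior as coverage sets.
orig = {"t_pass": {1, 2, 3, 4}, "t_fail": {1, 2, 5}}
patched = {"t_pass": {1, 2, 3, 4}, "t_fail": {1, 2, 6, 7}}
print(patch_looks_correct(orig, patched, ["t_pass"], ["t_fail"]))  # True
```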

    Enhancing POI Testing Approach through the Use of Additional Information

    Recently, a new approach to perform regression testing has been defined: point of interest (POI) testing. A POI, in this context, is any expression of a program. The approach receives as input a set of relations between POIs from one version of a program and POIs from another version, together with a sequence of entry points, i.e., test cases. Then, program instrumentation, input test case generation, and different comparison functions are used to obtain the final report, which indicates whether the alternative version of the program behaves as expected, e.g., whether it produces the same outputs or uses less CPU/memory. In this paper, we present a method to improve POI testing by including additional context information for a certain type of POIs. Concretely, we use this method to obtain an enhanced tracing of calls. Additionally, it enables new comparison modes and a categorization of unexpected behaviours.
    This work has been partially supported by MINECO/AEI/FEDER (EU) under grant TIN2016-76843-C4-1-R, and by the Generalitat Valenciana under grant PROMETEOII/2015/013 (SmartLogic). Salvador Tamarit was partially supported by the Conselleria de Educación, Investigación, Cultura y Deporte de la Generalitat Valenciana under grant APOSTD/2016/036.
    Pérez-Rubio, S.; Tamarit Muñoz, S. (2019). Enhancing POI Testing Approach through the Use of Additional Information. Lecture Notes in Computer Science 11285:74-90. https://doi.org/10.1007/978-3-030-16202-3_5
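    The context-enriched call tracing this abstract describes can be pictured with a short sketch. The Python below is purely illustrative: it assumes each POI occurrence is recorded as a (POI id, call context, value) triple, and record_poi, compare_traces, and the mismatch categories are hypothetical names of my own, not the paper's tool or its comparison modes.

```python
# Illustrative sketch: record POI values with call context, then compare two
# version traces and categorize mismatches. All names here are hypothetical.
import traceback

TRACE = []

def record_poi(poi_id, value):
    """Record a POI value together with the call stack that reached it."""
    context = tuple(f.name for f in traceback.extract_stack()[:-1])
    TRACE.append((poi_id, context, value))

def compare_traces(old_trace, new_trace):
    """Pair up POI occurrences positionally and categorize mismatches.
    (A real tool would pair them via the input POI relations instead.)"""
    reports = []
    for (id1, ctx1, v1), (_, ctx2, v2) in zip(old_trace, new_trace):
        if v1 != v2:
            reports.append(("value_mismatch", id1, ctx1, v1, v2))
        elif ctx1 != ctx2:
            # Same value, different call context: only visible because the
            # trace carries the extra context information.
            reports.append(("context_mismatch", id1, ctx1, ctx2))
    return reports
```

    The context field is what distinguishes this from plain value tracing: two occurrences with equal values but different calling contexts can still be flagged, which is one way the extra information enables new comparison modes.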