
    Automated Fixing of Programs with Contracts

    This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix supporting tool is then a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault, ranked according to an estimate of their suitability. In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those competent programmers would write; the computational resources used were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing, and demonstrate that the AutoFix approach can be successfully applied to reduce the debugging burden in real-world scenarios.
    Comment: Minor changes after proofreading.
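
    The abstract above turns on one idea: contracts (pre- and postconditions) give an automated tool an oracle for detecting the fault and for validating candidate fixes. The sketch below is a minimal, hypothetical illustration of that idea in Python; AutoFix itself targets contract-equipped Eiffel programs, and the routine, the candidate fix, and the validation loop here are illustrative assumptions rather than the tool's actual algorithm.

    # Minimal sketch: contract-guided validation of a candidate fix.
    # The routine, contracts, and inputs are illustrative only.

    def contract(pre, post):
        """Wrap a function with a precondition and a postcondition check."""
        def decorate(fn):
            def wrapped(*args):
                assert pre(*args), "precondition violated"
                result = fn(*args)
                assert post(result, *args), "postcondition violated"
                return result
            return wrapped
        return decorate

    @contract(pre=lambda n: n >= 0,
              post=lambda r, n: r * r <= n < (r + 1) * (r + 1))
    def sqrt_floor(n):
        """Faulty routine: integer floor of the square root."""
        r = 0
        while r * r < n:      # bug: overshoots for non-square inputs
            r += 1
        return r

    @contract(pre=lambda n: n >= 0,
              post=lambda r, n: r * r <= n < (r + 1) * (r + 1))
    def sqrt_floor_fixed(n):
        """Candidate fix: corrected loop condition."""
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        return r

    def validate_fix(candidate, failing_inputs, passing_inputs):
        """Accept a candidate only if it satisfies the contract on the
        previously failing inputs without breaking the passing ones."""
        for n in failing_inputs + passing_inputs:
            try:
                candidate(n)
            except AssertionError:
                return False
        return True

    print(validate_fix(sqrt_floor, [3, 5, 8], [0, 1, 4, 9]))        # False
    print(validate_fix(sqrt_floor_fixed, [3, 5, 8], [0, 1, 4, 9]))  # True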

    A dynamic fault localization technique with noise reduction for Java programs

    Existing fault localization techniques combine various program features and similarity coefficients with the aim of precisely assessing the similarities among the dynamic spectra of these program features to predict the locations of faults. Many such techniques estimate the probability of a particular program feature causing the observed failures, but they ignore the noise introduced by the other features in the same set of executions, which may also lead to the observed failures. In this paper, we propose both the use of chains of key basic blocks as program features and an innovative similarity coefficient that has a noise reduction effect. We have implemented our proposal in a technique known as MKBC and empirically evaluated it using three real-life, medium-sized programs with real faults. The results show that MKBC significantly outperforms Tarantula, Jaccard, SBI, and Ochiai.
    © 2011 IEEE. The 11th International Conference on Quality Software (QSIC 2011), Madrid, Spain, 13-14 July 2011. In International Conference on Quality Software Proceedings, 2011, p. 11-2
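
    The baselines named above (Tarantula, Jaccard, and Ochiai) are standard spectrum-based coefficients computed from per-entity pass/fail coverage counts. MKBC's own coefficient is not given in this abstract, so the sketch below shows only those well-known baselines, assuming simple per-block counts; the variable names are illustrative.

    import math

    def suspiciousness(a_ef, a_ep, total_failed, total_passed):
        """Standard spectrum-based coefficients for one program entity.

        a_ef: failed runs that execute the entity
        a_ep: passed runs that execute the entity
        total_failed / total_passed: overall run counts
        """
        a_nf = total_failed - a_ef  # failed runs that miss the entity

        fail_rate = a_ef / total_failed if total_failed else 0.0
        pass_rate = a_ep / total_passed if total_passed else 0.0

        tarantula = (fail_rate / (fail_rate + pass_rate)
                     if fail_rate + pass_rate else 0.0)
        jaccard = (a_ef / (a_ef + a_nf + a_ep)
                   if a_ef + a_nf + a_ep else 0.0)
        ochiai = (a_ef / math.sqrt(total_failed * (a_ef + a_ep))
                  if total_failed and (a_ef + a_ep) else 0.0)
        return {"tarantula": tarantula, "jaccard": jaccard, "ochiai": ochiai}

    # Example: a block covered by 4 of 5 failed runs and 10 of 95 passed runs.
    print(suspiciousness(a_ef=4, a_ep=10, total_failed=5, total_passed=95))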

    Augmenting Bottom-Up Metamodels with Predicates

    Metamodeling refers to modeling a model. There are two metamodeling approaches for agent-based models (ABMs): (1) top-down and (2) bottom-up. The top-down approach enables users to decompose high-level mental models into behaviors and interactions of agents. In contrast, the bottom-up approach constructs a relatively small, simple model that approximates the structure and outcomes of a dataset gathered from the runs of an ABM. The bottom-up metamodel makes the behavior of the ABM comprehensible and exploratory analyses feasible. For most users, the construction of a bottom-up metamodel entails: (1) creating an experimental design, (2) running the simulation for all cases specified by the design, (3) collecting the inputs and output in a dataset, and (4) applying first-order regression analysis to find a model that effectively estimates the output. Unfortunately, the sums of input variables employed by first-order regression analysis give the impression that one can compensate for one component of the system by improving some other component, even if such substitution is inadequate or invalid. As a result, the metamodel can be misleading. We address these deficiencies with an approach that: (1) automatically generates Boolean conditions that highlight when substitutions and tradeoffs among variables are valid and (2) augments the bottom-up metamodel with the conditions to improve validity and accuracy. We evaluate our approach using several established agent-based simulations.
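
    Step (4) above, fitting a first-order regression metamodel to the collected runs, amounts to an ordinary least-squares fit of the output against the design inputs. The sketch below, using NumPy, is a minimal illustration under that assumption; the input names and the toy stand-in for the ABM are placeholders, not the simulations evaluated in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # (1)-(3): a full-factorial design over two inputs and the collected outputs.
    # A toy function plays the role of the ABM being metamodeled.
    x1 = np.repeat(np.linspace(0.0, 1.0, 5), 5)        # e.g. agent density (assumed)
    x2 = np.tile(np.linspace(0.0, 1.0, 5), 5)          # e.g. interaction radius (assumed)
    y = 2.0 * x1 + 0.5 * x2 + rng.normal(0.0, 0.05, x1.size)  # observed output

    # (4): first-order regression metamodel  y ~ b0 + b1*x1 + b2*x2.
    X = np.column_stack([np.ones_like(x1), x1, x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1, b2 = coef

    # The additive form implies x1 and x2 trade off freely; a Boolean predicate
    # such as (x1 > 0.5) and (x2 > 0.2) could mark where that substitution holds.
    print(f"metamodel: y ~ {b0:.2f} + {b1:.2f}*x1 + {b2:.2f}*x2")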

    Precise propagation of fault-failure correlations in program flow graphs

    Statistical fault localization techniques find suspicious faulty program entities in programs by comparing passed and failed executions. Existing studies show that such techniques can be promising in locating program faults. However, coincidental correctness and execution crashes may make program entities indistinguishable in the execution spectra under study, or cause inaccurate counting, thus severely affecting the precision of existing fault localization techniques. In this paper, we propose a BlockRank technique, which calculates, contrasts, and propagates the mean edge profiles between passed and failed executions to alleviate the impact of coincidental correctness. To address the issue of execution crashes, BlockRank identifies suspicious basic blocks by modeling how each basic block contributes to failures, apportioning its fault relevance to surrounding basic blocks in terms of the rate of successful transitions observed in passed and failed executions. BlockRank is empirically shown to be more effective than nine representative techniques on four real-life, medium-sized programs.
    © 2011 IEEE. Proceedings of the 35th IEEE Annual International Computer Software and Applications Conference (COMPSAC 2011), Munich, Germany, 18-22 July 2011, p. 58-6
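
    The abstract does not give BlockRank's formulas, so the sketch below only illustrates the first ingredient it names: computing mean edge profiles for passed and failed executions, contrasting them, and crediting the contrast to the blocks the edges reach. The graph, the profiles, and the scoring rule are illustrative assumptions, not the published technique.

    from collections import defaultdict

    def mean_edge_profile(runs):
        """Average traversal count of each control-flow edge over a set of runs.

        Each run maps an edge (src_block, dst_block) to how many times it
        was traversed in that execution."""
        totals = defaultdict(float)
        for run in runs:
            for edge, count in run.items():
                totals[edge] += count
        return {edge: total / len(runs) for edge, total in totals.items()}

    def block_contrast_scores(passed_runs, failed_runs):
        """Toy score: for each block, sum over its incoming edges of how much
        more often the edge is traversed (on average) in failed runs."""
        mean_passed = mean_edge_profile(passed_runs)
        mean_failed = mean_edge_profile(failed_runs)
        scores = defaultdict(float)
        for edge in set(mean_passed) | set(mean_failed):
            diff = mean_failed.get(edge, 0.0) - mean_passed.get(edge, 0.0)
            scores[edge[1]] += max(diff, 0.0)   # credit the destination block
        return dict(scores)

    # Example with two passed and one failed execution over blocks A, B, C.
    passed = [{("A", "B"): 3, ("B", "C"): 3}, {("A", "B"): 2, ("B", "C"): 2}]
    failed = [{("A", "B"): 2, ("B", "C"): 1, ("A", "C"): 4}]
    print(block_contrast_scores(passed, failed))   # block C stands out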

    A Generative Model of the Mutual Escalation of Anxiety Between Religious Groups

    We propose a generative agent-based model of the emergence and escalation of xenophobic anxiety in which individuals from two different religious groups encounter various hazards within an artificial society. The architecture of the model is informed by several empirically validated theories about the role of religion in intergroup conflict. Our results identify some of the conditions and mechanisms that engender the intensification of anxiety within and between religious groups. We define mutually escalating xenophobic anxiety as the increase of the average level of anxiety of the agents in both groups over time. Trace validation techniques show that the most common conditions under which longer periods of mutually escalating xenophobic anxiety occur are those in which the difference in the size of the groups is not too large and the agents experience social and contagion hazards at a level of intensity that meets or exceeds their thresholds for those hazards. Under these conditions, agents encounter out-group members more regularly and perceive them as threats, generating mutually escalating xenophobic anxiety. The model's capacity to grow the macro-level emergence of this phenomenon from micro-level agent behaviors and interactions provides the foundation for future work in this domain.
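
    As a rough illustration of the mechanism described above (a hazard whose intensity meets an agent's threshold during an out-group encounter raises anxiety, and escalation is read off the per-group averages), the following toy sketch uses made-up parameters and update rules; it is not the published model's architecture.

    import random

    random.seed(1)

    class Agent:
        def __init__(self, group, threshold):
            self.group = group          # "A" or "B"
            self.threshold = threshold  # hazard intensity needed to raise anxiety
            self.anxiety = 0.0

    def step(agents, hazard_intensity):
        """One tick: each agent meets a random other; an out-group encounter
        combined with a sufficiently intense hazard raises anxiety slightly."""
        for agent in agents:
            other = random.choice(agents)
            out_group = other.group != agent.group
            if out_group and hazard_intensity >= agent.threshold:
                agent.anxiety = min(1.0, agent.anxiety + 0.05)

    def group_mean(agents, group):
        levels = [a.anxiety for a in agents if a.group == group]
        return sum(levels) / len(levels)

    # Two similarly sized groups, as in the escalation-prone condition above.
    agents = ([Agent("A", random.uniform(0.3, 0.7)) for _ in range(50)]
              + [Agent("B", random.uniform(0.3, 0.7)) for _ in range(50)])

    for tick in range(100):
        step(agents, hazard_intensity=random.uniform(0.0, 1.0))

    print("mean anxiety A:", round(group_mean(agents, "A"), 3))
    print("mean anxiety B:", round(group_mean(agents, "B"), 3))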