    Exploiting model morphology for event-based testing

    Model-based testing employs models for testing. Model-based mutation testing (MBMT) additionally introduces fault models, called mutants, obtained by applying mutation operators to the original model. One problem encountered in MBMT is the elimination of equivalent mutants and of multiple mutants that model the same fault. Another is the need to compare each mutant to the original model for test generation. This paper proposes an event-based approach to MBMT that is not fixed on single events and a single model but rather operates on sequences of events of length k ≥ 1 and invokes a sequence of models derived from the original one by varying its morphology based on k. The approach employs formal grammars, related mutation operators, and algorithms to generate test cases, enabling the following: (1) the exclusion of equivalent mutants and multiple mutants; (2) the generation of a test case in linear time to kill a selected mutant, without comparing it to the original model; (3) the analysis of morphologically different models, enabling the systematic generation of mutants and thereby extending the set of fault models studied in the related literature. Three case studies validate the approach and analyze its characteristics in comparison to random testing and another MBMT approach.
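
    As a rough, self-contained illustration of the flavor of event-based mutation (our sketch, not the paper's grammar-based algorithm; the event alphabet and the swap operator are assumptions), the following enumerates event sequences of length k and derives mutants from each:

    ```python
    from itertools import product

    # Hypothetical event alphabet of a behavioral model (an assumption,
    # not taken from the paper).
    EVENTS = ["open", "read", "write", "close"]

    def k_sequences(events, k):
        """Enumerate all event sequences of length k over the alphabet."""
        return [list(seq) for seq in product(events, repeat=k)]

    def swap_mutants(sequence):
        """A simple mutation operator: swap each pair of adjacent events.

        Each swap models a possible ordering fault; swaps that leave the
        sequence unchanged would be equivalent mutants and are discarded.
        """
        mutants = []
        for i in range(len(sequence) - 1):
            mutant = sequence[:]
            mutant[i], mutant[i + 1] = mutant[i + 1], mutant[i]
            if mutant != sequence:
                mutants.append(mutant)
        return mutants

    for seq in k_sequences(EVENTS, 2):
        for m in swap_mutants(seq):
            print(seq, "->", m)
    ```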

    Quantitative metrics for mutation testing

    Program mutation is the process of generating versions of a base program by applying elementary syntactic modifications; this technique has been used in program testing in a variety of applications, most notably to assess the quality of a test data set. A good test set will discover the difference between the original program and a mutant unless the mutant is semantically equivalent to the original program despite being syntactically distinct. Equivalent mutants are a major nuisance in the practice of mutation testing because they introduce a significant amount of bias and uncertainty into the analysis of test results; indeed, mutants are useful only to the extent that they define functions distinct from the base program. Yet, despite several decades of research, the identification of equivalent mutants remains a tedious, inefficient, ineffective, and error-prone process. The approach adopted in this dissertation is to turn away from the goal of identifying individual mutants that are semantically equivalent to the base program, in favor of an approach that merely focuses on estimating their number. To this effect, the following question is considered: what makes a base program P prone to produce equivalent mutants? The position taken in this work is that what makes a program prone to generate equivalent mutants is the same property that makes a program fault tolerant, since fault tolerance is by definition the ability to maintain correct behavior despite the presence and sensitization of faults; whether these faults stem from poor design or from mutation operators does not matter. Hence, if we can quantify the redundancy of a program, we should be able to use redundancy metrics to estimate its ratio of equivalent mutants (REM for short). Using redundancy metrics previously defined to reflect the state redundancy of a program, its functional redundancy, its non-injectivity, and its non-determinacy, this dissertation makes the following contributions:
    - The design and implementation of a Java compiler, using compiler generation technology, to analyze Java code and compute its redundancy metrics.
    - An empirical study on standard mutation testing benchmarks to analyze the statistical relationships between the REM of a program and its redundancy metrics.
    - The derivation of regression models to estimate the REM of a program from its compiler-generated redundancy metrics, for a variety of mutation policies.
    - The use of the REM to address a number of mutation-related issues, including: estimating the level of redundancy between non-equivalent mutants; redefining the mutation score of a test data set to take into account the possibility that mutants may be semantically equivalent to each other; and using the REM to derive a minimal set of mutants without having to analyze all pairs of mutants for equivalence.
    The main conclusions of this work are the following:
    - The REM plays a very important role in the mutation analysis of a program, as it gives many useful insights into the properties of its mutants.
    - All the attributes that can be computed from the REM of a program are very sensitive to its exact value; hence the REM must be estimated with great precision.
    Consequently, the focus of future research is to revisit the Java compiler to enhance the precision of its redundancy metric estimates, and to revisit the regression models accordingly.
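
    To make equivalent mutants and their effect on the mutation score concrete, here is a minimal sketch (illustrative only, not taken from the dissertation): one mutant is killable, while the other is syntactically distinct yet semantically equivalent to the base function and caps the naive mutation score.

    ```python
    def base(x):
        return abs(x)

    # Mutant 1: a small syntactic change that alters behavior (killable).
    def mutant_negate(x):
        return -abs(x)

    # Mutant 2: syntactically distinct but semantically equivalent to base,
    # since abs(-x) == abs(x) for every integer x; no test can kill it.
    def mutant_equiv(x):
        return abs(-x)

    tests = [-3, 0, 7]
    mutants = [mutant_negate, mutant_equiv]

    # A mutant is killed if some test distinguishes it from the base program.
    killed = sum(any(m(t) != base(t) for t in tests) for m in mutants)

    print("naive mutation score:", killed / len(mutants))  # 0.5
    # Discounting the one known-equivalent mutant gives the fairer score 1.0,
    # which is what an estimated REM is used for at scale.
    print("adjusted score:", killed / (len(mutants) - 1))  # 1.0
    ```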

    Reversible Computation: Extending Horizons of Computing

    This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems, as well as revolutionary reversible logic gates and circuits, but these can be realized and have a lasting effect only if firm conceptual and theoretical foundations are established first.
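
    As a minimal illustration of the paradigm (our own sketch, not drawn from the survey), the following pair of functions implements a reversible update and its exact inverse: a CNOT-style XOR step destroys no information, so it can be run backwards exactly.

    ```python
    def forward(x, y):
        """A reversible step in the style of a controlled-NOT gate:
        (x, y) -> (x, y XOR x). No information is destroyed."""
        return x, y ^ x

    def backward(x, y):
        """The exact inverse: applying the same XOR again restores the
        original pair, since (y ^ x) ^ x == y."""
        return x, y ^ x

    state = (5, 9)
    fwd = forward(*state)
    assert backward(*fwd) == state  # runs backwards as easily as forwards
    print(state, "->", fwd, "->", backward(*fwd))
    ```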

    Acta Cybernetica : Tomus 2. Fasciculus 2.


    A hierarchy of languages, logics, and mathematical theories

    We present mathematics from a foundational perspective as a hierarchy in which each tier consists of a language, a logic, and a mathematical theory. Each tier in the hierarchy subsumes all preceding tiers in the sense that its language, logic, and mathematical theory generalize all preceding languages, logics, and mathematical theories. Starting from the root tier, the mathematical theories in this hierarchy are: combinatory logic restricted to the identity I, combinatory logic, ZFC set theory, constructive type theory, and category theory. The languages of the first four tiers correspond to the languages of the Chomsky hierarchy: in combinatory logic, Ix = x gives rise to a regular language; the language generated by S and K in combinatory logic is context-free; first-order logic is context-sensitive; and the typed lambda calculus of type theory is recursively enumerable. The logic of each tier can be characterized in terms of the cardinality of the set of its truth values: combinatory logic restricted to I has 0 truth values, while combinatory logic has 1, first-order logic 2, constructive type theory 3, and category theory ω_0. We conjecture that the cardinality of objects whose existence can be established in each tier is bounded; for example, combinatory logic is bounded in this sense by ω_0 and ZFC set theory by the least inaccessible cardinal. We also show that classical recursion theory presents a framework for generating the above hierarchy in terms of the initial functions zero, projection, and successor, followed by composition and μ-recursion, starting with the zero function I in combinatory logic. This paper begins with a theory of glossogenesis, i.e. a theory of the origin of language, since this theory shows that natural language has deep connections to category theory and since it was through these connections that the last tier and ultimately the whole hierarchy were discovered. The discussion covers implications of the hierarchy for mathematics, physics, cosmology, theology, linguistics, extraterrestrial communication, and artificial intelligence.
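
    To ground the combinatory-logic tiers, here is a small sketch (ours, not the paper's) of the I, K, and S combinators as curried functions, including the classical identity S K K x = x:

    ```python
    # The basic combinators, written as curried Python functions.
    I = lambda x: x                               # I x = x
    K = lambda x: lambda y: x                     # K x y = x
    S = lambda x: lambda y: lambda z: x(z)(y(z))  # S x y z = x z (y z)

    # S and K alone generate I: S K K z = K z (K z) = z.
    SKK = S(K)(K)
    assert SKK(42) == I(42) == 42
    print(SKK("any value"))  # -> any value
    ```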

    Mutation Testing Advances: An Analysis and Survey


    Static Analysis for ECMAScript String Manipulation Programs

    In recent years, dynamic languages such as JavaScript and Python have been increasingly used in a wide range of fields and applications. Their tricky and often misunderstood behaviors pose a great challenge for the static analysis of these languages. A key aspect of any dynamic-language program is its heavy use of strings, since strings can be implicitly converted to values of other types, transformed by string-to-code primitives, or used to access object properties. Unfortunately, string analyses for dynamic languages still lack precision and do not take some important string features into account. In this scenario, more precise string analyses become a necessity. The goal of this paper is to take a first step toward precisely handling dynamic-language string features. In particular, we propose a new abstract domain approximating strings as finite state automata, together with an abstract interpretation-based static analysis for the most common string-manipulating operations provided by the ECMAScript specification. The proposed analysis comes with a prototype static analyzer implementation for an imperative string-manipulating language, allowing us to show and evaluate the improved precision of the proposed analysis.
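
    As a heavily simplified illustration of the idea (our own reduction, not the paper's actual domain), each abstract string below is a small nondeterministic finite automaton over-approximating the set of strings a variable may hold, and abstract concatenation is automaton concatenation:

    ```python
    class AbstractString:
        """A toy abstract value: an NFA over-approximating a set of strings."""

        def __init__(self, transitions, start, finals):
            self.transitions = transitions  # (state, char) -> set of states
            self.start = start
            self.finals = finals

        @staticmethod
        def of(literal, tag):
            """Abstract a concrete string literal as a linear automaton."""
            trans = {(tag + str(i), ch): {tag + str(i + 1)}
                     for i, ch in enumerate(literal)}
            return AbstractString(trans, tag + "0", {tag + str(len(literal))})

        def concat(self, other):
            """Abstract concatenation: link this automaton's final states to
            the successors of the other automaton's start state."""
            trans = dict(self.transitions)
            trans.update(other.transitions)
            for (state, ch), targets in other.transitions.items():
                if state == other.start:
                    for f in self.finals:
                        trans[(f, ch)] = trans.get((f, ch), set()) | targets
            finals = set(other.finals)
            if other.start in other.finals:  # other accepts the empty string
                finals |= self.finals
            return AbstractString(trans, self.start, finals)

        def accepts(self, s):
            """Check whether a concrete string is in the approximated set."""
            current = {self.start}
            for ch in s:
                current = {q2 for q in current
                           for q2 in self.transitions.get((q, ch), set())}
            return bool(current & self.finals)

    # "fo" concatenated with "o" exactly represents "foo" here; in general
    # the automaton may over-approximate the reachable strings.
    ab = AbstractString.of("fo", "a").concat(AbstractString.of("o", "b"))
    assert ab.accepts("foo") and not ab.accepts("fo")
    ```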