17 research outputs found

    Trust in MDE Components: the DOMINO Experiment

    A large number of modeling activities can be automated or computer assisted. This automation enables faster and more robust software development. However, engineers must ensure that the models have the properties required by the application. To meet this requirement, the DOMINO project (DOMaINs and methodological prOcess) proposes the use of so-called trustworthy Model-Driven Engineering (MDE) components and aims to provide a methodology for the validation and qualification of such components.

    Rule-based Assessment of Test Quality.


    Static Analysis of Model Transformations for Effective Test Generation

    Model transformations are an integral part of several computing systems that manipulate interconnected graphs of objects, called models, in an input domain specified by a metamodel and a set of invariants. Test models are used to look for faults in a transformation. A test model contains a specific set of objects, their interconnections, and values for their attributes. Can we automatically generate an effective set of test models using knowledge from the transformation? We present a white-box testing approach that uses static analysis to guide the automatic generation of test inputs for transformations. Our static analysis uncovers knowledge about how the input model elements are accessed by transformation operations. This information is called the input metamodel footprint due to the transformation. We transform the footprint, the input metamodel, its invariants, and the transformation pre-conditions into a constraint satisfaction problem in Alloy. We solve the problem to generate sets of test models containing traces of the footprint. Are these test models effective? With the help of a case study transformation, we evaluate the effectiveness of these test inputs. We use mutation analysis to show that the test models generated from footprints are more effective (97.62% avg. mutation score) in detecting faults than previously developed approaches based on input domain coverage criteria (89.9% avg.) and unguided generation (70.1% avg.).
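    The footprint-guided pipeline described above can be illustrated with a deliberately tiny sketch. Solving with Alloy is out of scope here; a brute-force search over miniature models plays the same role, and all names (the class tuples, the uniqueness invariant, the `is_abstract` footprint) are illustrative assumptions, not the paper's actual encoding.

    ```python
    # Hypothetical sketch of footprint-guided test-model generation.
    # The paper encodes footprint + metamodel invariants in Alloy; a
    # brute-force enumeration over tiny models illustrates the same idea.

    from itertools import product

    NAMES = ["A", "B"]

    def candidate_models():
        # A "model" is a tuple of classes; each class = (name, is_abstract).
        for n1, n2, a1, a2 in product(NAMES, NAMES, [True, False], [True, False]):
            yield ((n1, a1), (n2, a2))

    def invariant(model):
        # Metamodel invariant (assumed): class names are unique.
        names = [name for name, _ in model]
        return len(names) == len(set(names))

    def covers_footprint(model):
        # Footprint (assumed): the transformation branches on is_abstract,
        # so a good test model exercises both truth values.
        flags = {flag for _, flag in model}
        return flags == {True, False}

    test_models = [m for m in candidate_models()
                   if invariant(m) and covers_footprint(m)]
    print(len(test_models))
    ```

    Each surviving model both satisfies the metamodel invariants and leaves a trace of the footprint, which is exactly the property the constraint solver enforces at scale.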

    Model Transformation Testing and Debugging: A Survey

    Model transformations are the key technique in Model-Driven Engineering (MDE) to manipulate and construct models. As a consequence, the correctness of software systems built with MDE approaches relies mainly on the correctness of model transformations, and thus, detecting and locating bugs in model transformations have been popular research topics in recent years. This surge of work has led to a vast literature on model transformation testing and debugging, which makes it challenging to gain a comprehensive view of the current state of the art. This is an obstacle for newcomers to the topic and for MDE practitioners wishing to apply these approaches. This paper presents a survey on testing and debugging model transformations based on an analysis of the papers published on these topics. We explore the trends, advances, and evolution over the years, bringing together previously disparate streams of work and providing a comprehensive view of these thriving areas. In addition, we present a conceptual framework to understand and categorise the different proposals. Finally, we identify several open research challenges and propose specific action points for the model transformation community. This work is partially supported by the European Commission (FEDER) and Junta de Andalucía under projects APOLO (US-1264651) and EKIPMENT-PLUS (P18-FR-2895), by the Spanish Government (FEDER/Ministerio de Ciencia e Innovación – Agencia Estatal de Investigación) under projects HORATIO (RTI2018-101204-B-C21), COSCA (PGC2018-094905-B-I00) and LOCOSS (PID2020-114615RB-I00), by the Austrian Science Fund (P 28519-N31, P 30525-N31), and by the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology and Development (CDG).

    Featured Model-based Mutation Analysis

    Model-based mutation analysis is a powerful but expensive testing technique. We tackle its high computation cost by proposing an optimization technique that drastically speeds up the mutant execution process. Central to this approach is the Featured Mutant Model, a modelling framework for mutation analysis inspired by the software product line paradigm. It uses behavioural variability models, viz., Featured Transition Systems, which enable the optimized generation, configuration and execution of mutants. We provide results, based on models with thousands of transitions, suggesting that our technique is fast and scalable. We found that it outperforms previous approaches by several orders of magnitude and that it makes higher-order mutation practically applicable.
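    The core idea of the Featured Mutant Model can be sketched in a few lines: all mutants are superimposed on one transition system, each transition carries a presence condition saying which variants it belongs to, and one run of a test executes every variant at once. The states, actions and mutant ids below are invented for illustration and do not come from the paper.

    ```python
    # Toy sketch (not the paper's implementation): a Featured Transition
    # System encodes the original program and all its mutants in one model.
    # Each transition carries a presence condition: the set of variants
    # (original + mutant ids) for which that transition exists.

    VARIANTS = {"orig", "m1", "m2"}

    # (source, action) -> list of (target, variants for which this edge exists)
    fts = {
        ("s0", "a"): [("s1", {"orig", "m2"}), ("s2", {"m1"})],  # m1 mutates the 'a' edge
        ("s1", "b"): [("s2", {"orig", "m1"}), ("s1", {"m2"})],  # m2 mutates the 'b' edge
    }

    def run(word):
        """Execute one test word over all variants simultaneously.
        Returns variant -> final state; blocked variants are dropped."""
        current = {v: "s0" for v in VARIANTS}
        for action in word:
            nxt = {}
            for variant, state in current.items():
                for target, cond in fts.get((state, action), []):
                    if variant in cond:
                        nxt[variant] = target
                        break
            current = nxt
        return current

    final = run(["a", "b"])
    # A mutant is killed by the test if its behaviour differs from "orig".
    killed = {v for v in VARIANTS - {"orig"} if final.get(v) != final.get("orig")}
    print(sorted(killed))
    ```

    The speed-up comes from sharing: the test word is walked once over the combined model instead of once per mutant, which is what makes higher-order mutation tractable at scale.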

    A Mutation Testing Environment in Eclipse

    Mutation testing is a technique to measure the effectiveness of a set of test cases and help improve it. In this kind of testing, faults are injected into a program to obtain different faulty versions of it. Each error introduced in the program is called a mutation, and the new versions of the original program are called mutants; they are used to check whether a given set of test cases is able to detect the faults introduced in each mutant. If they are not detected, the developer must provide new test cases that detect the created mutants, thus improving the overall quality of the initial test suite. The objective of this Bachelor's Project is to build a mutation tool that automatically generates mutants of a program written in the ATL language (Atlas Transformation Language), in order to measure the effectiveness of the set of test cases designed for testing the program. ATL is a programming language for defining model transformations, which are programs whose input and output arguments are models. They are used in the context of Model-Driven Development, a software development method in which the manipulated data are models. The mutations generated and described in this work are oriented to this kind of software.
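    The mutation-testing loop described in this abstract can be sketched generically. The program, mutants and test cases below are hypothetical Python stand-ins, not ATL mutation operators; only the kill/score logic is the point.

    ```python
    # Illustrative sketch of mutation testing: inject small faults into a
    # program, then check whether the test suite notices each faulty version.

    def original(a, b):
        return a + b

    # Each "mutant" is a variant of the program with one injected fault.
    mutants = {
        "plus_to_minus": lambda a, b: a - b,
        "plus_to_times": lambda a, b: a * b,
        "swap_args":     lambda a, b: b + a,  # equivalent mutant: undetectable
    }

    # Test cases: (arguments, expected result of the original program).
    tests = [((2, 3), 5), ((0, 4), 4)]

    def is_killed(mutant):
        """A mutant is 'killed' if at least one test case exposes the fault."""
        return any(mutant(*args) != expected for args, expected in tests)

    killed = {name for name, m in mutants.items() if is_killed(m)}
    score = len(killed) / len(mutants)  # mutation score of the test suite
    print(sorted(killed), score)
    ```

    A surviving non-equivalent mutant signals a weakness in the suite: the developer adds a test case that distinguishes it from the original, raising the mutation score.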

    A Multi-Level Framework for the Detection, Prioritization and Testing of Software Design Defects

    Large-scale software systems exhibit high complexity and become difficult to maintain. In fact, it has been reported that the cost dedicated to maintenance and evolution activities amounts to more than 80% of total software costs. In particular, object-oriented software systems need to follow traditional design principles such as data abstraction, encapsulation, and modularity. However, some of these non-functional requirements can be violated by developers for many reasons, such as inexperience with object-oriented design principles or deadline pressure. The high cost of maintenance activities could be greatly reduced by providing automatic or semi-automatic solutions that increase a system's comprehensibility, adaptability and extensibility and avoid bad practices. The detection of refactoring opportunities focuses on the detection of bad smells, also called antipatterns, which have been recognized as design situations that may indirectly cause software failures. The correction of one bad smell may influence other bad smells. Thus, the order in which bad smells are fixed is important to reduce the effort and maximize the refactoring benefits. However, very few studies have addressed the problem of finding the optimal sequence in which refactoring opportunities, such as bad smells, should be ordered. A few other studies have tried to prioritize refactoring opportunities based on the types of bad smells in order to determine their severity. However, correcting severe bad smells may require a high effort, which should be optimized, and the relationships between the different bad smells are not considered during the prioritization process. The main goal of this research is to help software engineers refactor large-scale systems with minimum effort and few interactions, covering the detection, management and testing of refactoring opportunities. We report the results of an empirical study with an implementation of our bi-level approach.
    The obtained results provide evidence to support the claim that our proposal is, on average, more efficient than existing techniques, based on a benchmark of 9 open source systems and 1 industrial project. We have also evaluated the relevance and usefulness of the proposed bi-level framework for helping software engineers improve the quality of their systems and support the detection of transformation errors by generating efficient test cases. Ph.D. dissertation, Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136075/1/Dilan_Sahin_Final Dissertation.pdf

    Towards an Automation of the Mutation Analysis Dedicated to Model Transformation

    A major benefit of Model Driven Engineering (MDE) lies in the automatic generation of artefacts from high-level models, through intermediary levels, using model transformations. In such a process, the input must be well designed and the model transformations should be trustworthy. Due to the specificities of models and transformations, classical software testing techniques have to be adapted. Among these techniques, mutation analysis has been ported and a set of mutation operators has been defined. However, mutation analysis currently requires considerable manual work, and its test data set improvement activity is costly: testers see it as a difficult and time-consuming job, which reduces the benefits of mutation analysis. This paper addresses the test data set improvement activity. Model transformation traceability, in conjunction with a model of mutation operators and a dedicated algorithm, makes it possible to automatically or semi-automatically produce test models that detect new faults. The proposed approach is validated and illustrated in a case study written in Kermeta.
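    The role traceability plays in improving the test data set can be sketched as follows. The trace format, rule names and mutant ids below are invented for illustration and are not the paper's model or Kermeta API.

    ```python
    # Hypothetical sketch: using a transformation trace to decide how to
    # improve the test model set when mutants survive.

    # Trace records: (input_element, transformation_rule, output_element)
    trace = [
        ("Person:alice", "Person2Table", "Table:alice"),
        ("Person:bob",   "Person2Table", "Table:bob"),
    ]

    # For each live (undetected) mutant, the rule its mutation operator touched.
    mutated_rule = {"m1": "Person2Table", "m2": "Address2Column"}

    executed_rules = {rule for _, rule, _ in trace}

    # Mutants whose rule never fired: only a new test model exercising
    # that rule can possibly kill them.
    needs_new_model = {m for m, rule in mutated_rule.items()
                       if rule not in executed_rules}

    # For the remaining live mutants, the trace pinpoints which input
    # elements to perturb when deriving an improved test model.
    candidates = {m: [src for src, rule, _ in trace if rule == mutated_rule[m]]
                  for m in mutated_rule if m not in needs_new_model}

    print(sorted(needs_new_model), candidates)
    ```

    This is the kind of information that lets the paper's algorithm propose new test models automatically instead of leaving the tester to guess which inputs to vary.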

    State of the Art of Model Transformation Verification

    Model-Driven Development (MDD) is a software engineering approach based on modeling a system as the main development activity and on building the system through transformations of those models. Its success strongly depends on the availability of appropriate languages and tools to perform the transformations and to validate their correctness. Regarding the latter point, this document presents a survey of the state of the art of the different approaches and techniques used to verify model transformations in MDD. The main characteristics of the existing approaches are analysed, namely: test-case-based approaches, model checking, and deductive methods. The different techniques within each approach are also studied, and the tools used in the literature are presented, with examples of their use.