2,490 research outputs found

    Spectrum-Based Fault Localization in Model Transformations

    Model transformations play a cornerstone role in Model-Driven Engineering (MDE), as they provide the essential mechanisms for manipulating and transforming models. The correctness of software built using MDE techniques relies heavily on the correctness of model transformations. However, debugging them is challenging and error-prone, and the situation becomes more critical as the size and complexity of model transformations grow, to the point where manual debugging is no longer feasible. Spectrum-Based Fault Localization (SBFL) uses the results of test cases and their corresponding code coverage information to estimate the likelihood of each program component (e.g., statement) being faulty. In this article we present an approach that applies SBFL to locate the faulty rules in model transformations. We evaluate the feasibility and accuracy of the approach by comparing the effectiveness of 18 different state-of-the-art SBFL techniques at locating faults in model transformations. The evaluation revealed that the best techniques, namely Kulczynski2, Mountford, Ochiai, and Zoltar, lead the debugger to inspect a maximum of three rules to locate the bug in around 74% of the cases. Furthermore, we compare our approach with a static approach for fault localization in model transformations, observing a clear superiority of the proposed SBFL-based method.
    Comisión Interministerial de Ciencia y Tecnología TIN2015-70560-R; Junta de Andalucía P12-TIC-186
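    The abstract above names Ochiai among the best-performing SBFL techniques. As a minimal sketch of how such a suspiciousness score is computed from a test spectrum (the rule names and coverage counts below are illustrative assumptions, not data from the paper):

```python
from math import sqrt

def ochiai(ef, nf, ep):
    """Ochiai suspiciousness for one transformation rule.

    ef: failing tests that cover the rule
    nf: failing tests that do not cover the rule
    ep: passing tests that cover the rule
    """
    denom = sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Hypothetical spectrum: rule -> (ef, nf, ep)
spectra = {
    "Rule_A": (4, 0, 1),   # covered by all 4 failing tests, 1 passing test
    "Rule_B": (1, 3, 5),
    "Rule_C": (0, 4, 6),
}

# Rank rules from most to least suspicious; the debugger inspects them in order.
ranking = sorted(spectra, key=lambda r: ochiai(*spectra[r]), reverse=True)
print(ranking)  # ['Rule_A', 'Rule_B', 'Rule_C']
```

    The other formulas the paper compares (Kulczynski2, Mountford, Zoltar, etc.) differ only in how they combine the same four coverage counts.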

    Traceability for Mutation Analysis in Model Transformation

    Model transformations cannot be tested directly with program-testing techniques; those techniques have to be adapted to the characteristics of models. In this paper we focus on one test technique: mutation analysis. This technique aims to qualify a test data set by analyzing the execution results of intentionally faulty program versions. If the degree of qualification is not satisfactory, the test data set has to be improved. In the context of models, this step is currently rather tedious and performed manually. We propose an approach based on traceability mechanisms to ease the improvement of the test model set in the mutation analysis process. Using a benchmark, we illustrate the quick, automatic identification of the input model to change. A new model is then created in order to raise the quality of the test data set.
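    The qualification loop the abstract describes (run the test set against intentionally faulty versions, score it, improve it if too many mutants survive) can be sketched as follows; the transformation step and its two mutants here are hypothetical stand-ins, not taken from the paper:

```python
def original(x):        # the transformation step under test
    return x * 2

def mutant_plus(x):     # '*' mutated to '+'
    return x + 2

def mutant_triple(x):   # constant 2 mutated to 3
    return x * 3

MUTANTS = [mutant_plus, mutant_triple]

def mutation_score(test_inputs):
    """Fraction of mutants 'killed': a mutant is killed when some test
    input makes its output differ from the original's output."""
    killed = sum(
        any(m(t) != original(t) for t in test_inputs) for m in MUTANTS
    )
    return killed / len(MUTANTS)

print(mutation_score([2]))     # 0.5 -- x=2 cannot tell x+2 from x*2
print(mutation_score([2, 5]))  # 1.0 -- adding x=5 kills the surviving mutant
```

    The paper's contribution sits in the improvement step: traceability links point from the surviving mutant back to the input model that should be changed, rather than leaving that search to the tester.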

    A Computational Architecture Based on RFID Sensors for Traceability in Smart Cities

    Information and Communication Technology (ICT) is presented as the main enabler of more efficient and sustainable management of city resources, while ensuring that citizens' needs to improve their quality of life are satisfied. A key element will be the creation of new systems that acquire context information, automatically and transparently, in order to provide it to decision support systems. In this paper, we present a novel distributed system for obtaining, representing, and providing the flow and movement of people in densely populated geographical areas. In order to accomplish these tasks, we propose the design of a smart sensor network based on RFID communication technologies, reliability patterns, and integration techniques. Contrary to other proposals, this system represents a comprehensive solution that permits the acquisition of user information in a transparent and reliable way in a non-controlled and heterogeneous environment. This knowledge will be useful in moving towards the design of smart cities in which decision support on transport strategies, business evaluation, or initiatives in the tourism sector will be backed by real, relevant information. Finally, a case study is presented that allows the validation of the proposal.

    Traceability Links Recovery among Requirements and BPMN models

    Thesis by compendium of publications. Throughout the pages of this document, I present the results of the research carried out in the context of my PhD studies. During this research, I studied the process of Traceability Links Recovery between natural language requirements and industrial software models. More precisely, due to their popularity and extensive usage, I studied the process of Traceability Links Recovery between natural language requirements and Business Process Models, also known as BPMN models. In order to carry out the research, I focused my work on two main objectives: (1) the development of Traceability Links Recovery techniques between natural language requirements and BPMN models, and (2) the validation and analysis of the results obtained by the developed techniques in industrial domain case studies. The results of the research have been written up and published in forums, conferences, and journals specialized in the topics and context of the research. This thesis document introduces the topics, context, and objectives of the research, presents the academic publications that resulted from the work, and then discusses the outcomes of the investigation.
    Lapeña Martí, R. (2020). Traceability Links Recovery among Requirements and BPMN models [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/149391

    Enabling Traceability in an MDE Approach to Improve Performance of GPU Applications

    Graphics Processor Units (GPUs) are known for offering high performance and power efficiency for algorithms that suit their massively parallel architecture well. Unfortunately, as parallel programming for this kind of architecture requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although approaches based on source-to-source and model-to-source transformations have aimed to provide a low learning curve for parallel programming and to take advantage of architecture features to create optimized applications, programming remains difficult for neophytes. A Model Driven Engineering (MDE) approach for GPUs intends to hide the low-level details of GPU programming by automatically generating the application from high-level specifications. However, the application designer should take into account some adjustments in the source code to achieve better performance at runtime. Directly modifying the generated source code goes against the MDE philosophy. Moreover, the designer does not necessarily have the required knowledge to effectively modify the generated GPU code. This work aims at improving performance by feeding back to the high-level models specific execution data from a profiling tool, enhanced by smart advice from an analysis engine. In order to keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored, aiming at performance in the re-generated code. Hence, this work allows us to keep coherence between model and code while still harnessing the power of GPUs. To illustrate and clarify key points of this approach, an experimental example taking place in a transformation chain from UML-MARTE models to OpenCL code is provided.

    Static Fault Localization in Model Transformations

    As the complexity of model transformations grows, there is an increasing need for methods, mechanisms, and tools for checking their correctness, i.e., the alignment between specifications and implementations. In this paper we present a lightweight, static approach for locating the faulty rules in model transformations, based on matching functions that automatically establish these alignments using the metamodel footprints, i.e., the metamodel elements used. The approach is implemented for the combination of Tracts and ATL, both residing in the Eclipse Modeling Framework, and is supported by the corresponding toolkit. An evaluation discussing the accuracy and the limitations of the approach is also provided. Furthermore, we identify the kinds of transformations most suitable for validation with the proposed approach and use mutation techniques to evaluate its effectiveness.
    Ministerio de Ciencia e Innovación TIN2011-23795; Austrian Research Promotion Agency (FFG) 832160; European Commission ICT Policy Support Programme 31785
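    The core idea of footprint-based matching can be sketched with a simple set-overlap measure: rules whose metamodel footprint overlaps most with the footprint of a failing constraint are the prime suspects. The Jaccard similarity and the rule/element names below are illustrative assumptions; the paper's actual matching functions may be defined differently:

```python
def jaccard(a, b):
    """Overlap between two footprints (sets of metamodel elements)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical footprints: metamodel elements each ATL rule touches.
rule_footprints = {
    "Member2Female": {"Member", "Female", "fullName"},
    "Member2Male":   {"Member", "Male", "fullName"},
}
# Footprint of a failing Tract constraint.
constraint_fp = {"Member", "Female"}

# Rank rules by footprint overlap with the failing constraint.
suspects = sorted(rule_footprints,
                  key=lambda r: jaccard(rule_footprints[r], constraint_fp),
                  reverse=True)
print(suspects[0])  # Member2Female
```

    Being purely static, such matching needs no test executions, which is the trade-off the paper evaluates against dynamic (spectrum-based) alternatives.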

    Linking Use Stage Life Cycle Inventories with Product Design Models of Usage

    Usage can have a significant environmental impact across the product life cycle. A precise evaluation and understanding of this impact during design is crucial to support early, well-adapted feedback to design experts and to encourage eco-design practice. Information on usage is spread across different fields of expertise and involves both explicit and tacit knowledge. This research aims at improving the link between the available usage information and the environmental expert's explicit knowledge of usage. A five-step method is proposed and exemplified in this paper to formalize this link. The paper finally raises the question of how to deal with conflicting usage information during design in industry.

    A Fault-Based Model of Fault Localization Techniques

    Every day, ordinary people depend on software working properly. We take it for granted: from banking software, to railroad switching software, to flight control software, to software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique used to ensure the quality of software is testing. Often it is the only quality assurance activity undertaken, making it that much more important. In a typical experiment studying fault localization techniques, a researcher will intentionally seed a fault (intentionally breaking the functionality of some source code) with the hope that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily; there is potential for bias in the selection of the faults. Previous researchers have established an ontology for understanding and expressing this bias, called fault size. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future. While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification. This is a worrisome situation because it can lead to incorrect conclusions about the significance of research.
    This research introduces an algorithm, MeansTest, which helps automate some aspects of selecting appropriate statistical techniques. The results of an evaluation of MeansTest suggest that it performs well relative to its peers. This research then surveys recent work in software testing, using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
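    The abstract does not give MeansTest's decision rules, so the sketch below does not reproduce them. It only illustrates the underlying statistical question (is the difference in mean performance between two techniques significant?) with a permutation test, which avoids the distributional assumptions whose unjustified use the abstract criticizes; the two score samples are invented for illustration:

```python
import random

def perm_test_means(xs, ys, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in means.

    Repeatedly reshuffles the pooled scores into two groups and counts how
    often the shuffled mean difference is at least as extreme as the
    observed one. No normality assumption is required.
    """
    rng = random.Random(seed)
    observed = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(xs)], pooled[len(xs):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical scores of two fault localization techniques on 8 faults each.
tech_a = [0.9, 0.8, 0.85, 0.95, 0.7, 0.9, 0.8, 0.88]
tech_b = [0.5, 0.6, 0.55, 0.4, 0.65, 0.5, 0.45, 0.6]
p = perm_test_means(tech_a, tech_b)
print(p < 0.05)  # True: the difference is unlikely under the null
```

    An algorithm like MeansTest would additionally decide, per data set, which of several such tests (parametric or not) is appropriate to apply.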

    Deriving safety cases for hierarchical structure in model-based development

    Model-based development and automated code generation are increasingly used for actual production code, in particular in mathematical and engineering domains. However, since code generators are typically not qualified, there is no guarantee that their output satisfies the system requirements, or is even safe. Here we present an approach to systematically derive safety cases that argue along the hierarchical structure in model-based development. The safety cases are constructed mechanically using a formal analysis, based on automated theorem proving, of the automatically generated code. The analysis recovers the model structure and component hierarchy from the code, providing independent assurance of both code and model. It identifies how the given system safety requirements are broken down into component requirements, and where they are ultimately established, thus establishing a hierarchy of requirements that is aligned with the hierarchical model structure. The derived safety cases reflect the results of the analysis, and provide a high-level argument that traces the requirements on the model via the inferred model structure to the code. We illustrate our approach on flight code generated from hierarchical Simulink models by Real-Time Workshop.

    Introducing autonomous aerial robots in industrial manufacturing

    Although ground robots have been successfully used for many years in manufacturing, the capability of aerial robots to navigate agilely in the often sparse and static upper part of factories makes them suitable for performing tasks of interest in many industrial sectors. This paper presents the design, development, and validation of a fully autonomous aerial robotic system for manufacturing industries. It includes modules for accurate pose estimation without using a Global Navigation Satellite System (GNSS), autonomous navigation, radio-based localization, and obstacle avoidance, among others, providing a fully onboard solution capable of autonomously performing complex tasks in dynamic indoor environments, with all necessary sensors, electronics, and processing on the robot. It was developed to fulfill two use cases relevant in many industries: light object logistics and missing tool search. The presented robotic system, functionalities, and use cases have been extensively validated at Technology Readiness Level 7 (TRL-7) in the Centro Bahía de Cádiz (CBC) Airbus D&S factory in fully working conditions.
    Comisión Europea 60884; Horizonte 2020 (Unión Europea) 871479; Plan Nacional de I+D+I DPI2017-8979-