5,695 research outputs found

    Configuration Analysis for Large Scale Feature Models: Towards Speculative-Based Solutions

    Get PDF
    High-variability systems are software systems in which variability management is a central activity. Current examples of high-variability systems are the Drupal web content management system, the Linux kernel, and the Debian Linux distributions. Configuration in high-variability systems is the selection of configuration options according to their configuration constraints and the user's requirements. Feature models are a de facto standard for modeling the common and variable functionality of high-variability systems. However, the large number of components and configurations that a feature model may contain makes the manual analysis of these models a costly and error-prone task. This is the origin of the automated analysis of feature models: computer-assisted mechanisms and tools for extracting information from these models. Traditional solutions for the automated analysis of feature models follow a sequential computing approach that uses a single central processing unit and memory. These solutions are adequate for small-scale systems, but they incur high computing costs when working with large-scale, high-variability systems. Although computing resources exist that can improve the performance of such solutions, every solution built on a sequential computing approach must be adapted to use these resources efficiently and to optimize its computational performance. Examples of these resources are multi-core technology for parallel computing and network technology for distributed computing. This thesis explores the adaptation and scalability of solutions for the automated analysis of large-scale feature models. First, we present the use of speculative programming to parallelize solutions. In addition, we look at a configuration problem from another perspective and solve it by adapting and applying a non-traditional solution. Finally, we validate the scalability and the computational performance improvements of these solutions for the automated analysis of large-scale feature models. Concretely, the main contributions of this thesis are:
    • Speculative programming for preferred minimal conflict detection. Minimal conflict detection algorithms determine the minimal set of conflicting constraints responsible for the defective behavior of the model under analysis. We propose a solution that uses speculative programming to run in parallel, and thus shorten, the computationally expensive operations that determine the control flow of preferred minimal conflict detection in large-scale feature models (see the sketch after this abstract).
    • Speculative programming for a preferred minimal diagnosis. Minimal diagnosis algorithms determine a minimal set of constraints that, by a suitable adaptation of their state, yield a consistent, conflict-free model. This work presents a solution for preferred minimal diagnosis in large-scale feature models through the speculative, parallel execution of the computationally expensive operations that determine the control flow, thereby reducing the solution's execution time.
    • Minimal, preferred completion of a model configuration by diagnosis. Existing solutions for completing a partial configuration determine a set of options, not necessarily minimal or preferred, that yields a complete configuration. This thesis solves the minimal, preferred completion of a model configuration using techniques previously applied in the context of feature-model diagnosis.
    This thesis verifies that all of our solutions preserve the expected output values, and that they also deliver performance improvements for the described operations in the automated analysis of large-scale feature models.
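    To make the speculative pattern above concrete, here is a minimal Python sketch, not the thesis's implementation: an expensive boolean consistency check and the two continuations it selects between are submitted concurrently, and the result of the losing branch is discarded. The solver.check and explore calls in the usage comment are hypothetical placeholders.

        from concurrent.futures import ThreadPoolExecutor

        def speculative_branch(check, on_true, on_false):
            """Evaluate an expensive boolean check and both continuations in
            parallel; keep the continuation that matches the check's outcome.
            A real implementation would cancel the losing branch instead of
            letting the pool wait for it on exit."""
            with ThreadPoolExecutor(max_workers=3) as pool:
                decision = pool.submit(check)        # the costly solver call
                true_path = pool.submit(on_true)     # speculate: check holds
                false_path = pool.submit(on_false)   # speculate: check fails
                return true_path.result() if decision.result() else false_path.result()

        # Hypothetical usage in a QuickXPlain-style search:
        # result = speculative_branch(
        #     check=lambda: solver.check(background + candidate),
        #     on_true=lambda: explore(left_half),
        #     on_false=lambda: explore(right_half),
        # )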

    A feature-based classification of model repair approaches

    Get PDF
    Consistency management, the ability to detect, diagnose and handle inconsistencies, is crucial during the development process in Model-Driven Engineering (MDE). As the popularity and application scenarios of MDE expanded, a variety of techniques were proposed to address these tasks in specific contexts. Of the various stages of consistency management, this work focuses on inconsistency handling in MDE, particularly on model repair techniques. This paper proposes a feature-based classification system for model repair techniques, based on a systematic literature review of the area. We expect this work to assist developers and researchers from different disciplines in comparing their work under a unifying framework, and to aid MDE practitioners in selecting suitable model repair approaches. Work financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF) through project "NORTE-01-0145-FEDER-000016"

    Training and Tuning Generative Neural Radiance Fields for Attribute-Conditional 3D-Aware Face Generation

    Full text link
    Generative Neural Radiance Fields (GNeRF) based 3D-aware GANs have demonstrated remarkable capabilities in generating high-quality images while maintaining strong 3D consistency. Notably, significant advancements have been made in the domain of face generation. However, most existing models prioritize view consistency over disentanglement, resulting in limited semantic/attribute control during generation. To address this limitation, we propose a conditional GNeRF model that incorporates specific attribute labels as input to enhance the controllability and disentanglement abilities of 3D-aware generative models. Our approach builds upon a pre-trained 3D-aware face model, and we introduce a Training as Init and Optimizing for Tuning (TRIOT) method that trains a conditional normalized flow module to enable facial attribute editing, then optimizes the latent vector to further improve attribute-editing precision. Our extensive experiments demonstrate that our model produces high-quality edits with superior view consistency while preserving non-target regions. Code is available at https://github.com/zhangqianhui/TT-GNeRF
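    As a rough, hedged illustration of the tuning stage, the sketch below performs gradient-based refinement of a latent code: an attribute loss pushes the rendered image toward the target label while a proximity term keeps the edit close to its initialization, preserving non-target content. generator and attr_classifier are assumed stand-ins for the pre-trained 3D-aware generator and an attribute predictor; the paper's actual interfaces and losses may differ.

        import torch

        def refine_latent(z_init, target_attr, generator, attr_classifier,
                          steps=100, lr=0.01, lambda_reg=1.0):
            """Refine a latent code toward a target attribute (sketch)."""
            z = z_init.clone().detach().requires_grad_(True)
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):
                img = generator(z)  # render an image from the latent code
                # push the predicted attribute toward the target label
                attr_loss = torch.nn.functional.binary_cross_entropy_with_logits(
                    attr_classifier(img), target_attr)
                # stay close to the initial edit to preserve non-target regions
                reg_loss = (z - z_init).pow(2).mean()
                loss = attr_loss + lambda_reg * reg_loss
                opt.zero_grad()
                loss.backward()
                opt.step()
            return z.detach()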

    Improving Consistency of UML Diagrams and Its Implementation Using Reverse Engineering Approach

    Get PDF
    Software development deals with various changes and evolution that cannot be avoided, since development processes are largely incremental and iterative. In Model-Driven Engineering, inconsistency between a model and its implementation has a huge impact on the software development process in terms of added cost, time and effort. The later inconsistencies are found, the more cost they add to the software project. Thus, this paper describes the development of a tool that improves the consistency between Unified Modeling Language (UML) design models and their C# implementation using a reverse engineering approach. A list of consistency rules is defined to check vertical and horizontal consistency between structural (class diagram) and behavioral (use case diagram and sequence diagram) UML diagrams and the implemented C# source code. The inconsistencies found between the UML diagrams and the source code are presented as textual descriptions and visualized in a tree view structure.
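    To make the notion of a consistency rule concrete, here is a minimal sketch of one vertical rule: every class declared in the UML class diagram must appear in the C# source, and vice versa. The regex scan over C# code is a deliberate simplification and an assumption, not the tool's actual technique; a real implementation would parse the source properly.

        import re

        def csharp_class_names(source: str) -> set:
            """Collect class names from C# source (naive regex scan)."""
            return set(re.findall(r'\bclass\s+([A-Za-z_]\w*)', source))

        def check_class_consistency(uml_classes: set, source: str) -> dict:
            """Report classes missing on either side of the model/code pair."""
            code_classes = csharp_class_names(source)
            return {
                "missing_in_code": sorted(uml_classes - code_classes),
                "missing_in_model": sorted(code_classes - uml_classes),
            }

        # Example:
        # check_class_consistency({"Order", "Invoice"},
        #                         "class Order { } class Customer { }")
        # -> {'missing_in_code': ['Invoice'], 'missing_in_model': ['Customer']}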

    A Focus+Context Approach to Alleviate Cognitive Challenges of Editing and Debugging UML Models

    Get PDF
    Model-Driven Engineering (MDE) has been proposed to increase the productivity of developing a software system. Despite its benefits, it has not been fully adopted in the software industry. Research has shown that modelling tools are amongst the top barriers to the adoption of MDE by industry. Recently, researchers have conducted empirical studies to identify the most severe cognitive difficulties modellers face when using UML model editors. Their analyses show that users’ most prominent challenges are remembering contextual information when performing a particular modelling task, and locating, understanding, and fixing errors in the models. To alleviate these difficulties, we propose two Focus+Context user interfaces that provide enhanced cognitive support and automation in the user’s interaction with a model editor. Moreover, we conducted two empirical studies to assess the effectiveness of our interfaces on human users. Our results reveal that our interfaces help users 1) improve their ability to successfully fulfil their tasks, 2) avoid unnecessary switches among diagrams, 3) produce more error-free models, 4) remember contextual information, and 5) reduce time on tasks. NSERC CREATE 465463-2015; NSERC Discovery Grant 15524

    No-Reference Light Field Image Quality Assessment Based on Micro-Lens Image

    Full text link
    Light field image quality assessment (LF-IQA) plays a significant role due to its guidance for Light Field (LF) content acquisition, processing and application. An LF can be represented as a 4-D signal, and its quality depends on both angular consistency and spatial quality. However, few existing LF-IQA methods address the effects caused by angular inconsistency; in particular, no-reference methods lack an effective utilization of 2-D angular information. In this paper, we focus on measuring the 2-D angular consistency for LF-IQA. The Micro-Lens Image (MLI) refers to the angular domain of the LF image, which simultaneously records the angular information in both the horizontal and vertical directions. Since the MLI contains 2-D angular information, we propose a No-Reference Light Field image Quality assessment model based on the MLI (LF-QMLI). Specifically, we first utilize the Global Entropy Distribution (GED) and the Uniform Local Binary Pattern descriptor (ULBP) to extract features from the MLI, and then pool them together to measure angular consistency. In addition, the information entropy of the Sub-Aperture Image (SAI) is adopted to measure spatial quality. Extensive experimental results show that LF-QMLI achieves state-of-the-art performance.
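    As a small illustration of the entropy-based spatial-quality cue, the sketch below computes the Shannon entropy of a grayscale sub-aperture image's intensity histogram; the 8-bit, 256-bin histogram is an assumption about preprocessing, not the paper's exact pipeline.

        import numpy as np

        def image_entropy(gray: np.ndarray) -> float:
            """Shannon entropy (in bits) of an 8-bit grayscale image."""
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            p = hist / hist.sum()      # normalize counts to probabilities
            p = p[p > 0]               # drop empty bins (0 * log 0 = 0)
            return float(-(p * np.log2(p)).sum())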