
    Role-Modeling in Round-Trip Engineering for Megamodels

    Software is an ever larger part of our daily life and eases tasks, e.g., in the areas of communication and infrastructure. Model-driven software development builds software by using and combining different models, which serve as the central artifacts of the development process; it spans the process from requirements analysis through design to implementation. This set of models, together with their interrelationships, forms a so-called megamodel. Because the models overlap, inconsistencies arise between them and must be resolved. Round-trip engineering is a mechanism for synchronizing models and is the foundation for ensuring consistency between them. Most current approaches in this area, however, rely on outdated batch-oriented transformation mechanisms that no longer meet the requirements of complex, long-living, and ever-changing software. In addition, creating megamodels is time-consuming and complex, and the resulting constructs are unmanageable for a single user. The aim of this thesis is to create a megamodel by means of easy-to-learn mechanisms and to keep it consistent, on the one hand by removing redundancy and on the other hand by managing consistency relationships incrementally. In addition, views must be created on parts of the megamodel to extract them across internal model boundaries. To achieve these goals, the role concept of Kühn (2014) is applied in the context of model-driven software development; it was developed in the Research Training Group 'Role-based Software Infrastructures for continuous-context-sensitive Systems.' One contribution of this work is a role-based single underlying model approach that enables the generation of views on heterogeneous models. In addition, an approach for synchronizing different models has been developed, which allows the role-based single underlying model approach to be extended with new models. Combining these two approaches yields a runtime-adaptive megamodel approach that can be used in model-driven software development. The resulting approaches are evaluated on an example from the literature that covers all areas of the work, and the model synchronization approach is additionally evaluated on the Transformation Tool Contest case from 2019.
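    The sketch below only illustrates the general role idea this abstract refers to: one underlying element plays several roles, and each view is a projection over the roles of one kind. The class names, views, and attributes are invented for illustration and are not the thesis' metamodel or Kühn's formal role concept.

```python
# Hypothetical sketch of the role idea behind a role-based single underlying
# model (SUM): one element is stored exactly once and plays several roles,
# and each view is generated by collecting roles of one kind.

class Element:
    """An element stored exactly once in the single underlying model."""
    def __init__(self, name):
        self.name = name
        self.roles = []

    def play(self, role):
        """Attach view-specific state without copying the element."""
        role.player = self
        self.roles.append(role)

class UmlClassRole:
    view = "uml"
    def __init__(self, stereotype):
        self.stereotype = stereotype

class DbTableRole:
    view = "database"
    def __init__(self, table_name):
        self.table_name = table_name

def project(elements, kind):
    """Generate a view across the SUM from the roles of one kind."""
    return [(e.name, {k: v for k, v in vars(r).items() if k != "player"})
            for e in elements for r in e.roles if r.view == kind]

sum_elements = [Element("Customer")]
sum_elements[0].play(UmlClassRole(stereotype="entity"))
sum_elements[0].play(DbTableRole(table_name="CUSTOMER"))

print(project(sum_elements, "uml"))       # view for the class-diagram model
print(project(sum_elements, "database"))  # view for the database schema model
```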

    Evolving feature model configurations in software product lines

    The increasing complexity and cost of software-intensive systems have led developers to seek ways of reusing software components across development projects. One approach to increasing software reusability is to develop a software product line (SPL), a software architecture that can be reconfigured and reused across projects. Rather than developing software from scratch for a new project, a new configuration of the SPL is produced. It is hard, however, to find a configuration of an SPL that meets an arbitrary requirement set and does not violate any configuration constraints of the SPL. Existing research has focused on techniques that produce a configuration of an SPL in a single step. Budgetary constraints or other restrictions, however, may require multi-step configuration processes. For example, an aircraft manufacturer may want to produce a series of configurations of a plane over a span of years without exceeding a yearly budget for adding features. This paper provides three contributions to the study of multi-step configuration for SPLs. First, we present a formal model of multi-step SPL configuration and map this model to constraint satisfaction problems (CSPs). Second, we show how solutions to these SPL configuration problems can be derived automatically with a constraint solver by mapping them to CSPs; moreover, we show how feature model changes can be handled in a multi-step scenario by means of feature model drift. Third, we present empirical results demonstrating that our CSP-based reasoning technique scales to SPL models with hundreds of features and multiple configuration steps. Funding: Ministerio de Economía y Competitividad TIN2012-32273; Junta de Andalucía TIC-590.
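    As a rough illustration of the kind of problem the CSP mapping addresses, the following sketch brute-forces a tiny multi-step configuration instead of calling a real constraint solver; the features, costs, budget, and constraints are invented and are not the paper's encoding.

```python
# Hypothetical miniature of multi-step SPL configuration as a constraint
# problem, solved by brute force. Feature names, costs, the per-step budget,
# and all constraints are invented for illustration.
from itertools import product

FEATURES = ["base", "autopilot", "radar"]
COST = {"base": 3, "autopilot": 4, "radar": 2}
BUDGET_PER_STEP = 5
STEPS = 3

def valid_config(cfg):
    # Toy feature-model constraints: base is mandatory, autopilot requires radar.
    return cfg["base"] and (not cfg["autopilot"] or cfg["radar"])

def solutions():
    configs = [dict(zip(FEATURES, bits))
               for bits in product([False, True], repeat=len(FEATURES))]
    for path in product(configs, repeat=STEPS):
        ok = all(valid_config(c) for c in path)
        # Features may only be added, never removed, between steps.
        ok &= all(all(not path[i][f] or path[i + 1][f] for f in FEATURES)
                  for i in range(STEPS - 1))
        # The cost of newly added features must stay within the yearly budget.
        prev = {f: False for f in FEATURES}
        for cfg in path:
            added = sum(COST[f] for f in FEATURES if cfg[f] and not prev[f])
            ok &= added <= BUDGET_PER_STEP
            prev = cfg
        # The final configuration must contain every feature.
        ok &= all(path[-1].values())
        if ok:
            yield path

for path in solutions():
    print([[f for f in FEATURES if cfg[f]] for cfg in path])
```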

    Supporting the grow-and-prune model for evolving software product lines

    Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has proven to offer great flexibility for evolving an SPL incrementally: products are first allowed to grow, and the product functionalities deemed useful are later pruned by refactoring and merging them back into the reusable SPL core-asset base. This thesis aims at supporting the grow-and-prune model with respect to initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e., to analyze how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. Existing tools, however, do not fulfill engineers' needs for this practice. To address this issue, this thesis elaborates on SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to support the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug fixes available in newer core-asset releases. Synchronizing both parties through sync paths is therefore required. State-of-the-art tools, however, are not tailored to SPL sync paths, which hinders synchronizing core assets and products. To address this issue, this thesis proposes to leverage existing version control systems (i.e., Git/GitHub) to provide sync operations as first-class constructs.
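    To make the notion of a sync path concrete, the following sketch replays a grow-and-prune cycle with plain Git commands driven from Python; the branch names, files, and commands are illustrative only, and the thesis' contribution is precisely the SPL-aware sync operations that plain Git lacks.

```python
# Hypothetical sketch of a grow-and-prune sync path on top of plain Git.
# Branch names, files, and commit messages are invented for illustration.
import os
import subprocess
import tempfile

def git(*args, repo):
    subprocess.run(["git", *args], cwd=repo, check=True)

def write(repo, name, text):
    with open(os.path.join(repo, name), "w") as f:
        f.write(text)

repo = tempfile.mkdtemp()
git("init", repo=repo)
git("config", "user.name", "demo", repo=repo)
git("config", "user.email", "demo@example.org", repo=repo)

git("checkout", "-b", "core", repo=repo)            # core-asset base
write(repo, "asset.txt", "shared asset v1\n")
git("add", ".", repo=repo)
git("commit", "-m", "core: asset v1", repo=repo)

git("checkout", "-b", "product-a", repo=repo)       # grow: product customizes
write(repo, "custom.txt", "product-specific customization\n")
git("add", ".", repo=repo)
git("commit", "-m", "product-a: customization", repo=repo)

git("checkout", "core", repo=repo)                  # core evolves in parallel
write(repo, "asset.txt", "shared asset v1 (bug fix)\n")
git("commit", "-am", "core: bug fix", repo=repo)

git("cherry-pick", "product-a", repo=repo)          # prune: port customization back
git("checkout", "product-a", repo=repo)             # sync path: upgrade the product
git("merge", "core", "-m", "sync: pull newer core-asset release", repo=repo)
```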

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient information management a necessity. Alas, most current information management suffers from several kinds of disconnectedness: applications partition data into segregated islands; small notes do not fit into traditional application categories; navigating the data is different for each kind of data; and data is either available on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these kinds of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means of organization such as tagging. The heterogeneity of data is taken into account by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model and are kept externally. Notes can be quickly captured and annotated with metadata. Generic means of organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse. All data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
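    As a rough illustration of the central-repository idea, the sketch below stores heterogeneous items in one RDF graph with rdflib and navigates them via a generic tag; the namespace, resources, and tag predicate are invented, and REMM/HYENA of course go far beyond this.

```python
# Hypothetical miniature of a central RDF repository with generic tagging,
# built with rdflib. The namespace, items, and predicates are invented.
from rdflib import Graph, Literal, Namespace, RDF

COIM = Namespace("http://example.org/coim/")
g = Graph()

# Heterogeneous items live in one repository instead of application islands.
note = COIM["note/42"]
wiki = COIM["wiki/ProjectPlan"]
g.add((note, RDF.type, COIM.Note))
g.add((note, COIM.text, Literal("call Alice about the budget")))
g.add((wiki, RDF.type, COIM.WikiPage))

# Generic organization: the same tagging predicate applies to every kind of data.
g.add((note, COIM.tag, Literal("project-x")))
g.add((wiki, COIM.tag, Literal("project-x")))

# Generic navigation: find everything tagged "project-x", regardless of kind.
for subject in g.subjects(COIM.tag, Literal("project-x")):
    print(subject)
```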

    Configuration Analysis for Large Scale Feature Models: Towards Speculative-Based Solutions

    Highly variable systems are software systems in which variability management is a central activity. Current examples of highly variable systems include the Drupal web content management system, the Linux kernel, and the Debian Linux distributions. Configuration in highly variable systems is the selection of configuration options according to their configuration constraints and the user requirements. Feature models are a de facto standard for modeling the common and variable functionalities of highly variable systems. However, the large number of components and configurations that a feature model may contain makes the manual analysis of these models a costly and error-prone task. This gave rise to the automated analysis of feature models, with computer-aided mechanisms and tools for extracting information from such models. Traditional solutions for the automated analysis of feature models follow a sequential computing approach, using a single central processing unit and memory. These solutions are adequate for small-scale systems, but they demand high computational costs when working with large-scale, highly variable systems. Although computing resources exist to improve the performance of such solutions, every solution based on sequential computing must be adapted to use these resources efficiently and to optimize its computational performance. Examples of such resources are multi-core technology for parallel computing and network technology for distributed computing. This thesis explores the adaptation and scalability of solutions for the automated analysis of large-scale feature models. First, we present the use of speculative programming to parallelize solutions. In addition, we look at a configuration problem from a different perspective in order to solve it by adapting and applying a non-traditional solution. We then validate the scalability and the computational performance improvements of these solutions for the automated analysis of large-scale feature models. Concretely, the main contributions of this thesis are:
    • Speculative programming for the detection of a minimal, preferred conflict. Minimal conflict detection algorithms determine the minimal set of conflicting constraints responsible for the defective behaviour of the model under analysis. We propose a solution that uses speculative programming to execute in parallel, and thereby reduce the running time of, the computationally expensive operations that determine the control flow of minimal, preferred conflict detection in large-scale feature models.
    • Speculative programming for a minimal, preferred diagnosis. Minimal diagnosis algorithms determine a minimal set of constraints that, when suitably adapted, yield a consistent, conflict-free model. This work presents a solution for minimal, preferred diagnosis in large-scale feature models based on the speculative, parallel execution of the computationally expensive operations that determine the control flow, thereby reducing the running time of the solution.
    • Minimal, preferred completion of a model configuration by diagnosis. Solutions for completing a partial configuration determine a set of options, not necessarily minimal or preferred, that yields a complete configuration. This thesis addresses the minimal, preferred completion of a model configuration by means of techniques previously used in the context of feature model diagnosis.
    The thesis shows that all our solutions preserve the expected output values and that they improve the performance of the automated analysis of feature models on large-scale models for the operations described.
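    As a rough illustration of the speculative idea described above, the sketch below starts the consistency check needed by the outcome of a costly decision before the decision itself has finished; the is_consistent placeholder and the constraint sets are invented and do not reflect the thesis' conflict-detection or diagnosis algorithms.

```python
# Hypothetical sketch of speculative parallelization: the check needed by the
# likely outcome of a costly decision is launched while the decision itself is
# still being computed. is_consistent and the constraint sets are placeholders.
from concurrent.futures import ThreadPoolExecutor
import time

def is_consistent(constraints):
    """Placeholder for an expensive consistency check (e.g., a solver call)."""
    time.sleep(0.1)
    return "conflict" not in constraints

def sequential(background, candidate):
    if is_consistent(background):                     # decision point
        return is_consistent(background | candidate)  # follow-up check
    return False

def speculative(background, candidate):
    with ThreadPoolExecutor() as pool:
        decision = pool.submit(is_consistent, background)
        branch = pool.submit(is_consistent, background | candidate)  # speculate
        return branch.result() if decision.result() else False

for run in (sequential, speculative):
    start = time.perf_counter()
    run({"base"}, {"new-feature"})
    print(f"{run.__name__:<11}: {time.perf_counter() - start:.2f} s")
```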

    A holistic approach to collaborative ontology development based on change management

    This paper describes our methodological and technological approach to collaborative ontology development in inter-organizational settings. It is based on formalizing the collaborative ontology development process by means of an explicit editorial workflow, which coordinates change proposals among ontology editors in a flexible manner. The approach is supported by new models, methods, and strategies for ontology change management in distributed environments: we propose a new form of ontology change representation, organized in layers so as to be as independent as possible from the underlying ontology languages, together with methods and strategies for manipulating, versioning, capturing, storing, and maintaining changes, some of which build on existing proposals in the state of the art. Moreover, we propose a set of change propagation strategies that keep distributed copies of the same ontology synchronized. Finally, we illustrate and evaluate our approach with a test case in the fishery domain from the United Nations Food and Agriculture Organization (FAO). The preliminary results obtained from our evaluation give a positive indication of the practical value and usability of the work presented here.
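    As a rough illustration of layered change representation and replay-based propagation, the sketch below applies a shared log of generic changes to distributed ontology copies; the change classes, triple-like axioms, and approval step are invented and are not the paper's change ontology or editorial workflow.

```python
# Hypothetical miniature of language-independent ontology changes propagated
# by replaying a shared change log on distributed copies.
from dataclasses import dataclass

@dataclass(frozen=True)
class AddAxiom:
    axiom: tuple          # generic layer, independent of the ontology language
    author: str

@dataclass(frozen=True)
class RemoveAxiom:
    axiom: tuple
    author: str

def apply_change(ontology, change):
    if isinstance(change, AddAxiom):
        ontology.add(change.axiom)
    elif isinstance(change, RemoveAxiom):
        ontology.discard(change.axiom)

def propagate(log, *copies):
    """Keep distributed copies synchronized by replaying approved changes."""
    for copy in copies:
        for change in log:
            apply_change(copy, change)

# Simplified editorial workflow: proposals enter the shared log once approved;
# here every proposal is approved.
proposals = [
    AddAxiom(("Tuna", "subClassOf", "Fish"), author="editor-1"),
    AddAxiom(("Cod", "subClassOf", "Fish"), author="editor-2"),
    RemoveAxiom(("Tuna", "subClassOf", "Mammal"), author="editor-1"),
]
approved_log = list(proposals)

copy_rome, copy_madrid = set(), set()
propagate(approved_log, copy_rome, copy_madrid)
print(copy_rome == copy_madrid)        # True: copies stay in sync
```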

    Evolution, testing and configuration of variability intensive systems

    One of the key characteristics of software is its ability to be adapted and configured for different scenarios. Recently, software variability has been studied as a first-class concept in different domains ranging from software product lines to pervasive systems. Variability is the ability of a software product to vary depending on different circumstances. Variability-intensive systems are those software products in which variability management is a core engineering activity. The varying parts of those systems are commonly modeled using different flavors of variability models, with feature modeling being one of the most common. Feature models were first introduced by Kang et al. in 1990 and are a compact representation of the set of configurations of a variability-intensive system. The large number of configurations that a feature model can encode makes the manual analysis of feature models an error-prone and costly task. Computer-aided mechanisms therefore appeared as a solution for extracting useful information from feature models. This process of extracting information from feature models is known as "Automated Analysis of Feature Models" and has been one of the main research areas of recent years, with more than thirty analysis operations proposed. Premio Extraordinario de Doctorado U
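    As a small illustration of such analysis operations, the sketch below brute-forces two classic ones (number of products and dead features) on an invented toy feature model; real analyses rely on SAT/CSP/BDD solvers rather than enumeration.

```python
# Hypothetical miniature of two automated-analysis operations on a toy feature
# model: counting the products it encodes and finding dead features. The model
# and its constraints are invented for illustration.
from itertools import product

FEATURES = ["mobile", "calls", "gps", "camera", "highres", "vga"]

def is_valid(c):
    return (
        c["mobile"] and c["calls"]                           # mandatory features
        and (not c["highres"] or c["camera"])                # highres requires camera
        and (not c["gps"] or c["camera"])                    # cross-tree: gps requires camera
        and (not c["vga"] or (c["highres"] and not c["camera"]))  # contradictory, hence dead
    )

configs = [dict(zip(FEATURES, bits))
           for bits in product([False, True], repeat=len(FEATURES))]
products = [c for c in configs if is_valid(c)]

print("number of products:", len(products))
dead = [f for f in FEATURES if not any(c[f] for c in products)]
print("dead features:", dead)
```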

    Modellgetriebene Testfallkonstruktion durch Domänenexperten im Kontext von Systemfamilien

    This work presents MTCC (Model-Driven Test Case Construction), an approach to the construction of acceptance tests by domain experts for testing system families based on feature models. MTCC is applied to the application domain of digital libraries. The basic hypothesis of this thesis is that the involvement of domain experts in the testing process for members of system families is possible on the basis of feature models, and that such a testing approach has a positive influence on the efficiency and effectiveness of testing. Application quality benefits from the involvement of domain experts because tests specified by domain experts reflect their needs and requirements and can therefore serve as an executable specification. One prerequisite for the inclusion of domain experts is tooling that supports the specification of automated tests without formal modeling or programming skills. In MTCC, models of automated acceptance tests are constructed with a graphical editor based on models that represent the test-relevant functionality of a system under test as feature models and finite state machines. Feature models for individual testable systems are derived from domain-level models for the system family. The use of feature models by the test reuse system of MTCC facilitates the systematic reuse of test models for the members of system families. MTCC is a model-driven test automation approach that aims at increasing the efficiency of test execution through automation while remaining independent of the implementation of the system under test and of the test harness in use. Because tests in MTCC are abstract models that represent the intent of the test independently of implementation specifics, MTCC employs a template-based code generation approach to generate executable test cases. (German abstract:) This work presents the Model-Driven Test Case Construction (MTCC) approach for the construction of reusable acceptance tests by domain experts in the context of system families. The hypothesis of this work is that the involvement of domain experts in an automated testing process for the systems of a system family is possible on the basis of a model-driven approach, and that such an approach increases the effectiveness and efficiency of testing. The quality of systems benefits from the involvement of domain experts because their requirements are reflected by the tests, which can thus serve as an executable specification. A prerequisite for involving domain experts in testing is tool support that allows automated tests to be specified without programming or formal modeling skills. In MTCC, abstract models of acceptance tests are created with a simple graphical editor, which is itself based on models that represent the test-relevant functionality of the systems under test by means of feature models and finite state machines. Feature models represent the individual systems of the system family under test and are derived from domain models at the level of the system family. MTCC is a model-driven test automation approach that serves to increase the effectiveness of test execution while remaining decoupled from the specifics of the implementation of the system under test and of the test execution software in use. Since tests in MTCC are abstract models that express the semantics of a test independently of implementation knowledge, MTCC uses a template-based procedure to generate executable test scripts.
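    As a rough illustration of template-based test-script generation, the sketch below renders an abstract test model (selected system plus a step sequence) into an executable, pytest-style script; the model fields, the template, and the generated target are invented and are not MTCC's actual models or templates.

```python
# Hypothetical sketch of template-based generation of an executable test script
# from an abstract test model. The connect/run harness calls are invented.
from string import Template

TEST_TEMPLATE = Template('''\
def test_${name}():
    system = connect("${system}")          # hypothetical test-harness call
${steps}
''')

def generate(test_model):
    steps = "\n".join(
        f'    assert system.run("{action}", "{value}") == "{expected}"'
        for action, value, expected in test_model["steps"]
    )
    return TEST_TEMPLATE.substitute(
        name=test_model["name"], system=test_model["system"], steps=steps
    )

abstract_test = {                      # abstract test model built in the editor
    "name": "simple_search",
    "system": "digital-library-a",     # one member of the system family
    "steps": [
        ("enter_query", "megamodel", "ok"),
        ("result_count_at_least", "1", "ok"),
    ],
}
print(generate(abstract_test))
```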