
    Debian Packages Repositories as Software Product Line Models. Towards Automated Analysis

    The automated analysis of variability models in general, and feature models in particular, is a thriving research topic. There have been numerous contributions in this area over the last twenty years, including both research papers and tools. However, the lack of realistic variability models with which to evaluate those techniques and tools is recognized as a major problem by the community. To address this issue, we looked for large-scale variability models in the open source community. We found that the Debian package dependency language can be interpreted as a software product line variability model, and that those models can be automatically analysed in the style of software product line variability models. In this paper, we take a first step towards the automated analysis of the Debian package dependency language. We provide a mapping from these models to propositional formulas, and we show how this could allow us to perform analysis operations on the repositories, such as the detection of anomalies (e.g. packages that cannot be installed).
    Funding: CICYT TIN2009-07366; Junta de Andalucía TIC-253.
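
    The paper's core idea is a translation from package dependency stanzas to propositional logic. The following Python sketch illustrates that kind of mapping on an invented toy repository; the package names, the stanza encoding and the brute-force satisfiability check are all illustrative stand-ins (the paper defines the mapping over real Debian metadata, and a SAT solver would do the checking).

    ```python
    from itertools import product

    # Toy repository: each package lists dependency alternatives and
    # conflicts. Names and stanzas are invented for the example; they
    # are not real Debian metadata. "broken" is a deliberate anomaly.
    REPO = {
        "editor":   {"depends": [["gui-lib"]], "conflicts": []},
        "gui-lib":  {"depends": [["render-a", "render-b"]], "conflicts": []},
        "render-a": {"depends": [], "conflicts": ["render-b"]},
        "render-b": {"depends": [], "conflicts": []},
        "broken":   {"depends": [["render-a"]], "conflicts": ["render-a"]},
    }

    def to_clauses(repo):
        """Map Depends/Conflicts to propositional clauses (CNF):
        'p Depends: q | r'  becomes  (!p or q or r)
        'p Conflicts: q'    becomes  (!p or !q)"""
        clauses = []
        for p, meta in repo.items():
            for alternatives in meta["depends"]:
                clauses.append([("-", p)] + [("+", q) for q in alternatives])
            for q in meta["conflicts"]:
                clauses.append([("-", p), ("-", q)])
        return clauses

    def installable(pkg, repo):
        """Anomaly check: is there any assignment that installs pkg and
        satisfies every clause? (A SAT solver would replace this loop.)"""
        names = list(repo)
        for bits in product([False, True], repeat=len(names)):
            value = dict(zip(names, bits))
            if value[pkg] and all(
                any(value[n] if sign == "+" else not value[n]
                    for sign, n in clause)
                for clause in to_clauses(repo)
            ):
                return True
        return False

    for p in REPO:
        print(p, "->", "installable" if installable(p, REPO) else "anomaly")
    ```

    Running the sketch flags "broken" as an anomaly (it cannot be installed because it conflicts with its own dependency), which is exactly the kind of repository-level analysis operation the abstract describes.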

    Configuration Analysis for Large Scale Feature Models: Towards Speculative-Based Solutions

    High-variability systems are software systems in which variability management is a central activity. Current examples of high-variability systems are the Drupal web content management system, the Linux kernel, and the Debian Linux distributions. Configuration in high-variability systems is the selection of configuration options according to their configuration constraints and the user requirements. Feature models are a de facto standard for modelling the common and variable functionality of high-variability systems. However, the large number of components and configurations that a feature model may contain makes the manual analysis of these models a costly and error-prone task. This has given rise to the automated analysis of feature models: computer-assisted mechanisms and tools for extracting information from these models. Traditional solutions for the automated analysis of feature models follow a sequential computing approach, using a single central processing unit and memory. These solutions are adequate for small-scale systems; however, they incur high computational costs when working with large-scale, high-variability systems. Although computing resources exist that can improve the performance of such solutions, any solution based on sequential computing must be adapted to use these resources efficiently and to optimize its computational performance. Examples of these resources are multi-core technology for parallel computing and network technology for distributed computing. This thesis explores the adaptation and scalability of solutions for the automated analysis of large-scale feature models. First, we present the use of speculative programming to parallelize such solutions. We also approach a configuration problem from a different perspective, solving it by adapting and applying a non-traditional solution. We then validate the scalability and computational performance improvements of these solutions for the automated analysis of large-scale feature models. Concretely, the main contributions of this thesis are:
    • Speculative programming for the detection of a minimal, preferred conflict. Minimal conflict detection algorithms determine the minimal set of conflicting constraints responsible for the defective behaviour of the model under analysis. We propose a solution that uses speculative programming to execute in parallel the computationally expensive operations that determine the flow of control in the detection of a minimal, preferred conflict in large-scale feature models, thereby reducing execution time.
    • Speculative programming for a minimal, preferred diagnosis. Minimal diagnosis algorithms determine a minimal set of constraints which, by a suitable adaptation of their state, yield a consistent, conflict-free model. This work presents a solution for minimal, preferred diagnosis in large-scale feature models through the speculative, parallel execution of the computationally expensive operations that determine the flow of control, thereby reducing the execution time of the solution.
    • Minimal, preferred completion of a model configuration by diagnosis. Solutions for completing a partial configuration determine a set of options, not necessarily minimal or preferred, with which to obtain a complete configuration. This thesis addresses the minimal, preferred completion of a model configuration using techniques previously applied in the context of feature model diagnosis.
    This thesis validates that all our solutions preserve the expected outputs, and that they deliver performance improvements in the automated analysis of large-scale feature models for the operations described.
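
    As a flavour of the speculative idea behind the first two contributions, the sketch below parallelizes a simplified deletion-based minimal-conflict algorithm: while the consistency check that decides step i runs, the checks that both of its outcomes would need at step i+1 are launched speculatively. The consistency check, its cost model and the constraint encoding are all invented for the example; the thesis targets preferred conflicts and diagnoses over real feature models.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    import time

    def consistent(constraints):
        """Stand-in for the expensive solver call (SAT/CSP) that dominates
        running time in minimal-conflict algorithms. The sleep models its
        cost; the toy semantics treat {"x", "not x"} pairs as conflicts."""
        time.sleep(0.05)
        return not any(("not " + c) in constraints for c in constraints)

    def min_conflict(cs):
        """Sequential baseline (deletion-based): drop each constraint
        unless the remainder becomes consistent without it."""
        core = list(cs)
        for c in list(core):
            rest = [x for x in core if x != c]
            if not consistent(rest):
                core = rest
        return core

    def min_conflict_speculative(cs):
        """Same algorithm, but while the check that decides step i runs,
        the checks that BOTH of its outcomes would need at step i+1 are
        launched speculatively; one result is discarded (wasted CPU
        traded for wall-clock time)."""
        order, core = list(cs), list(cs)
        with ThreadPoolExecutor() as pool:
            futures = {}

            def check(state):
                key = frozenset(state)
                if key not in futures:
                    futures[key] = pool.submit(consistent, list(state))
                return futures[key]

            for i, c in enumerate(order):
                if c not in core:
                    continue
                rest = [x for x in core if x != c]
                fut = check(rest)
                if i + 1 < len(order):        # speculate on step i+1
                    nxt = order[i + 1]
                    check([x for x in core if x != nxt])
                    check([x for x in rest if x != nxt])
                if not fut.result():
                    core = rest
        return core

    cs = ["a", "not a", "b", "c", "not c"]
    print(min_conflict(cs), min_conflict_speculative(cs))
    ```

    Both variants return the same minimal conflict (here ["c", "not c"]); the speculative one hides part of the latency of the checks behind the one currently deciding the control flow, which is the effect the thesis exploits at scale.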

    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances in, and laying the foundations of, the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six variability facets in which the AAFM is being applied and which define the current tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published, and which authors and institutions are contributing to the field. We observed that the maturity of the field is shown by the increase in the number of journal publications over the years, as well as by the diversity of conferences and workshops at which papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future.
    Funding: Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186.

    Automated generation of computationally hard feature models using evolutionary algorithms

    This is the post-print version of the final paper published in Expert Systems with Applications. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2014 Elsevier B.V.
    A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we model the problem of finding computationally hard feature models as an optimization problem and solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size that maximize aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.
    Funding: European Commission (FEDER), the Spanish Government and the Andalusian Government.
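
    As a rough illustration of the idea, the skeleton below evolves constraint sets of a fixed size and uses the measured running time of a stand-in analysis as fitness, keeping the slowest models. It is a minimal sketch only: ETHOM's actual encoding (feature tree plus cross-tree constraints), genetic operators and evaluation setup are richer than shown, and every name here is invented.

    ```python
    import random, time

    N_FEATURES = 12   # fixed model size, as in the predefined-size setting

    def random_constraint():
        """One cross-tree constraint over the feature set."""
        return (random.randrange(N_FEATURES), random.randrange(N_FEATURES),
                random.choice(["requires", "excludes"]))

    def random_model():
        """An individual: here just a list of cross-tree constraints."""
        return [random_constraint() for _ in range(15)]

    def analysis_cost(model):
        """Fitness: measured wall-clock time of the analysis under test
        (noisy, but that is the quantity being maximized). The stand-in
        analysis brute-force checks every configuration."""
        start = time.perf_counter()
        for conf in range(2 ** N_FEATURES):
            sel = [(conf >> i) & 1 for i in range(N_FEATURES)]
            all((not sel[a] or sel[b]) if kind == "requires"
                else not (sel[a] and sel[b])
                for a, b, kind in model)
        return time.perf_counter() - start

    def crossover(m1, m2):
        cut = random.randrange(1, len(m1))
        return m1[:cut] + m2[cut:]

    def mutate(model):
        m = list(model)
        m[random.randrange(len(m))] = random_constraint()
        return m

    population = [random_model() for _ in range(10)]
    for _ in range(5):
        population.sort(key=analysis_cost, reverse=True)  # slowest first
        parents = population[:5]
        population = parents + [mutate(crossover(*random.sample(parents, 2)))
                                for _ in range(5)]
    population.sort(key=analysis_cost, reverse=True)
    print("hardest model found:", population[0])
    ```

    The design point the abstract makes is visible even in this toy: fitness is defined by observing the tool under test, so the search adapts to whatever that tool finds hard, which random generation cannot do.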

    ETHOM: An Evolutionary Algorithm for Optimized Feature Models Generation - TECHNICAL REPORT ISA-2012-TR-01 (v. 1.1)

    A feature model defines the valid combinations of features in a domain. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. The progress of this discipline is leading to an increasing concern to test and compare the performance of analysis solutions using tough input models that show the behaviour of the tools in extreme situations (e.g. those producing the longest execution times or highest memory consumption). Currently, these feature models are generated randomly, ignoring the internal aspects of the tools under test. As a result, they only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses. In this technical report, we model the problem of finding computationally hard feature models as an optimization problem and solve it using a novel evolutionary algorithm. Given a tool and an analysis operation, our algorithm generates input models of a predefined size that maximize aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the behaviour of tools in pessimistic cases, providing a better idea of their real power. Experiments using our evolutionary algorithm on a number of analysis operations and tools have successfully identified input models causing much longer execution times and higher memory consumption than random models of identical or even larger size. Our solution is generic and applicable to a variety of optimization problems on feature models, not only those involving analysis operations. In view of the positive results, we expect this work to be the seed for a new wave of research contributions exploiting the benefits of evolutionary programming in the field of feature modelling.

    Automated metamorphic testing on the analyses of feature models

    Copyright © 2010 Elsevier B.V. All rights reserved.
    Context: A feature model (FM) represents the valid combinations of features in a domain. The automated extraction of information from FMs is a complex task that involves numerous analysis operations, techniques and tools. Current testing methods in this context are manual and rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone and in most cases infeasible due to the combinatorial complexity of the analyses; this is known as the oracle problem.
    Objective: In this paper, we propose using metamorphic testing to automate the generation of test data for feature model analysis tools, overcoming the oracle problem. An automated test data generator is presented and evaluated to show the feasibility of our approach.
    Method: We present a set of relations (so-called metamorphic relations) between input FMs and the sets of products they represent. Based on these relations, and given an FM and its known set of products, a set of neighbouring FMs together with their corresponding sets of products are automatically generated and used for testing multiple analyses. Complex FMs representing millions of products can be efficiently created by applying this process iteratively.
    Results: Our evaluation results using mutation testing and real faults reveal that most faults can be automatically detected within a few seconds. Two defects were found in FaMa and another two in SPLOT, two real tools for the automated analysis of feature models. We also show how our generator outperforms a related manual suite for the automated analysis of feature models, and how this suite can be used to guide the automated generation of test cases, obtaining important gains in efficiency.
    Conclusion: Our results show that the application of metamorphic testing in the domain of automated analysis of feature models is efficient and effective, detecting most faults in a few seconds without the need for a human oracle.
    This work has been partially supported by the European Commission (FEDER) and the Spanish Government under CICYT project SETI (TIN2009-07366), and by the Andalusian Government under project ISABEL (TIC-2533).
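
    One of the metamorphic relations in this line of work can be sketched directly: adding an optional feature below a parent enlarges the product set in a way that is predictable from the seed model's products, so the relation itself acts as the oracle. The Python sketch below is illustrative only; the product-set representation and the "tool under test" are stand-ins, not FaMa's or SPLOT's interfaces.

    ```python
    # Metamorphic relation (sketch): attaching an optional feature `feat`
    # below `parent` yields the original products plus, for each product
    # containing `parent`, a copy that also contains `feat`.

    def add_optional(products, parent, feat):
        """Follow-up product set for the 'add optional feature' relation."""
        extra = {p | {feat} for p in products if parent in p}
        return products | extra

    def tool_count(products):
        """Stand-in for the analysis under test (here: number of products).
        A buggy tool would disagree with the expectation computed below."""
        return len(products)

    seed = {frozenset({"root"}), frozenset({"root", "gui"})}
    follow = add_optional(seed, "root", "logging")

    # Metamorphic oracle: the relation predicts the follow-up output from
    # the seed output, so no human judgement is needed.
    expected = tool_count(seed) + sum(1 for p in seed if "root" in p)
    assert tool_count(follow) == expected, "fault detected in tool under test"
    print(tool_count(seed), "->", tool_count(follow), "products")
    ```

    Applying such relations iteratively is what lets the generator grow neighbouring models with known product sets into the complex FMs, representing millions of products, that the abstract mentions.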

    ETHOM: An Evolutionary Algorithm for Optimized Feature Models Generation (v. 1.2): Technical Report ISA-2012-TR-05

    A feature model defines the valid combinations of features in a domain. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. The progress of this discipline is leading to an increasing concern to test and compare the performance of analysis solutions using tough input models that show the behaviour of the tools in extreme situations (e.g. those producing the longest execution times or highest memory consumption). Currently, these feature models are generated randomly, ignoring the internal aspects of the tools under test. As a result, they only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses. In this technical report, we model the problem of finding computationally hard feature models as an optimization problem and solve it using a novel evolutionary algorithm. Given a tool and an analysis operation, our algorithm generates input models of a predefined size that maximize aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the behaviour of tools in pessimistic cases, providing a better idea of their real power. Experiments using our evolutionary algorithm on a number of analysis operations and tools have successfully identified input models causing much longer execution times and higher memory consumption than random models of identical or even larger size. Our solution is generic and applicable to a variety of optimization problems on feature models, not only those involving analysis operations. In view of the positive results, we expect this work to be the seed for a new wave of research contributions exploiting the benefits of evolutionary programming in the field of feature modelling.

    User-centric Adaptation Analysis of Multi-tenant Services

    Multi-tenancy is a key pillar of cloud services. It allows different users to share computing and virtual resources transparently, while guaranteeing substantial cost savings. Due to the trade-off between scalability and customization, one of the major drawbacks of multi-tenancy is limited configurability. Since users may often have conflicting configuration preferences, offering the best user experience is an open challenge for service providers. In addition, the users, their preferences, and the operational environment may change during service operation, thus jeopardizing the satisfaction of user preferences. In this article, we present an approach to support user-centric adaptation of multi-tenant services. We describe how to engineer the activities of the Monitoring, Analysis, Planning, Execution (MAPE) loop to support user-centric adaptation, and we focus on adaptation analysis. Our analysis computes a service configuration that optimizes user satisfaction, complies with infrastructural constraints, and minimizes reconfiguration obtrusiveness when user- or service-related changes take place. To support our analysis, we model multi-tenant services and user preferences using feature models and preference models, respectively. We illustrate our approach with different cases of virtual desktops. Our results demonstrate the effectiveness of the analysis in improving the satisfaction of user preferences in negligible time.
    Funding: Ministerio de Economía y Competitividad TIN2012-32273; Junta de Andalucía P12-TIC-1867; Junta de Andalucía TIC-590.
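
    A minimal sketch of the adaptation-analysis step described above: choose the configuration that maximizes weighted preference satisfaction, respects an infrastructural constraint, and penalizes toggling features of the running configuration (obtrusiveness). All features, weights, costs and the capacity limit are invented for the example; the article's actual analysis works over feature and preference models rather than this brute-force enumeration.

    ```python
    from itertools import combinations

    FEATURES = ["ssd", "gpu", "backup", "sso"]
    COST = {"ssd": 2, "gpu": 4, "backup": 1, "sso": 1}
    CAPACITY = 6                     # infrastructural constraint
    PREFS = [("ssd", 3), ("gpu", 2), ("backup", 2), ("sso", 1)]
    CURRENT = {"ssd", "backup"}      # configuration running right now
    SWITCH_PENALTY = 0.5             # obtrusiveness of toggling one feature

    def feasible(conf):
        """Infrastructure check: total resource cost within capacity."""
        return sum(COST[f] for f in conf) <= CAPACITY

    def score(conf):
        """User satisfaction minus a penalty for each feature toggled
        with respect to the running configuration."""
        satisfaction = sum(w for f, w in PREFS if f in conf)
        churn = len(conf ^ CURRENT)  # symmetric difference = toggles
        return satisfaction - SWITCH_PENALTY * churn

    # Enumerate every feature subset and pick the best feasible one.
    best = max(
        (set(c) for r in range(len(FEATURES) + 1)
         for c in combinations(FEATURES, r)),
        key=lambda c: score(c) if feasible(c) else float("-inf"),
    )
    print("chosen configuration:", sorted(best))
    ```

    With these toy numbers the analysis keeps the running features and adds the cheap "sso" option rather than the highly requested but costly "gpu", showing how the three objectives (satisfaction, constraints, obtrusiveness) trade off against each other.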

    A modular metamodel and refactoring rules to achieve software product line interoperability

    Emergent application domains, such as cyber-physical systems, edge computing or Industry 4.0, present high variability in software and hardware infrastructures. However, no single variability modeling language supports all the language extensions required by these application domains (i.e., attributes, group cardinalities, clonables, complex constraints). This limitation is an open challenge that should be tackled by the software engineering field, and specifically by the software product line (SPL) community. A possible solution could be to define a completely new language, but this has a high cost in terms of adoption time and the development of new tools. A more viable alternative is the definition of refactoring and specialization rules that allow interoperability between existing variability languages. However, with this approach, the rules cannot be reused across languages, because each language uses a different set of modeling concepts and a different concrete syntax. Our approach relies on a modular and extensible metamodel that defines a common abstract syntax for existing variability modeling extensions. We map existing feature modeling languages in the SPL community to our common abstract syntax. Using our abstract syntax, we define refactoring rules at the language-construct level that help to achieve interoperability between variability modeling languages.
    Funding: Projects MEDEA RTI2018-099213-B-I00, IRIS PID2021-122812OB-I00 (co-financed by FEDER funds), Rhea P18-FR-1081 (MCI/AEI/FEDER, UE), LEIA UMA18-FEDERIA-157, and DAEMON H2020-101017109. Funding for open access: Universidad de Málaga / CBUA.
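
    As a flavour of what a construct-level refactoring rule can look like, the sketch below rewrites a group cardinality into the basic group types that simpler feature modelling languages support. The miniature AST and the rule itself are invented for illustration; they are not the paper's metamodel or rule catalogue.

    ```python
    def refactor_group(node):
        """Construct-level rule: rewrite a cardinality group into a basic
        group type when an equivalent one exists; otherwise keep it."""
        if node.get("kind") == "group":
            low, high = node["card"]
            n = len(node["children"])
            if (low, high) == (1, 1):      # <1..1>  ->  alternative (xor)
                node = {k: v for k, v in node.items() if k != "card"}
                node["kind"] = "alternative"
            elif (low, high) == (1, n):    # <1..n>  ->  or-group
                node = {k: v for k, v in node.items() if k != "card"}
                node["kind"] = "or"
        return node

    group = {"kind": "group", "card": (1, 1), "children": ["x86", "arm"]}
    print(refactor_group(group))
    # -> {'kind': 'alternative', 'children': ['x86', 'arm']}
    ```

    Defining such rules once, over a common abstract syntax, is what lets them be reused across languages instead of being rewritten for each concrete syntax.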