
    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied AAFM techniques in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six variability facets where the AAFM is being applied that define the current tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published and which authors and institutions are contributing to the field. We observed that the maturity of the field is shown by the increase in the number of journal publications over the years as well as by the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future. Funding: Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186.
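    To make the kind of analysis referred to as AAFM concrete, the following minimal sketch (all feature names, constraints, and the enumeration strategy are invented for illustration) encodes a toy feature model as propositional constraints and enumerates its valid configurations to count products and detect dead features, two of the classic automated analysis operations. Real AAFM tooling delegates this reasoning to SAT, BDD, or CSP solvers rather than exhaustive enumeration.

```python
from itertools import product

# Hypothetical toy feature model: mandatory root "App", optional "GPS",
# an XOR group {"Basic", "HD"} for the screen, and a cross-tree
# constraint "GPS requires HD". Names are illustrative only.
FEATURES = ["App", "GPS", "Basic", "HD"]

def is_valid(cfg):
    """Check a feature selection against the model's constraints."""
    sel = set(cfg)
    if "App" not in sel:                      # root is mandatory
        return False
    if len(sel & {"Basic", "HD"}) != 1:       # XOR group: exactly one
        return False
    if "GPS" in sel and "HD" not in sel:      # cross-tree: GPS requires HD
        return False
    return True

# Enumerate all 2^n feature selections (fine for toy models; real AAFM
# tools use SAT/#SAT, BDD, or CSP solvers instead).
configs = [
    [f for f, on in zip(FEATURES, bits) if on]
    for bits in product([False, True], repeat=len(FEATURES))
]
valid = [c for c in configs if is_valid(c)]

print("number of products:", len(valid))
# A "dead" feature appears in no valid configuration.
dead = [f for f in FEATURES if all(f not in c for c in valid)]
print("dead features:", dead)
```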

    Software diversity: state of the art and perspectives

    Diversity is prevalent in modern software systems to facilitate adapting the software to customer requirements or the execution environment. Diversity has an impact on all phases of the software development process. Appropriate means and organizational structures are required to deal with the additional complexity introduced by software variability. This introductory article to the special section "Software Diversity--Modeling, Analysis and Evolution" provides an overview of the current state of the art in diverse systems development and discusses challenges and potential solutions. The article covers requirements analysis, design, implementation, verification and validation, maintenance and evolution, as well as organizational aspects. It also provides an overview of the articles that are part of this special section and addresses particular issues of diverse systems development.

    Incremental Reconfiguration of Product Specific Use Case Models for Evolving Configuration Decisions

    [Context and motivation] Product Line Engineering (PLE) is increasingly common practice in industry to develop complex systems for multiple customers with varying needs. In many business contexts, use cases are central development artifacts for requirements engineering and system testing. In such contexts, use case configurators can play a significant role in capturing variable and common requirements in Product Line (PL) use case models and in generating Product Specific (PS) use case models for each new customer in a product family. [Question/Problem] Although considerable research has been devoted to use case configurators, little attention has been paid to supporting the incremental reconfiguration of use case models with evolving configuration decisions. [Principal ideas/results] We propose, apply, and assess an incremental reconfiguration approach to support evolving configuration decisions in PL use case models. PS use case models are incrementally reconfigured by focusing only on the changed decisions and their side effects. In our prior work, we proposed and applied the Product line Use case modeling Method (PUM) to support variability modeling in PL use case diagrams and specifications. We also developed a use case configurator, PUMConf, which interactively collects configuration decisions from analysts to generate PS use case models from PL models. Our approach is built on top of PUM and PUMConf. [Contributions] We provide fully automated tool support for incremental configuration as an extension of PUMConf. Our approach has been evaluated in an industrial case study in the automotive domain, which provided evidence that it is practical and beneficial.
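    The incremental idea behind the approach can be pictured independently of PUM and PUMConf with the following hypothetical sketch: only decisions whose value actually changed (and the product-specific elements depending on them) are re-evaluated, instead of regenerating the whole PS model. The decision and use case names below are invented and stand in for the PL use case models handled by the real tool.

```python
# Hypothetical decision model: each configuration decision selects or
# excludes a variation point; each variation point contributes some
# product-specific (PS) use case elements. Names are illustrative.
PL_ELEMENTS = {
    "ProvideSystemUserData": ["UC1: Identify System Operator"],
    "StoreErrorStatus":      ["UC2: Store Error in NVM"],
    "ClearErrorStatus":      ["UC3: Clear Error via Diagnostics"],
}

def reconfigure_incrementally(ps_model, old_decisions, new_decisions):
    """Update the PS model in place, touching only changed decisions."""
    changed = {d for d in set(old_decisions) | set(new_decisions)
               if old_decisions.get(d) != new_decisions.get(d)}
    for decision in changed:
        for element in PL_ELEMENTS.get(decision, []):
            if new_decisions.get(decision):      # now selected: add
                ps_model.add(element)
            else:                                # now excluded: remove
                ps_model.discard(element)
    return changed

ps_model = {"UC1: Identify System Operator", "UC2: Store Error in NVM"}
old = {"ProvideSystemUserData": True, "StoreErrorStatus": True,
       "ClearErrorStatus": False}
new = {"ProvideSystemUserData": True, "StoreErrorStatus": False,
       "ClearErrorStatus": True}

changed = reconfigure_incrementally(ps_model, old, new)
print("re-evaluated decisions:", changed)   # only 2 of 3 decisions
print("updated PS model:", sorted(ps_model))
```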

    Consistent View-Based Management of Variability in Space and Time

    Systems evolve rapidly and exist in different variations in order to satisfy diverse and changing requirements. This leads to successive revisions (variability in time) and to concurrently existing product variants (variability in space). Redundancies and dependencies between different products across multiple revisions, as well as heterogeneous types of artifacts, quickly lead to inconsistencies during the evolution of a variable system. Mastering this complexity and managing both variability dimensions in a uniform and consistent way are essential challenges for successfully developing large and long-lived systems. Variability in space is primarily considered in software product line engineering, whereas variability in time is studied in software configuration management. Consistency preservation between heterogeneous artifact types and view-based software development are central research topics in model-driven software engineering. The isolation of these three neighboring disciplines has led to a multitude of approaches and tools from the different areas, which complicates the definition of a common understanding and bears the risk of redundant research and development. Tools from the different disciplines are often insufficiently integrated, which results in a heterogeneous tool landscape and high manual effort during the evolution of a variable system, which in turn harms system quality and leads to higher maintenance costs. Based on the current state of research in the aforementioned disciplines, this dissertation presents three core contributions to support managing the complexity during the evolution of variable systems. The unified conceptual model documents and unifies concepts and relations for the simultaneous handling of variability in space and time, based on a variety of selected approaches and tools from software product line engineering and software configuration management. Beyond the mere combination of existing concepts, the unified conceptual model describes new ways of relating the two variability dimensions to each other. The unified operations use the unified conceptual model as their data structure and form the basis for the operational management of variability in space and time. The unified operations were designed based on an analysis of diverse approaches that follow different modalities and paradigms. While the unified operations cover the functionality of the analyzed tools, they enable the simultaneous handling of both variability dimensions. The unified approach builds on the preceding contributions and extends them with consistency preservation. To this end, types of variability-specific inconsistencies that can occur during the evolution of variable heterogeneous systems were identified. The unified approach enables automated consistency preservation for a selected subset of the identified inconsistency types. Each core contribution was evaluated empirically. To evaluate the unified conceptual model and the unified operations, expert interviews were conducted, metrics for assessing the appropriateness of a unification were defined and applied, and exemplary applications were demonstrated. The functional suitability of the unified approach was evaluated by means of two real-world case studies: the widely used ArgoUML-SPL, which is based on the UML modeling tool ArgoUML, and MobileMedia, a mobile application for media management. The unified approach is implemented using the Eclipse Modeling Framework (EMF) and the Vitruvius approach. The core contributions of this thesis extend the existing knowledge on the uniform management of variability in space and time and combine it with automated consistency preservation for variable systems consisting of heterogeneous artifact types.
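    As a rough, hypothetical illustration of what combining both variability dimensions in one data structure can look like, the sketch below models revisions per artifact (variability in time) and product variants as feature selections that pin artifact revisions (variability in space). It is an invented analogy, not the unified conceptual model of the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """An artifact (e.g., code or a diagram) with its revision history."""
    name: str
    revisions: list[str] = field(default_factory=list)  # variability in time

    def commit(self, content: str) -> int:
        self.revisions.append(content)
        return len(self.revisions) - 1   # revision index

@dataclass
class ProductVariant:
    """A product variant: a feature selection plus pinned artifact revisions."""
    features: frozenset[str]                            # variability in space
    pinned: dict[str, int] = field(default_factory=dict)  # artifact -> revision

# A change in one shared artifact must be propagated consistently to every
# variant that pins it -- the kind of coupling between space and time that
# a unified treatment of both dimensions has to make explicit.
engine = Artifact("engine-controller.model")
r0 = engine.commit("v0: base behaviour")
basic = ProductVariant(frozenset({"Base"}), {engine.name: r0})
sport = ProductVariant(frozenset({"Base", "Sport"}), {engine.name: r0})

r1 = engine.commit("v1: fixed overflow in rpm computation")
for variant in (basic, sport):
    variant.pinned[engine.name] = r1   # propagate the fix to both variants

print(basic)
print(sport)
```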

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs and junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object oriented paradigm. Deciding what data and behaviour to form into a class and where to draw the line between its public and private details can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can also be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes are lacking in functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data classes and god classes. The technique has been evaluated in a controlled study on two large open source systems which compares the tool results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
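    A minimal flavor of such metrics-based heuristics can be sketched as follows; the metrics and thresholds are invented for illustration and are not the rules used by the plug-in or by Marinescu's approach: a class with many methods, high coupling to foreign data, and low cohesion is flagged as a god class suspect, while a class consisting almost entirely of accessors is flagged as a data class suspect.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    """Simple per-class metrics, as a static analyser might collect them."""
    name: str
    num_methods: int
    num_accessors: int            # getters/setters
    num_foreign_fields_used: int  # fields accessed on other classes
    cohesion: float               # 0.0 (none) .. 1.0 (high)

# Illustrative thresholds only -- real detectors calibrate these empirically.
def is_god_class(m: ClassMetrics) -> bool:
    return (m.num_methods > 20
            and m.num_foreign_fields_used > 5
            and m.cohesion < 0.33)

def is_data_class(m: ClassMetrics) -> bool:
    return (m.num_methods > 0
            and m.num_accessors / m.num_methods > 0.8
            and m.num_foreign_fields_used == 0)

classes = [
    ClassMetrics("OrderManager", 42, 4, 17, 0.21),
    ClassMetrics("OrderRecord", 10, 9, 0, 0.9),
]
for m in classes:
    smells = [s for s, hit in (("god class", is_god_class(m)),
                               ("data class", is_data_class(m))) if hit]
    print(m.name, "->", smells or ["no smell detected"])
```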

    Consistent View-Based Management of Variability in Space and Time

    Developing variable systems faces many challenges. Dependencies between interrelated artifacts, such as code or diagrams, within a product variant, across product variants, and across their revisions quickly lead to inconsistencies during evolution. This work provides a unification of common concepts and operations for variability management, identifies variability-related inconsistencies, and presents an approach for view-based consistency preservation of variable systems.

    Modellbasiertes Regressionstesten von Varianten und Variantenversionen (Model-Based Regression Testing of Variants and Variant Versions)

    The quality assurance of software product lines (SPL) achieved via testing is a crucial and challenging activity of SPL engineering. In general, the application of single-software testing techniques for SPL testing is not practical as it leads to the individual testing of a potentially vast number of variants. Testing each variant in isolation further results in redundant testing processes by means of redundant test-case executions due to the shared commonality. Existing techniques for SPL testing cope with those challenges, e.g., by identifying samples of variants to be tested. However, each variant is still tested separately, without taking the explicit knowledge about the shared commonality and variability into account to reduce the overall testing effort. Furthermore, due to the increasing longevity of software systems, their development has to face software evolution. Hence, quality assurance also has to be ensured after SPL evolution by testing the respective versions of variants. In this thesis, we tackle the challenges of testing redundancy as well as evolution by proposing a framework for model-based regression testing of evolving SPLs. The framework facilitates efficient incremental testing of variants and versions of variants by exploiting the commonality and reuse potential of test artifacts and test results. Our contribution is divided into three parts. First, we propose a test-modeling formalism capturing the variability and version information of evolving SPLs in an integrated fashion. The formalism forms the basis for the automatic derivation of reusable test cases and for the application of change impact analysis to guide retest test selection. Second, we introduce two techniques for incremental change impact analysis to identify (1) changing execution dependencies to be retested between subsequently tested variants and versions of variants, and (2) the impact of an evolution step on the variant set in terms of modified, new, and unchanged versions of variants. Third, we define a coverage-driven retest test selection based on a new retest coverage criterion that incorporates the results of the change impact analysis. The retest test selection facilitates the reduction of redundantly executed test cases during incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated by means of three evolving SPLs, showing that it achieves a reduction of the overall effort for testing evolving SPLs.
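    The coverage-driven retest selection can be pictured, in a deliberately simplified and hypothetical form, as selecting exactly those reusable test cases that cover at least one execution dependency reported as changed by the change impact analysis; the test names and the dependency representation below are invented for illustration.

```python
# Hypothetical setup: each test case covers a set of execution
# dependencies (e.g., transitions between components); the change impact
# analysis reports which dependencies changed between the previously
# tested variant/version and the one under test.
TEST_COVERAGE = {
    "test_lock_doors":      {("Key", "Lock"), ("Lock", "Led")},
    "test_unlock_doors":    {("Key", "Unlock"), ("Unlock", "Led")},
    "test_alarm_on_breach": {("Sensor", "Alarm"), ("Alarm", "Siren")},
}

def select_retests(coverage, changed_dependencies):
    """Retest only test cases whose covered dependencies were impacted."""
    return sorted(test for test, deps in coverage.items()
                  if deps & changed_dependencies)

# Result of the (here: assumed) change impact analysis for one evolution step.
changed = {("Lock", "Led"), ("Sensor", "Alarm")}

print(select_retests(TEST_COVERAGE, changed))
# ['test_alarm_on_breach', 'test_lock_doors'] -- test_unlock_doors is skipped,
# avoiding a redundant re-execution.
```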

    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers representing current work in the community, organized across four process axes of traceability practice. The sessions covered Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and Benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.