
    Software product line engineering: a practical experience

    The lack of mature tool support is one of the main reasons why industry is reluctant to adopt Software Product Line (SPL) approaches. A number of systematic literature reviews identify the main characteristics offered by existing tools and the SPL phases in which they can be applied. However, these reviews do not really help to understand whether those tools offer what is actually needed to apply SPLs to complex projects, since they are mainly based on information extracted from tool documentation or published papers. In this paper, we follow a different approach: we first identify those characteristics that are currently essential for the development of an SPL, and then analyze whether or not the tools support those characteristics. We focus on tools that satisfy certain selection criteria (e.g., they can be downloaded and are ready to be used). The paper presents a state of practice on the availability and usability of existing SPL tools, and defines different roadmaps that allow a complete SPL process to be carried out with the existing tool support.

    Funding: Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech; Magic P12-TIC1814; HADAS TIN2015-64841-R (co-financed by FEDER funds); MEDEA RTI2018-099213-B-I00 (co-financed by FEDER funds); TASOVA MCIU-AEI TIN2017-90644-RED.

    Uniform Sampling of SAT Solutions for Configurable Systems: Are We There Yet?

    Uniform or near-uniform generation of solutions for large satisfiability formulas is a problem of theoretical and practical interest for the testing community. Recent works proposed two algorithms (namely UniGen and QuickSampler) for reaching a good compromise between execution time and uniformity guarantees, with empirical evidence on SAT benchmarks. In the context of highly configurable software systems (e.g., Linux), it is unclear whether UniGen and QuickSampler can scale and sample uniform software configurations. In this paper, we perform a thorough experiment on 128 real-world feature models. We find that UniGen is unable to produce SAT solutions out of such feature models. Furthermore, we show that QuickSampler does not generate uniform samples and that some features are either never part of the sample or too frequently present. Finally, using a case study, we characterize the impact of these results on the ability to find bugs in a configurable system. Overall, our results suggest that we are not there yet: more research is needed to explore the cost-effectiveness of uniform sampling when testing large configurable systems.
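
    As a minimal illustration of what is at stake, the sketch below enumerates all SAT solutions of a tiny, invented feature model and samples one uniformly at random. It assumes the PySAT library (not necessarily the tooling used in the paper); the brute-force enumeration it performs is exactly the gold standard that UniGen and QuickSampler try to approximate at scales where enumeration is impossible.

    import random
    from pysat.solvers import Glucose3

    # Invented toy feature model in CNF: 1=Base, 2=FeatureA, 3=FeatureB.
    # Base is mandatory, FeatureA requires Base, FeatureA excludes FeatureB.
    cnf = [[1], [-2, 1], [-2, -3]]

    with Glucose3(bootstrap_with=cnf) as solver:
        solutions = list(solver.enum_models())

    # Drawing from a full enumeration is uniform by construction; scalable
    # samplers must provide this guarantee without enumerating anything.
    print(len(solutions), "solutions; sampled:", random.choice(solutions))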

    Towards quality assurance of software product lines with adversarial configurations

    Software product line (SPL) engineers put a lot of effort into ensuring that, through the setting of a large number of possible configuration options, products are acceptable and well-tailored to customers' needs. Unfortunately, options and their mutual interactions create a huge configuration space which is intractable to explore exhaustively. Instead of testing all products, machine learning (ML) is increasingly employed to approximate the set of acceptable products out of a small training sample of configurations. ML techniques can refine a software product line through learned constraints and a priori prevent non-acceptable products from being derived. In this paper, we use adversarial ML techniques to generate adversarial configurations that fool ML classifiers, pinpointing incorrect classifications of products (videos) derived from an industrial video generator. Our attacks yield (up to) a 100% misclassification rate and a drop in accuracy of 5%. We discuss the implications these results have for SPL quality assurance.
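
    The following sketch conveys the idea on synthetic data; it is not the paper's actual attack (which builds on established adversarial ML algorithms and targets an industrial video generator). A random forest learns an invented acceptability oracle over Boolean options, and a greedy search flips options until the classifier's verdict changes, producing an adversarial configuration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 20))        # 500 configurations, 20 options
    y = (X[:, 0] & (1 - X[:, 3])) | X[:, 7]       # invented acceptability oracle

    clf = RandomForestClassifier(random_state=0).fit(X, y)

    def adversarial_flip(config, model, max_flips=5):
        """Greedily flip options until the predicted class changes."""
        x = config.copy()
        target = 1 - model.predict([x])[0]        # class we want to reach
        for _ in range(max_flips):
            # flip the single option that moves the probability the most
            scores = []
            for i in range(len(x)):
                flipped = x.copy()
                flipped[i] ^= 1
                scores.append(model.predict_proba([flipped])[0][target])
            x[int(np.argmax(scores))] ^= 1
            if model.predict([x])[0] == target:
                return x                          # classifier fooled
        return None

    print(adversarial_flip(X[0], clf))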

    Combining Multiple Granularity Variability in a Software Product Line Approach for Web Engineering

    Context: Web engineering involves managing a high diversity of artifacts implemented in different languages and with different levels of granularity. Technological companies usually implement variable artifacts of Software Product Lines (SPLs) using annotations, and are reluctant to adopt hybrid, often complex, approaches that combine composition and annotations despite their benefits. Objective: This paper proposes a combined approach to support fine- and coarse-grained variability for web artifacts. The proposal allows web developers to continue using annotations to handle fine-grained variability for those artifacts whose variability is very difficult to implement with a composition-based approach, while obtaining the advantages of the composition-based approach for coarse-grained variable artifacts. Methods: A combined approach based on feature modeling that integrates annotations into a generic composition-based approach. We propose the definition of compositional and annotative variation points with custom-defined semantics, which are resolved by a scaffolding-based derivation engine. The approach is evaluated on a real-world web-based SPL by applying a set of variability metrics, and by discussing its quality criteria in comparison with annotative, compositional, and combined existing approaches. Results: Our approach effectively handles both fine- and coarse-grained variability. The mapping between the feature model and the web artifacts promotes the traceability of features and the uniformity of variation points regardless of the granularity of the web artifacts. Conclusions: Using well-known SPL techniques from an architectural point of view, such as feature modeling, can improve the design and maintenance of variable web artifacts without the need to introduce complex approaches for implementing the underlying variability.

    Funding: The work of the authors from the Universidad de Málaga is supported by the projects Magic P12-TIC1814 (post-doctoral research grant), MEDEA RTI2018-099213-B-I00 (co-financed by FEDER funds), Rhea P18-FR-1081 (MCI/AEI/FEDER, UE), LEIA UMA18-FEDERIA-157, TASOVA MCIU-AEI TIN2017-90644-REDT, and the European Union's H2020 research and innovation programme under grant agreement DAEMON 101017109. The work of the authors from the Universidade da Coruña has been funded by MCIN/AEI/10.13039/501100011033 and NextGenerationEU/PRTR (FLATCITY-POC PDC2021-121239-C31), MCIN/AEI/10.13039/501100011033 (EXTRACompact PID2020-114635RB-I00), GAIN/Xunta de Galicia/ERDF (CEDCOVID COV20/00604), Xunta de Galicia/FEDER-UE (GRC ED431C 2021/53), MICIU/FEDER-UE (BIZDEVOPSGLOBAL RTI-2018-098309-B-C32), and MCIN/AEI/10.13039/501100011033 (MAGIST PID2019-105221RB-C41).
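
    The division of labour between annotations and composition can be sketched in a few lines of Python. This is a toy engine with an invented annotation syntax, far simpler than the paper's scaffolding-based derivation engine: annotations resolve fine-grained variation points inside an artifact, while composition selects whole artifacts according to the feature selection.

    import re

    def resolve_annotations(template: str, features: set) -> str:
        """Fine-grained: keep or drop <!-- IF f --> ... <!-- ENDIF f --> blocks."""
        pattern = re.compile(r"<!-- IF (\w+) -->(.*?)<!-- ENDIF \1 -->", re.S)
        return pattern.sub(
            lambda m: m.group(2) if m.group(1) in features else "", template)

    def compose_artifacts(artifact_map: dict, features: set) -> list:
        """Coarse-grained: select whole artifacts owned by selected features."""
        return [path for feat, path in artifact_map.items() if feat in features]

    page = "<nav><!-- IF Search --><input id='q'/><!-- ENDIF Search --></nav>"
    artifacts = {"Search": "search.html", "Cart": "cart.html"}

    selected = {"Search"}
    print(resolve_annotations(page, selected))    # <nav><input id='q'/></nav>
    print(compose_artifacts(artifacts, selected)) # ['search.html']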

    Empirical Assessment of Generating Adversarial Configurations for Software Product Lines

    Software product line (SPL) engineering allows the derivation of products tailored to stakeholders' needs through the setting of a large number of configuration options. Unfortunately, options and their interactions create a huge configuration space which is either intractable or too costly to explore exhaustively. Instead of covering all products, machine learning (ML) approximates the set of acceptable products (e.g., successful builds, passing tests) out of a training set (a sample of configurations). However, ML techniques can make prediction errors that yield non-acceptable products, wasting time, energy, and other resources. We apply adversarial machine learning techniques to the world of SPLs and craft new configurations that appear to be acceptable but are not, and vice versa. This allows us to diagnose prediction errors and take appropriate actions. We develop two adversarial configuration generators, built on top of state-of-the-art attack algorithms, capable of synthesizing configurations that are both adversarial and conformant to logical constraints. We empirically assess our generators in two case studies: an industrial video synthesizer (MOTIV) and an industry-strength, open-source Web-app configurator (JHipster). In both cases, our attacks yield (up to) a 100% misclassification rate without sacrificing the logical validity of the adversarial configurations. This work lays the foundations of a quality assurance framework for ML-based SPLs.
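
    What separates these generators from a plain adversarial attack is the conformance requirement: a crafted configuration is only useful if it still satisfies the feature model. The sketch below shows such a validity check with an invented toy CNF, assuming the PySAT library (the paper's own implementation may differ); a configuration produced by an attack would be kept, repaired, or discarded based on this verdict.

    from pysat.solvers import Glucose3

    # Invented toy feature model over options 1..4: the root is mandatory,
    # option 2 requires option 1, and options 3 and 4 exclude each other.
    cnf = [[1], [-2, 1], [-3, -4]]

    def is_valid(config: dict) -> bool:
        """config maps option id -> bool; check it against the feature model."""
        assumptions = [opt if val else -opt for opt, val in config.items()]
        with Glucose3(bootstrap_with=cnf) as solver:
            return solver.solve(assumptions=assumptions)

    candidate = {1: True, 2: True, 3: True, 4: False}  # e.g., an attack's output
    print(is_valid(candidate))                         # True: logically valid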

    Modelling, Reverse Engineering, and Learning Software Variability

    Society expects software to deliver the right functionality, in a short amount of time and with fewer resources, in every possible circumstance, whatever the hardware, the operating system, the compiler, or the data fed as input. To fit such a diversity of needs, software commonly comes in many variants and is highly configurable through configuration options, runtime parameters, conditional compilation directives, menu preferences, configuration files, plugins, etc. As there is no one-size-fits-all solution, software variability ("the ability of a software system or artifact to be efficiently extended, changed, customized or configured for use in a particular context") has been studied for the last two decades and is a discipline of its own. Though highly desirable, software variability also introduces an enormous complexity due to the combinatorial explosion of possible variants. For example, the Linux kernel has 15,000+ options, and most of them can take three values: "yes", "no", or "module". Variability is challenging for maintaining, verifying, and configuring software systems (Web applications, Web browsers, video tools, etc.). It is also a source of opportunities to better understand a domain, create reusable artefacts, deploy performance-wise optimal systems, or find specialized solutions to many kinds of problems. In many scenarios, a model of variability is either beneficial or mandatory for exploring, observing, and reasoning about the space of possible variants. For instance, without a variability model, it is impossible to establish a sampling strategy that would satisfy the constraints among options and meet coverage or testing criteria.

    I address a central question in this HDR manuscript: how to model software variability? I detail several contributions related to modelling, reverse engineering, and learning software variability. I first contribute to supporting the persons in charge of manually specifying feature models, the de facto standard for modelling variability. I develop an algebra, together with a language, for supporting the composition, decomposition, diff, refactoring, and reasoning of feature models. I further establish the syntactic and semantic relationships between feature models and product comparison matrices, a large class of tabular data. I then empirically investigate how these feature models can be used to test configurable systems in the large, with different sampling strategies. Along this effort, I report on the attempts and lessons learned when defining the "right" variability language.

    From a reverse engineering perspective, I contribute to synthesizing variability information into models from various kinds of artefacts. I develop foundations and methods for reverse engineering feature models from satisfiability formulae, product comparison matrices, dependency files and architectural information, and Web configurators. I also report on the degree of automation and show that the involvement of developers and domain experts is beneficial for obtaining high-quality models.

    Thirdly, I contribute to learning constraints and non-functional properties (performance) of a variability-intensive system. I describe a systematic "sampling, measuring, learning" process that aims to enforce or augment a variability model, capturing variability knowledge that domain experts can hardly express. I show that supervised, statistical machine learning can be used to synthesize rules or build prediction models in an accurate and interpretable way. This process can even be applied to huge configuration spaces, such as that of the Linux kernel.

    Despite their wide applicability and observed benefits, I show that each individual line of contributions has limitations. I defend the following answer: a supervised, iterative process (1) based on the combination of reverse engineering, modelling, and learning techniques, and (2) capable of integrating multiple sources of variability information (e.g., expert knowledge, legacy artefacts, dynamic observations). Finally, this work opens different perspectives related to so-called deep software variability, security, smart build of configurations, and (threats to) science.
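
    The "sampling, measuring, learning" process lends itself to a compact sketch. Everything below is synthetic (an invented performance function stands in for building and benchmarking each variant), but it shows the shape of the pipeline: sample configurations, measure them, and learn an interpretable model whose extracted rules domain experts can inspect.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(1)
    sample = rng.integers(0, 2, size=(300, 10))   # sampling: 300 configurations

    # Measuring: stand-in for building and benchmarking each variant.
    perf = (10 + 5 * sample[:, 2]
            - 3 * sample[:, 2] * sample[:, 5]
            + rng.normal(0, 0.5, 300))

    # Learning: a shallow tree stays interpretable, so the rules it recovers
    # (e.g., option 2 is costly unless option 5 is enabled) remain readable.
    tree = DecisionTreeRegressor(max_depth=3).fit(sample, perf)
    print(export_text(tree, feature_names=[f"opt{i}" for i in range(10)]))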

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper, and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security, and software evolution.

    Performance Evaluation of Transition-based Systems with Applications to Communication Networks

    Since the beginning of the twenty-first century, communication systems have witnessed a revolution in terms of their hardware capabilities. This transformation has enabled modern networks to stand up to the diversity and scale of the requirements of the applications that they support. Compared to their predecessors, which primarily consisted of a handful of homogeneous devices communicating via a single communication technology, today's networks connect myriads of systems that are intrinsically different in their functioning and purpose. In addition, many of these devices communicate via different technologies, or a combination of them at a time. All these developments, coupled with the geographical disparity of the physical infrastructure, give rise to network environments that are inherently dynamic and unpredictable. To cope with heterogeneous environments and growing demands, network units have taken a leap from the paradigm of static functioning to that of adaptivity. In this thesis, we refer to adaptive network units as transition-based systems (TBSs), and the act of adapting is termed a transition. TBSs not only reside in diverse environmental conditions; their need to adapt also arises following different phenomena. Such phenomena are referred to as triggers, and they can occur at different time scales. We additionally observe that the nature of a transition is dictated by the specified performance objective of the relevant TBS, and we seek to build an analytical framework that helps us derive a policy for performance optimization.

    As the state of the art lacks a unified approach to modelling the diverse functioning of TBSs and their varied performance objectives, we first propose a general framework based on the theory of Markov Decision Processes. This framework facilitates optimal policy derivation in TBSs in a principled manner. In addition, we note the importance of bespoke analyses in specific classes of TBSs where the general formulation leads to a high-dimensional optimization problem. Specifically, we consider performance optimization in open systems employing parallelism and in closed systems exploiting the benefits of service batching. In these examples, we resort to approximation techniques, such as a mean-field limit for the state evolution, whenever the underlying TBS deals with a large number of entities. Our formulation enables the calculation of optimal policies and provides tangible alternatives to existing frameworks for Quality of Service evaluation. Compared to the state of the art, the derived policies facilitate transitions in communication systems that yield superior performance, as shown through extensive evaluations in this thesis.
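
    To make the MDP framing concrete, here is a deliberately tiny value-iteration sketch; all states, rewards, and probabilities are invented, and the thesis's models are far richer. A network unit chooses between two mechanisms under stochastic load, and value iteration yields the optimal transition policy.

    import numpy as np

    states = ["low_load", "high_load"]
    actions = ["mechanism_A", "mechanism_B"]

    # P[a][s][s']: load dynamics (here independent of the chosen mechanism).
    P = np.array([[[0.8, 0.2], [0.3, 0.7]],
                  [[0.8, 0.2], [0.3, 0.7]]])
    # R[a][s]: mechanism A pays off under low load, B under high load.
    R = np.array([[ 1.0, -2.0],
                  [-1.0,  2.0]])

    gamma, V = 0.9, np.zeros(2)
    for _ in range(500):                 # value iteration to a fixed point
        Q = R + gamma * (P @ V)          # Q[a][s]
        V = Q.max(axis=0)

    policy = Q.argmax(axis=0)            # best action per state
    for s, a in zip(states, policy):
        print(f"under {s}: use {actions[a]}")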