172,489 research outputs found

    A Product Line Systems Engineering Process for Variability Identification and Reduction

    Software Product Line Engineering has attracted attention in the last two decades due to its promising capabilities to reduce costs and time to market through reuse of requirements and components. In practice, developing system-level product lines in a large-scale company is not an easy task, as there may be thousands of variants and multiple disciplines involved. Manually reusing legacy system models at domain engineering to build reusable system libraries, and manually configuring variants to derive target products, can be infeasible. To tackle this challenge, a Product Line Systems Engineering process is proposed. Specifically, the process extends research on the System Orthogonal Variability Model to support hierarchical variability modeling with formal definitions; utilizes Systems Engineering concepts and legacy system models to build the hierarchy for the variability model and to identify essential relations between variants; and finally, analyzes the identified relations to reduce the number of variation points. The process, which is automated by computational algorithms, is demonstrated through an illustrative example on generalized Rolls-Royce aircraft engine control systems. To evaluate the effectiveness of the process in reducing variation points, it is further applied to case studies in different engineering domains at different levels of complexity. Subject to system model availability, reductions of 14% to 40% in the number of variation points are demonstrated in the case studies.
    Comment: 12 pages, 6 figures, 2 tables; submitted to the IEEE Systems Journal on 3rd June 201
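    The variation-point reduction step can be pictured with a small sketch. The following Python fragment is purely illustrative and is not the paper's algorithm: the variation points, variants, and "requires" relations are invented, and it only shows the intuition that a variation point whose variants are fully determined by another one is no longer an independent decision and can be folded away.

    from dataclasses import dataclass, field

    @dataclass
    class VariationPoint:
        name: str
        variants: set = field(default_factory=set)

    # Hypothetical variation points in a generic engine-control model.
    vp_sensor = VariationPoint("PressureSensor", {"single", "dual"})
    vp_monitor = VariationPoint("SensorMonitor", {"basic", "voting"})

    # Hypothetical "requires" relations between variants, as might be mined from
    # legacy system models: picking a sensor variant fixes the monitor variant.
    requires = {
        ("single", "basic"),
        ("dual", "voting"),
    }

    def determined_by(vp_a, vp_b, rel):
        """True if every variant of vp_b is pinned down by some variant of vp_a."""
        bound = {b for a in vp_a.variants for (x, b) in rel
                 if x == a and b in vp_b.variants}
        return bound == vp_b.variants

    # If SensorMonitor is fully determined by PressureSensor, the two variation
    # points collapse into a single independent decision.
    if determined_by(vp_sensor, vp_monitor, requires):
        print(f"Fold {vp_monitor.name} into {vp_sensor.name}")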

    Recovering Architectural Variability of a Family of Product Variants

    A Software Product Line (SPL) aims at applying pre-planned, systematic reuse of large-grained software artifacts to increase software productivity and reduce development cost. The idea of SPL is to analyze the business domain of a family of products to identify the common and the variable parts between the products. However, it is common for companies to develop, in an ad-hoc manner (e.g. clone and own), a set of products that share common functionalities and differ in terms of others. Thus, many recent research contributions have been proposed to re-engineer existing product variants into an SPL. Nevertheless, these contributions mostly focus on managing variability at the requirement level. Very few contributions address variability at the architectural level, despite its major importance. Starting from this observation, we propose, in this paper, an approach to reverse engineer the architecture of a set of product variants. Our goal is to identify the variability and dependencies among architectural-element variants at the architectural level. Our work relies on Formal Concept Analysis (FCA) to analyze the variability. To validate the proposed approach, we experimented on two families of open-source product variants: Mobile Media and Health Watcher. The results show that our approach is able to identify the architectural variability and the dependencies among architectural-element variants.
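    As a rough illustration of how Formal Concept Analysis can expose commonality and variability, here is a naive Python sketch. The product-to-element context is hypothetical (loosely inspired by the Mobile Media example mentioned above), and the brute-force concept enumeration is only viable for toy inputs.

    from itertools import combinations

    # Hypothetical context: which architectural elements each product variant uses.
    context = {
        "MobileMedia-v1": {"AlbumMgr", "PhotoView"},
        "MobileMedia-v2": {"AlbumMgr", "PhotoView", "SMSTransfer"},
        "MobileMedia-v3": {"AlbumMgr", "PhotoView", "CopyPhoto"},
    }

    def intent(products):
        """Elements shared by all given products."""
        sets = [context[p] for p in products]
        return set.intersection(*sets) if sets else set()

    def extent(elements):
        """Products that contain all given elements."""
        return {p for p, e in context.items() if elements <= e}

    # Naive enumeration of formal concepts (extent, intent) pairs.
    concepts = set()
    for r in range(1, len(context) + 1):
        for prods in combinations(context, r):
            i = frozenset(intent(prods))
            e = frozenset(extent(i))
            concepts.add((e, i))

    # Largest extent first: the top concept holds the common (core) elements.
    for e, i in sorted(concepts, key=lambda c: -len(c[0])):
        print(sorted(e), "->", sorted(i))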

    Traceability for Model Driven, Software Product Line Engineering

    Traceability is an important challenge for software organizations. This is true for traditional software development, and even more so in newer approaches that introduce a greater variety of artefacts, such as Model Driven development or Software Product Lines. In this paper we look at some aspects of the interaction of Traceability, Model Driven development, and Software Product Line Engineering.

    Learning Models over Relational Data using Sparse Tensors and Functional Dependencies

    Integrated solutions for analytics over relational databases are of great practical importance, as they avoid the costly repeated loop data scientists have to deal with on a daily basis: select features from data residing in relational databases using feature extraction queries involving joins, projections, and aggregations; export the training dataset defined by such queries; convert this dataset into the format of an external learning tool; and train the desired model using this tool. These integrated solutions are also a fertile ground for theoretically fundamental and challenging problems at the intersection of relational and statistical data models. This article introduces a unified framework for training and evaluating a class of statistical learning models over relational databases. This class includes ridge linear regression, polynomial regression, factorization machines, and principal component analysis. We show that, by synergizing key tools from database theory, such as schema information, query structure, functional dependencies, and recent advances in query evaluation algorithms, with tools from linear algebra, such as tensor and matrix operations, one can formulate relational analytics problems and design efficient (query- and data-) structure-aware algorithms to solve them. This theoretical development informed the design and implementation of the AC/DC system for structure-aware learning. We benchmark the performance of AC/DC against R, MADlib, libFM, and TensorFlow. For typical retail forecasting and advertisement planning applications, AC/DC can learn polynomial regression models and factorization machines with at least the same accuracy as its competitors, and up to three orders of magnitude faster, whenever the competitors do not run out of memory, exceed a 24-hour timeout, or encounter internal design limitations.
    Comment: 61 pages, 9 figures, 2 tables
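    One ingredient behind such structure-aware learning is that the aggregates a model needs (e.g. entries of the Gram matrix X^T X) can be pushed past the join instead of being computed over the materialized join result. The sketch below is a hedged illustration of that factorized-aggregate idea, not of AC/DC itself; the relations R and S and their feature columns are invented for the example.

    from collections import defaultdict

    R = [("a", 1.0), ("a", 2.0), ("b", 3.0)]      # (join key, feature x)
    S = [("a", 10.0), ("b", 20.0), ("b", 30.0)]   # (join key, feature z)

    # Naive: materialize the join, then sum x*z over every joined tuple.
    naive = sum(x * z for (k1, x) in R for (k2, z) in S if k1 == k2)

    # Factorized: group each relation by key, then combine the partial sums.
    sum_x, sum_z = defaultdict(float), defaultdict(float)
    for k, x in R:
        sum_x[k] += x
    for k, z in S:
        sum_z[k] += z
    factorized = sum(sum_x[k] * sum_z[k] for k in sum_x if k in sum_z)

    print(naive, factorized)   # both 180.0: (1+2)*10 + 3*(20+30)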

    TESNA: A Tool for Detecting Coordination Problems

    Detecting coordination problems can prove to be very difficult. This is especially true in large, globally distributed environments, where software development can quickly go out of the project manager’s control. In this paper we outline a methodology to analyse socio-technical coordination structures. We also show how this analysis can be made easier with the help of TESNA, a tool that we have developed.
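    A minimal, hedged sketch of one common way to look for such coordination problems is a socio-technical congruence style check: compare who should coordinate (because they change the same artifacts) with who actually communicates. This is not necessarily TESNA's analysis; the developers, files, and communication links below are hypothetical.

    # Hypothetical inputs: which files each developer changes, and who talks to whom.
    changes = {
        "alice": {"engine.c", "ui.c"},
        "bob":   {"engine.c"},
        "carol": {"ui.c", "db.c"},
    }
    communication = {frozenset({"alice", "bob"})}

    devs = sorted(changes)
    for i, d1 in enumerate(devs):
        for d2 in devs[i + 1:]:
            needs_coordination = bool(changes[d1] & changes[d2])
            talks = frozenset({d1, d2}) in communication
            if needs_coordination and not talks:
                # A technical dependency exists but no communication was observed.
                print(f"Potential coordination gap: {d1} and {d2}")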

    Crosscutting, what is and what is not? A Formal definition based on a Crosscutting Pattern

    Crosscutting is usually described in terms of scattering and tangling. However, the distinction between these concepts is vague, which can lead to ambiguous statements. Sometimes precise definitions are required, e.g. for the formal identification of crosscutting concerns. We propose a conceptual framework for formalizing these concepts based on a crosscutting pattern that shows the mapping between elements at two levels, e.g. concerns and representations of concerns. The definitions of the concepts are formalized in terms of linear algebra and visualized with matrices and matrix operations. In this way, crosscutting can be clearly distinguished from scattering and tangling. Using linear algebra, we demonstrate that our definition generalizes other definitions of crosscutting, such as those by Masuhara & Kiczales [21] and Tonella and Ceccato [28]. The framework can be applied across several refinement levels, assuring traceability of crosscutting concerns. The usability of the framework is illustrated by applying it to several areas, such as change impact analysis, identification of crosscutting at early phases of software development, and model driven software development.
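    The matrix-based framing can be made concrete with a small numeric example. The sketch below is one hedged reading of scattering, tangling, and a crosscutting product matrix, not the paper's exact definitions; the concerns, design elements, and dependency matrix are invented.

    import numpy as np

    concerns = ["logging", "persistence", "ui"]
    elements = ["Cart", "Order", "Logger"]

    # Hypothetical dependency matrix M (concerns x elements): a 1 means the
    # concern is addressed by that design element.
    M = np.array([
        [1, 1, 1],   # logging touches every element -> scattered
        [0, 1, 0],   # persistence is localized in Order
        [1, 0, 0],   # ui is localized in Cart
    ])

    scattered = M.sum(axis=1) > 1      # concern mapped to several elements
    tangled   = M.sum(axis=0) > 1      # element realizing several concerns

    # Crosscutting product: which pairs of concerns meet in at least one element.
    ccm = M @ M.T
    np.fill_diagonal(ccm, 0)

    print("scattered concerns:", [c for c, s in zip(concerns, scattered) if s])
    print("tangled elements:  ", [e for e, t in zip(elements, tangled) if t])
    print("concern pairs sharing an element:\n", ccm)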

    Assessing architectural evolution: A case study

    This paper proposes to use a historical perspective on generic laws, principles, and guidelines, like Lehman’s software evolution laws and Martin’s design principles, in order to achieve a multi-faceted process and structural assessment of a system’s architectural evolution. We present a simple structural model with associated historical metrics and visualizations that could form part of an architect’s dashboard. We perform such an assessment for the Eclipse SDK, as a case study of a large, complex, and long-lived system for which sustained, effective architectural evolution is paramount. The twofold aim of checking generic principles on a well-known system is, on the one hand, to see whether certain lessons can be learned for best practice of architectural evolution, and on the other hand, to gain more insight into the applicability of such principles. We find that while the Eclipse SDK does follow several of the laws and principles, there are some deviations, and we discuss areas of architectural improvement and limitations of the assessment approach.
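    As a hedged sketch of the kind of historical metric such an architect’s dashboard might track, the fragment below computes per-release size and coupling figures and a simple check against Lehman’s continuing-growth law. The release data is invented, not Eclipse SDK measurements.

    releases = {                 # release -> (module count, inter-module deps)
        "3.0": (60, 210),
        "3.1": (72, 265),
        "3.2": (81, 330),
        "3.3": (95, 410),
    }

    versions = list(releases)
    # Continuing growth: module count should not shrink from release to release.
    growth_ok = all(releases[a][0] <= releases[b][0]
                    for a, b in zip(versions, versions[1:]))

    for v, (modules, deps) in releases.items():
        print(f"{v}: {modules} modules, avg deps/module = {deps / modules:.2f}")
    print("continuing growth across releases:", growth_ok)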