
    PuLSE-I: Deriving instances from a product line infrastructure

    Reusing assets during application engineering promises to improve the efficiency of systems development. To benefit from reusable assets, however, application engineering processes must specify when and how to use them during single system development, and this in turn depends on what types of reusable assets have been created. Product line engineering approaches produce a reusable infrastructure for a set of products. In this paper, we present the application engineering process associated with the PuLSE product line software engineering method: PuLSE-I. PuLSE-I details how single systems can be built efficiently from the reusable product line infrastructure built during the other PuLSE activities.
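
    To give a concrete flavour of the derivation step described above, the following Python sketch shows one way a product instance might be assembled from a reusable asset base driven by a feature selection. The asset names, features, and derivation logic are illustrative assumptions, not taken from the PuLSE-I paper.

        # Hypothetical sketch of deriving a single product from a reusable
        # product line infrastructure by selecting features. Names and data
        # are illustrative only; they do not come from the PuLSE-I paper.

        from dataclasses import dataclass, field

        @dataclass
        class Asset:
            name: str
            provides: frozenset  # features this reusable asset realizes

        @dataclass
        class ProductLineInfrastructure:
            assets: list = field(default_factory=list)

            def derive(self, selected_features: set) -> list:
                """Return the reusable assets needed for a product with the given features."""
                missing = set(selected_features)
                chosen = []
                for asset in self.assets:
                    if asset.provides & selected_features:
                        chosen.append(asset)
                        missing -= asset.provides
                if missing:
                    # Features with no reusable asset must be built product-specifically.
                    print(f"product-specific development needed for: {sorted(missing)}")
                return chosen

        infra = ProductLineInfrastructure(assets=[
            Asset("core-runtime", frozenset({"base"})),
            Asset("reporting-module", frozenset({"reports"})),
            Asset("export-module", frozenset({"csv_export", "pdf_export"})),
        ])

        product = infra.derive({"base", "reports", "encryption"})
        print([a.name for a in product])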

    A Systematic Review of Tracing Solutions in Software Product Lines

    Software Product Lines are large-scale, multi-unit systems that enable mass customization. They consist of a base of reusable artifacts and points of variation that give the system the flexibility to generate customized products. However, maintaining a system with such complexity and flexibility can be error-prone and time-consuming: any modification (addition, deletion, or update) at the level of a product or an artifact may impact other elements. It is therefore worthwhile to adopt an efficient and organized traceability solution to maintain the Software Product Line. Still, traceability is not systematically implemented. It is usually set up to satisfy specific constraints (e.g. certification requirements), but abandoned in other situations. In order to draw a picture of the actual state of traceability solutions in the Software Product Line context, we conducted a literature review. The review and its findings are detailed in the present article. Comment: 22 pages, 9 figures, 7 tables
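
    As a minimal illustration of the traceability problem the review addresses, the following Python sketch models trace links between product line artifacts and answers a simple impact query for a modified artifact. The artifact names and link structure are invented for illustration and do not come from any surveyed solution.

        # Minimal illustration of trace links and impact analysis in a product
        # line context. The artifact names and link structure are invented;
        # they are not taken from any surveyed tool.

        from collections import defaultdict, deque

        class TraceModel:
            def __init__(self):
                self.links = defaultdict(set)  # source artifact -> dependent artifacts

            def add_link(self, source, target):
                self.links[source].add(target)

            def impacted_by(self, changed):
                """Breadth-first search over trace links from a changed artifact."""
                seen, queue = set(), deque([changed])
                while queue:
                    current = queue.popleft()
                    for dependent in self.links[current]:
                        if dependent not in seen:
                            seen.add(dependent)
                            queue.append(dependent)
                return seen

        traces = TraceModel()
        traces.add_link("feature:payment", "component:billing")
        traces.add_link("component:billing", "test:billing_suite")
        traces.add_link("component:billing", "product:web_shop")

        print(traces.impacted_by("feature:payment"))
        # {'component:billing', 'test:billing_suite', 'product:web_shop'}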

    Evolving multi-tenant SaaS cloud applications using model-driven engineering

    Cloud computing promotes multi-tenancy for efficient resource utilization by sharing hardware and software infrastructure among multiple clients. Multi-tenant applications running on a cloud infrastructure are provided to clients as Software-as-a-Service (SaaS) over the network. Despite its benefits, multi-tenancy introduces additional challenges during application development, such as partitioning, extensibility, and customizability. Over time, after the application is deployed, new client requirements and changes in the business environment result in application evolution. As the application evolves, its complexity also increases. In multi-tenancy, evolution demanded by individual clients should not affect the availability, security, and performance of the application for other clients. Thus, multi-tenancy concerns add further complexity by causing variability in design decisions. Managing this complexity requires adequate approaches and tools. In this paper, we propose modeling techniques from software product lines (SPL) and model-driven engineering (MDE) to manage variability and support the evolution of multi-tenant applications and their requirements. Specifically, SPL was applied to define technological and conceptual variabilities during application design, while MDE was suggested to manage these variabilities. We also present a process for how MDE can address the evolution of multi-tenant applications using variability models.
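
    As a rough sketch of the kind of per-tenant variability that has to be managed, the following Python snippet encodes a tiny feature model with mandatory and optional features plus one cross-tree constraint, and validates a tenant's configuration against it. The feature names and constraints are hypothetical and are not the models used in the paper.

        # Illustrative sketch of a tiny variability model for a multi-tenant
        # application and a check of one tenant's configuration against it.
        # Feature names and constraints are hypothetical, not from the paper.

        MANDATORY = {"authentication", "data_isolation"}
        OPTIONAL = {"custom_branding", "analytics_dashboard", "sso"}
        REQUIRES = {"sso": {"authentication"}}  # cross-tree constraint

        def validate_tenant_config(selected: set) -> list:
            """Return a list of violations for a tenant's feature selection."""
            violations = []
            for feature in MANDATORY - selected:
                violations.append(f"mandatory feature missing: {feature}")
            for feature in selected - (MANDATORY | OPTIONAL):
                violations.append(f"unknown feature: {feature}")
            for feature, deps in REQUIRES.items():
                if feature in selected and not deps <= selected:
                    violations.append(f"{feature} requires {sorted(deps - selected)}")
            return violations

        tenant_a = {"authentication", "data_isolation", "sso"}
        tenant_b = {"data_isolation", "custom_branding", "sso"}
        print(validate_tenant_config(tenant_a))  # []
        print(validate_tenant_config(tenant_b))  # missing authentication; sso requires it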

    Traceability for Model Driven, Software Product Line Engineering

    Traceability is an important challenge for software organizations. This is true for traditional software development and even more so for newer approaches that introduce a greater variety of artefacts, such as Model Driven development or Software Product Lines. In this paper we look at some aspects of the interaction between Traceability, Model Driven development, and Software Product Lines.

    A Quality Model for Actionable Analytics in Rapid Software Development

    Background: Accessing relevant data on the product, process, and usage perspectives of software, as well as integrating and analyzing such data, is crucial for obtaining reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards, we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. They also considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By drawing on heterogeneous data sources, the Q-Rapids quality model enables the detection of problems that would take more time to find manually and adds transparency across the system, process, and usage perspectives. Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
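
    As a rough, hypothetical illustration of how raw metrics can be normalized and aggregated into the product and process factors of a hierarchical quality model, consider the Python sketch below. The metric names, weights, and thresholds are invented; they are not the actual Q-Rapids quality model.

        # Illustrative aggregation of raw metrics into quality factors, in the
        # spirit of a hierarchical quality model. Metric names, weights, and
        # thresholds are invented; they are not the actual Q-Rapids model.

        def normalize(value, worst, best):
            """Map a raw metric value to [0, 1], where 1 is the desirable end."""
            span = best - worst
            return max(0.0, min(1.0, (value - worst) / span))

        raw_metrics = {
            "failed_builds_ratio": 0.12,   # e.g. from the CI system
            "blocking_issues": 3,          # e.g. from the issue tracker
            "code_duplication": 0.08,      # e.g. from static analysis
        }

        # Each factor aggregates normalized metrics with illustrative weights.
        factors = {
            "blocking_code": 0.5 * normalize(raw_metrics["blocking_issues"], 10, 0)
                           + 0.5 * normalize(raw_metrics["code_duplication"], 0.30, 0.0),
            "build_stability": normalize(raw_metrics["failed_builds_ratio"], 0.5, 0.0),
        }

        for name, score in factors.items():
            flag = "attention" if score < 0.6 else "ok"
            print(f"{name}: {score:.2f} ({flag})")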

    Development and evaluation of Formula Editor (a tool-based approach to enhance reusability in software product line model checking) on SAFER case study

    Although model checking is extensively used for the verification of single software systems, there is currently insufficient support for model checking of product lines. The presence of commonalities among the different products in a product line requires that properties, and the corresponding specifications for these properties, be verified for every product in the product line. Specifying and managing properties for every product can incur high overhead and make the task of model checking very difficult. It is therefore essential to exploit these commonalities by providing reusability in model checking of product lines. Since different products in the product line need to be checked for the same or similar properties, reusing properties specified for one product across other products within the product line significantly reduces the overall property specification and verification time. FormulaEditor is a property specification and management tool for enhancing the reusability of model checking of software product lines. The core of the technique is a product line-oriented user interface that guides users in generating, selecting, managing, and reusing useful product line properties and patterns of properties for model checking. The previous version of the FormulaEditor tool supports Cadence SMV models, but not the typical CMU-SMV models. This work extends the FormulaEditor tool to allow verification of models written in CMU-SMV. The advantage of supporting another model checker is twofold: first, it enhances the tool's capability to check design specifications written in different models; and second, it allows users to specify the same design in different modeling languages to detect problems.
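
    The following Python sketch illustrates the general idea of property reuse: a parameterized temporal-logic pattern is instantiated with product-specific signal names to obtain a specification for each product. The pattern and signal names are invented for illustration (loosely themed on the SAFER case study); this is not the FormulaEditor tool itself.

        # Sketch of reusing a parameterized temporal-logic property across the
        # products of a product line. The pattern and signal names are invented
        # for illustration; this is not the FormulaEditor tool.

        # A reusable "response" pattern: whenever `trigger` holds, `response`
        # eventually follows (CTL, as accepted by SMV-style model checkers).
        RESPONSE_PATTERN = "AG ({trigger} -> AF {response})"

        # Hypothetical per-product signal names for the same conceptual requirement.
        products = {
            "SAFER_basic":     {"trigger": "thruster_cmd", "response": "thruster_fired"},
            "SAFER_autopilot": {"trigger": "aah_engaged",  "response": "attitude_held"},
        }

        def instantiate(pattern: str, bindings: dict) -> str:
            """Fill a property template with product-specific signal names."""
            return pattern.format(**bindings)

        for product, bindings in products.items():
            spec = instantiate(RESPONSE_PATTERN, bindings)
            print(f"{product}: SPEC {spec}")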

    Using a Machine Learning Approach to Implement and Evaluate Product Line Features

    Bike-sharing systems are a means of smart transportation in urban environments, with the benefit of a positive impact on urban mobility. In this paper we are interested in studying and modeling the behavior of features that permit the end user to access, with her/his web browser, the status of the bike-sharing system. In particular, we address features able to make predictions about the system state. We propose a machine learning approach that analyzes usage patterns and learns computational models of such features from logs of system usage. On the one hand, machine learning methodologies provide a powerful and general means to implement a wide choice of predictive features. On the other hand, trained machine learning models come with a measure of predictive performance that can be used as a metric to assess the cost-performance trade-off of the feature. This provides a principled way to assess the runtime behavior of different components before putting them into operation. Comment: In Proceedings WWV 2015, arXiv:1508.0338
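
    The sketch below, in Python with scikit-learn, illustrates the general approach on synthetic data: log-derived features are used to train a classifier that predicts whether a station will be empty, and the held-out score serves as the kind of predictive-performance measure mentioned above. The features, model choice, and data are assumptions for illustration, not the paper's actual setup.

        # Illustrative sketch of learning a predictive feature from usage logs:
        # predict whether a bike station will be empty in the next interval.
        # The data here is synthetic; the paper's actual features and models differ.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        n = 2000

        # Synthetic log-derived features: hour of day, weekday flag, current bikes.
        hour = rng.integers(0, 24, n)
        weekday = rng.integers(0, 2, n)
        bikes_now = rng.integers(0, 20, n)

        # Synthetic label: stations tend to empty at rush hour with few bikes left.
        empty_next = ((bikes_now < 4) & ((hour == 8) | (hour == 17))).astype(int)

        X = np.column_stack([hour, weekday, bikes_now])
        X_train, X_test, y_train, y_test = train_test_split(X, empty_next, random_state=0)

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)

        # The held-out score is the kind of predictive-performance measure that
        # can be weighed against the cost of deploying the feature.
        print("accuracy:", accuracy_score(y_test, model.predict(X_test)))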

    Artificial intelligence for throughput bottleneck analysis – State-of-the-art and future directions

    Identifying, and eventually eliminating, throughput bottlenecks is a key means of increasing throughput and productivity in production systems. In the real world, however, eliminating throughput bottlenecks is a challenge, owing to complex factory dynamics with several hundred machines operating at any given time. Academic researchers have tried to develop tools to help identify and eliminate throughput bottlenecks. Historically, research efforts have focused on developing analytical and discrete event simulation modelling approaches to identify throughput bottlenecks in production systems. However, with the rise of industrial digitalisation and artificial intelligence (AI), academic researchers have explored different ways in which AI might be used to eliminate throughput bottlenecks, based on the vast amounts of digital shop floor data. By conducting a systematic literature review, this paper presents state-of-the-art research efforts into the use of AI for throughput bottleneck analysis. To make the academic AI solutions more accessible to practitioners, the research efforts are classified into four categories: (1) identify, (2) diagnose, (3) predict and (4) prescribe. This classification was inspired by real-world throughput bottleneck management practice. The identify and diagnose categories focus on analysing historical throughput bottlenecks, whereas predict and prescribe focus on analysing future throughput bottlenecks. The paper also provides future research topics and practical recommendations which may help to further push the boundaries of the theoretical and practical use of AI in throughput bottleneck analysis.
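
    As a minimal sketch of the "identify" category, the Python snippet below applies one common data-driven heuristic: the machine with the longest average active period is flagged as the likely throughput bottleneck. The machine names and log durations are invented for illustration and are not drawn from the reviewed studies.

        # Minimal sketch of the "identify" category: detect a throughput
        # bottleneck from machine-state logs using the average active
        # (uninterrupted busy) period per machine. Log data is invented.

        from statistics import mean

        # Durations (minutes) of consecutive active periods per machine,
        # as might be extracted from shop-floor event logs.
        active_periods = {
            "M1": [12, 15, 11, 14],
            "M2": [30, 28, 35, 33],   # long active periods: likely bottleneck
            "M3": [9, 10, 8, 12],
        }

        avg_active = {m: mean(p) for m, p in active_periods.items()}
        bottleneck = max(avg_active, key=avg_active.get)

        for machine, value in sorted(avg_active.items()):
            print(f"{machine}: average active period {value:.1f} min")
        print("likely throughput bottleneck:", bottleneck)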