
    Transfer Learning for Improving Model Predictions in Highly Configurable Software

    Modern software systems are built to be used in dynamic environments, using configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of such systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact, and we end up learning over an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: instead of taking measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate the performance of the real system at low cost. We define a cost model that transforms the traditional view of model learning into a multi-objective problem that takes into account not only model accuracy but also measurement effort. We evaluate our cost-aware transfer learning solution using real-world configurable software including (i) a robotic system, (ii) 3 different stream processing applications, and (iii) a NoSQL database system. The experimental results demonstrate that our approach can achieve (a) high prediction accuracy as well as (b) high model reliability. Comment: To be published in the proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS'17).
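    The cost-aware idea described above can be sketched roughly as follows. This is a minimal illustration under our own assumptions (Ridge regression, a simple weighted-sum objective, and hypothetical function names), not the authors' implementation:

```python
# Sketch: learn a performance model mostly from cheap simulator samples plus a
# few expensive real measurements, and score the accuracy/effort trade-off.
# All names, weights, and the choice of Ridge regression are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

def fit_transfer_model(X_sim, y_sim, X_real, y_real, real_weight=5.0):
    """Fit one model on simulator and real samples, up-weighting real data."""
    X = np.vstack([X_sim, X_real])
    y = np.concatenate([y_sim, y_real])
    w = np.concatenate([np.ones(len(y_sim)),
                        np.full(len(y_real), real_weight)])
    return Ridge(alpha=1.0).fit(X, y, sample_weight=w)

def combined_cost(prediction_error, n_real_samples, cost_per_sample, alpha=0.5):
    """Multi-objective view: weighted sum of model error and measurement effort."""
    return alpha * prediction_error + (1 - alpha) * n_real_samples * cost_per_sample
```

    In this reading, different sampling plans (how many real measurements to take) are compared by their combined cost rather than by accuracy alone.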

    Quantifying Structural Attributes of System Decompositions in 28 Feature-oriented Software Product Lines: An Exploratory Study

    Background: A key idea of feature orientation is to decompose a software product line along the features it provides. Feature decomposition is orthogonal to object-oriented decomposition: it crosscuts the underlying package and class structure. It has often been argued that feature decomposition improves system structure (reduced coupling, increased cohesion). However, recent empirical findings suggest that this is not necessarily the case, which is the motivation for our empirical investigation. Aim: In fact, there is little empirical evidence on how the alternative decompositions of feature orientation and object orientation compare to each other in terms of their association with observable properties of system structure (coupling, cohesion). This motivated us to empirically investigate and compare the properties of three decompositions (object-oriented, feature-oriented, and their intersection) of 28 feature-oriented software product lines. Method: In an exploratory, observational study, we quantify internal attributes, such as import coupling and cohesion, to describe and analyze the different decompositions of a feature-oriented product line in a systematic, reproducible, and comparable manner. For this purpose, we use three established software measures (CBU, IUD, EUD) as well as standard distribution statistics (e.g., the Gini coefficient). Results: First, feature decomposition is associated with higher levels of structural coupling in a product line than a decomposition into classes. Second, although coupling is concentrated in feature decompositions, there are not necessarily hot-spot features. Third, the cohesion of feature modules is not necessarily higher than class cohesion, whereas feature modules serve more dependencies internally than classes. Fourth, coupling and cohesion measurements show potential for sampling optimization in complex static and dynamic product-line analyses (product-line type checking, feature-interaction detection). Conclusions: Our empirical study raises critical questions about alleged advantages of feature decomposition. At the same time, we demonstrate how the measurement of structural attributes can facilitate static and dynamic analyses of software product lines. (authors' abstract) Series: Technical Reports / Institute for Information Systems and New Media
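    One of the distribution statistics mentioned above, the Gini coefficient, can be computed over per-feature coupling counts to see how concentrated coupling is. The sketch below is our own illustration (hypothetical data, not the study's tooling):

```python
# Sketch: Gini coefficient over per-feature import-coupling counts.
# A value near 0 means coupling is spread evenly across features; a value
# near 1 would hint at hot-spot features. The counts below are made up.
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values, using the sorted-index formula."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0.0:
        return 0.0
    index = np.arange(1, n + 1)
    return (2.0 * np.sum(index * x)) / (n * x.sum()) - (n + 1.0) / n

coupling_per_feature = [1, 1, 2, 3, 5, 21]   # hypothetical CBU-like counts
print(gini(coupling_per_feature))
```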

    Lightweight, semi-automatic variability extraction: a case study on scientific computing

    In scientific computing, researchers often use feature-rich software frameworks to simulate physical, chemical, and biological processes. Commonly, researchers follow a clone-and-own approach: copying the code of an existing, similar simulation and adapting it to the new simulation scenario. In this process, a user has to select suitable artifacts (e.g., classes) from the given framework and replace the existing artifacts in the cloned simulation. This manual process incurs substantial effort and cost, as scientific frameworks are complex and provide large numbers of artifacts. To support researchers in this area, we propose a lightweight API-based analysis approach, called VORM, that recommends appropriate artifacts as possible alternatives for replacing given artifacts. Such alternative artifacts can speed up the simulation or make it amenable to other use cases, without modifying the overall structure of the simulation. We evaluate the practicality of VORM (which is very lightweight but possibly imprecise) by means of a case study on the DUNE numerics framework and two physics simulations. Specifically, we compare the recommendations by VORM with recommendations by a domain expert (a developer of DUNE). VORM recommended 34 out of the 37 artifacts proposed by the expert. In addition, it recommended 2 artifacts that are applicable but were missed by the expert, as well as 32 artifacts not suggested by the expert that are nevertheless applicable in the simulation scenario with slight modifications. Diving deeper into the results, we identified an undiscovered bug and an inconsistency in DUNE, which corroborates the usefulness of VORM.
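    An API-based recommendation of this kind could, for instance, rank framework classes by how much their public interface overlaps with the artifact to be replaced. The sketch below is only in the spirit of VORM; the similarity measure, threshold, and class/method names are our own illustrative assumptions, not VORM's actual algorithm:

```python
# Sketch: rank framework classes as replacement candidates by Jaccard overlap
# of their method-signature sets. All APIs and the threshold are hypothetical.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend_alternatives(target_api: set, framework_apis: dict, threshold=0.5):
    """framework_apis maps class name -> set of public method signatures."""
    scored = [(name, jaccard(target_api, api)) for name, api in framework_apis.items()]
    return sorted([(n, s) for n, s in scored if s >= threshold],
                  key=lambda pair: pair[1], reverse=True)

# Usage with made-up, DUNE-flavored class APIs:
apis = {
    "StructuredGrid": {"globalRefine(int)", "leafGridView()", "comm()"},
    "UnstructuredGrid": {"globalRefine(int)", "leafGridView()", "loadBalance()"},
}
print(recommend_alternatives({"globalRefine(int)", "leafGridView()"}, apis))
```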

    Towards real-time data integration and analysis for embedded devices

    In complex systems, e.g., logistics hubs, cars, or factories, there is a need for real-time decision support. With present approaches, transferring, storing, and subsequently analyzing data in real time is not possible. One reason is that data sources can fail, interrupting the information flow. Furthermore, the amount of data generated by streams from different data sources, e.g., sensors, relational databases, and mobile devices, varies widely. We propose an architecture that addresses these problems. To this end, we introduce a classification of data sources that enables appropriate handling of each source's specific properties.
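    A source classification of this kind might, for example, group sources by data volume and reliability and pick a handling strategy per class. The sketch below is our own illustration of the idea, not the paper's architecture; all categories and strategies are assumptions:

```python
# Sketch: classify data sources and dispatch a handling strategy per class,
# e.g. buffering for unreliable sources vs. windowing for high-volume streams.
from dataclasses import dataclass
from enum import Enum

class Volume(Enum):
    LOW = 1        # e.g., individual sensors
    HIGH = 2       # e.g., high-frequency streams

class Reliability(Enum):
    STABLE = 1         # e.g., relational databases
    INTERMITTENT = 2   # e.g., mobile devices that may disconnect

@dataclass
class SourceClass:
    volume: Volume
    reliability: Reliability

def handling_strategy(src: SourceClass) -> str:
    if src.reliability is Reliability.INTERMITTENT:
        return "buffer locally and replay on reconnect"
    return "window and aggregate" if src.volume is Volume.HIGH else "poll periodically"

print(handling_strategy(SourceClass(Volume.HIGH, Reliability.INTERMITTENT)))
```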