
    Using Genetic Programming to Build Self-Adaptivity into Software-Defined Networks

    Self-adaptation solutions need to periodically monitor, reason about, and adapt a running system. The adaptation step involves generating an adaptation strategy and applying it to the running system whenever an anomaly arises. In this article, we argue that, rather than generating individual adaptation strategies, the goal should be to adapt the control logic of the running system in such a way that the system itself would learn how to steer clear of future anomalies, without triggering self-adaptation too frequently. While the need for adaptation is never eliminated, especially noting the uncertain and evolving environment of complex systems, reducing the frequency of adaptation interventions is advantageous for various reasons, e.g., to increase performance and to make a running system more robust. We instantiate and empirically examine the above idea for software-defined networking -- a key enabling technology for modern data centres and Internet of Things applications. Using genetic programming (GP), we propose a self-adaptation solution that continuously learns and updates the control constructs in the data-forwarding logic of a software-defined network. Our evaluation, performed using open-source synthetic and industrial data, indicates that, compared to a baseline adaptation technique that attempts to generate individual adaptations, our GP-based approach is more effective in resolving network congestion, and further, reduces the frequency of adaptation interventions over time. In addition, we show that, for networks with the same topology, reusing over larger networks the knowledge that is learned on smaller networks leads to significant improvements in the performance of our GP-based adaptation approach. Finally, we compare our approach against a standard data-forwarding algorithm from the network literature, demonstrating that our approach significantly reduces packet loss. Comment: arXiv admin note: text overlap with arXiv:2205.0435
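
    The abstract does not give the GP encoding, so the following is only a minimal Python sketch of the general idea: an evolutionary loop that evolves a link-scoring expression used by a hypothetical data-forwarding policy over synthetic traffic snapshots. The tree representation, link metrics, and fitness function are assumptions for illustration, not the paper's actual constructs.

        # Illustrative sketch only: a tiny genetic-programming loop that evolves a
        # link-scoring expression used by a (hypothetical) data-forwarding policy.
        # The tree encoding, fitness function, and synthetic link data are assumptions.
        import random
        import operator

        OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
        VARS = ['utilization', 'queue_len', 'latency']   # per-link metrics (assumed)

        def random_tree(depth=3):
            """Grow a random expression tree over the link metrics."""
            if depth <= 0 or random.random() < 0.3:
                return random.choice(VARS) if random.random() < 0.7 else round(random.uniform(0, 2), 2)
            op = random.choice(list(OPS))
            return (op, random_tree(depth - 1), random_tree(depth - 1))

        def evaluate(tree, link):
            """Score one candidate link with the evolved expression."""
            if isinstance(tree, str):
                return link[tree]
            if isinstance(tree, (int, float)):
                return tree
            op, left, right = tree
            return OPS[op](evaluate(left, link), evaluate(right, link))

        def fitness(tree, traffic):
            """Lower is better: mean utilization of the links the policy selects."""
            total = 0.0
            for candidate_links in traffic:
                chosen = min(candidate_links, key=lambda l: evaluate(tree, l))
                total += chosen['utilization']
            return total / len(traffic)

        def mutate(tree, depth=3):
            """Replace a random subtree with a freshly grown one."""
            if random.random() < 0.2 or not isinstance(tree, tuple):
                return random_tree(depth)
            op, left, right = tree
            return (op, mutate(left, depth - 1), right)

        def crossover(a, b):
            """Very simple subtree crossover: graft (part of) b into a."""
            if not isinstance(a, tuple) or random.random() < 0.3:
                return b if isinstance(b, tuple) else a
            op, left, right = a
            return (op, crossover(left, b), right)

        def evolve(traffic, pop_size=30, generations=40):
            population = [random_tree() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=lambda t: fitness(t, traffic))
                survivors = population[:pop_size // 2]
                children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                            for _ in range(pop_size - len(survivors))]
                population = survivors + children
            return min(population, key=lambda t: fitness(t, traffic))

        # Synthetic "traffic snapshots": each entry lists the candidate next-hop links.
        random.seed(0)
        traffic = [[{v: random.random() for v in VARS} for _ in range(4)] for _ in range(50)]
        print('evolved scoring expression:', evolve(traffic))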

    Predicting Software Performance with Divide-and-Learn

    Predicting the performance of highly configurable software systems is the foundation for performance testing and quality assurance. To that end, recent work has been relying on machine/deep learning to model software performance. However, a crucial yet unaddressed challenge is how to cater for the sparsity inherited from the configuration landscape: the influence of configuration options (features) and the distribution of data samples are highly sparse. In this paper, we propose an approach based on the concept of 'divide-and-learn', dubbed DaL. The basic idea is that, to handle sample sparsity, we divide the samples from the configuration landscape into distant divisions, for each of which we build a regularized Deep Neural Network as the local model to deal with the feature sparsity. A newly given configuration is then assigned to the corresponding division and its local model for the final prediction. Experiment results from eight real-world systems and five sets of training data reveal that, compared with the state-of-the-art approaches, DaL performs no worse than the best counterpart on 33 out of 40 cases (within which 26 cases are significantly better) with up to 1.94× improvement on accuracy; requires fewer samples to reach the same/better accuracy; and produces acceptable training overhead. Practically, DaL also considerably improves different global models when using them as the underlying local models, which further strengthens its flexibility. To promote open science, all the data, code, and supplementary figures of this work can be accessed at our repository: https://github.com/ideas-labo/DaL. Comment: This paper has been accepted by the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 202
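
    As a rough illustration of the divide-and-learn idea described above -- divide the configuration samples into divisions, train one regularized local model per division, and route a new configuration to its division -- here is a minimal Python sketch. The use of KMeans for the division step, an MLP as the local learner, and the synthetic data are stand-ins chosen for brevity, not DaL's actual pipeline.

        # Sketch of the divide-and-learn idea, not DaL's exact pipeline: samples are
        # partitioned into divisions, a separate (regularized) local model is trained
        # per division, and a new configuration is routed to its division's model.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        X = rng.integers(0, 2, size=(300, 10)).astype(float)   # binary configuration options
        y = X @ rng.normal(size=10) + 5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 300)  # performance

        # 1) Divide: split the configuration samples into k divisions.
        k = 4
        divider = KMeans(n_clusters=k, n_init=10, random_state=1).fit(X)

        # 2) Learn: one regularized local model per division (alpha = L2 penalty).
        local_models = {}
        for d in range(k):
            mask = divider.labels_ == d
            local_models[d] = MLPRegressor(hidden_layer_sizes=(32,), alpha=1e-2,
                                           max_iter=2000, random_state=1).fit(X[mask], y[mask])

        # 3) Predict: route a new configuration to its division's local model.
        def predict(config):
            d = int(divider.predict(config.reshape(1, -1))[0])
            return float(local_models[d].predict(config.reshape(1, -1))[0])

        new_config = rng.integers(0, 2, size=10).astype(float)
        print('predicted performance:', predict(new_config))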

    DATESSO: Self-Adapting Service Composition with Debt-Aware Two Levels Constraint Reasoning

    The rapidly changing workload of service-based systems can easily cause under-/over-utilization of the component services, which can consequently affect the overall Quality of Service (QoS), such as latency. Self-adaptive service composition rectifies this problem, but poses several challenges: (i) the effectiveness of adaptation can deteriorate due to over-optimistic assumptions on the latency and utilization constraints, at both local and global levels; and (ii) the benefits brought by each composition plan are often short-term and not designed with long-term benefit in mind -- a natural prerequisite for sustaining the system. To tackle these issues, we propose a two-level constraint reasoning framework for sustainable self-adaptive service composition, called DATESSO. In particular, DATESSO consists of a refined formulation that differentiates the "strictness" of latency/utilization constraints at two levels. To strive for long-term benefits, DATESSO leverages the concept of technical debt and time-series prediction to model the utility contribution of the component services in the composition. The approach embeds a debt-aware two-level constraint reasoning algorithm in DATESSO to improve the efficiency, effectiveness, and sustainability of self-adaptive service composition. We evaluate DATESSO on a service-based system with the real-world WS-DREAM dataset, comparing it with other state-of-the-art approaches. The results demonstrate the superiority of DATESSO over the others in utilization, latency, and running time, while also being likely to be more sustainable. Comment: Accepted to SEAMS '20. Please use the following citation: Satish Kumar, Tao Chen, Rami Bahsoon, and Rajkumar Buyya. DATESSO: Self-Adapting Service Composition with Debt-Aware Two Levels Constraint Reasoning. In IEEE/ACM 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, Oct 7-8, 2020, Seoul, Korea.
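
    To make the debt-aware idea a little more concrete, the following is a toy Python sketch in which a composition plan's immediate utility is discounted by a "debt" accumulated over a naive time-series forecast. The utility function, the debt formula, the moving-average forecast, and all numbers are invented for illustration and are not DATESSO's formulation.

        # Illustrative sketch of a debt-aware comparison between composition plans;
        # the utility and debt formulas, the naive moving-average forecast, and the
        # numbers below are assumptions, not DATESSO's actual reasoning.
        def forecast(history, horizon=3):
            """Naive time-series prediction: repeat the recent moving average."""
            avg = sum(history[-3:]) / 3
            return [avg] * horizon

        def utility(latency, utilization):
            """Toy utility: reward low latency and moderate utilization."""
            return 1.0 / latency - abs(utilization - 0.6)

        def long_term_value(plan, horizon=3):
            """Immediate utility minus the 'debt' accumulated over the forecast horizon."""
            now = utility(plan['latency'][-1], plan['utilization'][-1])
            future = zip(forecast(plan['latency'], horizon), forecast(plan['utilization'], horizon))
            debt = sum(max(0.0, plan['switch_cost'] - utility(l, u)) for l, u in future)
            return now - debt

        plans = {
            'A': {'latency': [0.8, 0.9, 1.1], 'utilization': [0.5, 0.55, 0.6], 'switch_cost': 0.2},
            'B': {'latency': [0.6, 0.6, 0.7], 'utilization': [0.7, 0.8, 0.9], 'switch_cost': 1.5},
        }
        best = max(plans, key=lambda p: long_term_value(plans[p]))
        print('selected composition plan:', best)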

    Learning Very Large Configuration Spaces: What Matters for Linux Kernel Sizes

    Linux kernels are used in a wide variety of appliances, many of them having strong requirements on the kernel size due to constraints such as limited memory or instant boot. With more than ten thousand configuration options to choose from, obtaining a suitable trade-off between kernel size and functionality is an extremely hard problem. Developers, contributors, and users actually spend significant effort to document, understand, and eventually tune (combinations of) options to meet a target kernel size. In this paper, we investigate how machine learning can help explain what matters for predicting a given Linux kernel size. Unveiling what matters in such a very large configuration space is challenging for two reasons: (1) however much time we spend, we can only build and measure a tiny fraction of the possible kernel configurations; (2) the prediction model should be both accurate and interpretable. We compare different machine learning algorithms and demonstrate the benefits of specific feature encoding and selection methods to learn an accurate model that is fast to compute and simple to interpret. Our results are validated over 95,854 kernel configurations and show that we can achieve low prediction errors over a reduced set of options. We also show that we can extract interpretable information for refining documentation and experts' knowledge of Linux, or even for assigning more sensible default values to options.
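
    A minimal Python sketch of the kind of pipeline the abstract alludes to -- encode tri-state options, let a sparse linear learner select what matters, and read the model's coefficients for interpretation -- is shown below. The synthetic options, sizes, and the specific choice of one-hot encoding with LassoCV are assumptions, not the study's actual setup.

        # Sketch of predicting a kernel-size-like target from configuration options,
        # with one-hot encoding of tri-state options and Lasso-based feature selection.
        # The synthetic options and sizes are made up; this is not the study's pipeline.
        import numpy as np
        import pandas as pd
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.linear_model import LassoCV
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        options = [f'CONFIG_OPT_{i}' for i in range(50)]
        values = rng.choice(['y', 'n', 'm'], size=(1000, 50))          # tri-state options
        df = pd.DataFrame(values, columns=options)

        # Only a handful of options really drive the size in this synthetic ground truth.
        size = (2000
                + 800 * (df['CONFIG_OPT_3'] == 'y')
                + 300 * (df['CONFIG_OPT_7'] != 'n')
                + rng.normal(0, 50, 1000))

        encoder = OneHotEncoder(sparse_output=False)
        X = encoder.fit_transform(df)
        X_train, X_test, y_train, y_test = train_test_split(X, size, random_state=0)

        model = LassoCV(cv=5).fit(X_train, y_train)
        print('test R^2:', round(model.score(X_test, y_test), 3))

        # Interpretability: rank encoded option values by absolute coefficient.
        names = encoder.get_feature_names_out(options)
        top = sorted(zip(names, model.coef_), key=lambda t: abs(t[1]), reverse=True)[:5]
        for name, coef in top:
            print(f'{name}: {coef:+.1f} KB')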

    Microservices and Machine Learning Algorithms for Adaptive Green Buildings

    In recent years, the use of services for Open Systems development has consolidated and strengthened. Advances in the Service Science and Engineering (SSE) community, promoted by the reinforcement of Web Services and Semantic Web technologies and the emergence of new Cloud computing techniques, such as the proliferation of microservices solutions, have allowed software architects to experiment with and develop new ways of building open computer systems that adapt at runtime. Home automation, intelligent buildings, robotics, and graphical user interfaces are some of the environments in which these innovative trends can be applied. This paper presents a schema for the adaptation of Dynamic Computer Systems (DCS) using interdisciplinary techniques from model-driven engineering, service engineering, and soft computing. The proposal manages an orchestrated microservices schema for adapting component-based software architectural systems at runtime. This schema has been developed as a three-layer adaptive transformation process that is supported by a rule-based decision-making service implemented by means of Machine Learning (ML) algorithms. The experimental development was carried out at the Solar Energy Research Center (CIESOL), applying the proposed microservices schema to adapt architectural atmosphere systems in green buildings.
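
    As a toy illustration of an ML-backed, rule-like decision-making service of the kind the schema relies on, the sketch below maps sensor readings to an adaptation action with a small decision tree. The features, actions, and training data are invented; the paper's actual services, models, and deployment are not reproduced here.

        # Minimal sketch of an ML-backed decision-making service: sensor readings in,
        # adaptation action out. Features, actions, and data are invented for illustration.
        from sklearn.tree import DecisionTreeClassifier

        # (temperature C, solar radiation W/m2, occupancy) -> adaptation action
        X_train = [
            [30, 800, 1], [29, 750, 1], [18, 100, 1], [17, 90, 0],
            [26, 600, 0], [22, 300, 1], [35, 900, 1], [15, 50, 0],
        ]
        y_train = ['activate_cooling', 'activate_cooling', 'activate_heating', 'standby',
                   'standby', 'keep', 'activate_cooling', 'standby']

        decision_service = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

        def decide(temperature, radiation, occupancy):
            """What a runtime adaptation layer might call before transforming the architecture."""
            return decision_service.predict([[temperature, radiation, occupancy]])[0]

        print(decide(31, 820, 1))   # e.g. 'activate_cooling'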

    Partially-Observable Security Games for Automating Attack-Defense Analysis

    Network systems often contain vulnerabilities that remain unfixed in a network for various reasons, such as the lack of a patch or of the knowledge needed to fix them. In the presence of such residual vulnerabilities, the network administrator should properly react to malicious activities or proactively prevent them by applying suitable countermeasures that minimize the likelihood of an attack. In this paper, we propose a stochastic game-theoretic approach for analyzing network security and synthesizing defense strategies to protect a network. To support analysis under partial observation, where some of the attacker's activities are unobservable or undetectable by the defender, we construct a one-sided partially observable security game and transform it into a perfect game for further analysis. We prove that this transformation is sound for a sub-class of security games and a subset of properties specified in the logic rPATL. We implement a prototype that fully automates our approach, and evaluate it by conducting experiments on a real-life network.
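
    The full construction (partially observable stochastic games checked against rPATL properties) is beyond a short snippet, but the basic idea of synthesizing a randomized defense strategy can be illustrated with a toy zero-sum defender-versus-attacker matrix game solved by linear programming. The actions, payoff numbers, and the omission of partial observability are all simplifications for illustration, not the paper's model.

        # Toy zero-sum defender-vs-attacker game solved by linear programming; it only
        # illustrates synthesizing a randomized defense strategy, not the paper's
        # partially observable stochastic games or rPATL model checking.
        # Rows: defender countermeasures; columns: attacker actions.
        # Entry = expected damage (defender minimizes, attacker maximizes).
        import numpy as np
        from scipy.optimize import linprog

        damage = np.array([
            [2.0, 5.0, 1.0],   # patch service A
            [4.0, 1.0, 3.0],   # isolate subnet
            [3.0, 3.0, 2.0],   # deploy honeypot
        ])

        # Defender LP: choose mixed strategy x and value v, minimizing v
        # s.t. for every attacker action j: sum_i x_i * damage[i, j] <= v.
        n = damage.shape[0]
        c = np.r_[np.zeros(n), 1.0]                        # minimize v
        A_ub = np.c_[damage.T, -np.ones(damage.shape[1])]  # damage^T x - v <= 0
        b_ub = np.zeros(damage.shape[1])
        A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)       # probabilities sum to 1
        b_eq = [1.0]
        bounds = [(0, None)] * n + [(None, None)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        strategy, game_value = res.x[:n], res.x[n]
        print('optimal mixed defense:', np.round(strategy, 3),
              'worst-case damage:', round(game_value, 3))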

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering, FASE 2021, which took place during March 27–April 1, 2021, and was held as part of the Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 16 full papers presented in this volume were carefully reviewed and selected from 52 submissions. The book also contains 4 Test-Comp contributions.

    Quantitative Analyses of Software Product Lines

    A software product line (SPL) is a family of related software systems that are jointly developed and reuse a set of shared assets. Each individual software system in an SPL is called a software product and includes a set of mandatory and optional features, which are independent units of functionality. Software-analysis techniques, such as model checking, analyze a model of a software system to determine whether the software system satisfies its requirements. Because many software-analysis techniques are computationally intensive, and the number of software products in an SPL grows exponentially with the number of features, it tends to be very time-consuming to analyze each product of an SPL individually. Family-based analyses have adapted standard software-analysis techniques (e.g., model checking, type checking) to simultaneously analyze all of the software products in an SPL, reusing partial analysis results between different software products to speed up the analysis. However, these family-based analyses verify only the functional requirements of an SPL, and we are interested in analyzing the quality of service that different software products in an SPL would exhibit. Quantitative analyses of a software-system model (e.g., of a weighted transition system) can estimate how long a system will take to reach its goal, how much energy a system will consume, and so on. Quantitative analyses are known to be computationally intensive. In this thesis, we investigate whether executing a family-based quantitative analysis on a model of an SPL is faster than individually analyzing every software product of the SPL.

    First, we present a family-based trace-checking analysis that facilitates the reconfiguration of a dynamic software product line (DSPL), which is a type of SPL in which features can be activated or deactivated at runtime. In three case studies, we assessed whether executing the family-based trace-checking analysis is faster than executing the trace-checking analysis on every software product. Our results indicated that the family-based trace-checking analysis, when combined with simple data abstraction over an SPL model's quality-attribute values to facilitate sharing of partial-analysis results, is between 1.4 and 7.7 times faster than individually analyzing each software product. This suggests that abstraction over the quality-attribute values is key to making family-based trace-checking analysis efficient.

    Second, we consider an SPL's maximum long-term average value of a quality attribute (e.g., because it represents the long-term rate of energy consumption of the system). Specifically, the maximum limit-average cost of a weighted transition system represents an upper bound on the long-term average value of a quality attribute over an infinite execution of the system. Because computing the maximum limit-average cost of a software system is computationally intensive, we developed a family-based analysis that simultaneously computes the maximum limit-average cost for each software product in an SPL. We assessed its performance compared to individually analyzing each software product in two case studies. Our results suggest that our family-based analysis will perform best in SPLs in which many products share the same set of strongly connected components.

    Finally, because both of our family-based analyses require as input a timed (weighted) behaviour model of a software product line, we present a method to learn such a model. Specifically, the objective is to learn, for each transition t, a regression function that maps a software product to a real-valued weight that represents the duration of transition t's execution in that software product. We apply supervised learning techniques, namely linear regression and regularized linear regression, to learn such functions. We assessed the accuracy of the learnt models against ground truth in two different SPLs and also compared the accuracy of our method against two state-of-the-art methods: Perfume and a performance-influence model. Our results indicate that the accuracy of our learnt models ranged from a mean error of 3.8% to a mean error of 193.0%. Our learnt models were most accurate for those transitions whose execution times had low variance across repeated executions of the transition in the same software product, and in which there is a linear relationship between the transition's execution time and the presence of features in a software product.
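
    As a small illustration of the learning step described above, the Python sketch below fits a plain and a regularized linear model that map a product's feature-presence vector to the execution time of one transition. The synthetic feature vectors and timings, and the choice of Ridge as the regularized variant, are assumptions for illustration only.

        # Sketch of learning a per-transition weight function: map a product's feature
        # vector (1 = feature present) to the transition's execution time, using plain
        # and regularized linear regression. Data and learner choices are illustrative.
        import numpy as np
        from sklearn.linear_model import LinearRegression, Ridge

        rng = np.random.default_rng(42)
        n_products, n_features = 40, 6
        products = rng.integers(0, 2, size=(n_products, n_features))       # feature presence
        # Ground truth for transition t: base cost plus per-feature contributions + noise.
        true_weights = np.array([5.0, 0.0, 2.5, 0.0, 1.0, 0.5])
        durations = 10.0 + products @ true_weights + rng.normal(0, 0.3, n_products)

        plain = LinearRegression().fit(products, durations)
        regularized = Ridge(alpha=1.0).fit(products, durations)

        new_product = np.array([[1, 0, 1, 0, 1, 1]])
        print('plain prediction      :', float(plain.predict(new_product)[0]))
        print('regularized prediction:', float(regularized.predict(new_product)[0]))
        print('learned contributions :', np.round(regularized.coef_, 2))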