
    A Runtime Verification and Validation Framework for Self-Adaptive Software

    The concepts that make self-adaptive software attractive also make it more difficult for users to gain confidence that these systems will consistently meet their goals under uncertain contexts. To improve user confidence in self-adaptive behavior, machine-readable conceptual models have been developed to instrument the adaptation behavior of the target software system and its primary feedback loop. By comparing these machine-readable models to the self-adaptive system, runtime verification and validation may be introduced as another method to increase confidence in self-adaptive systems; however, the existing conceptual models do not provide the semantics needed to institute this runtime verification or validation. This research confirms that the introduction of runtime verification and validation for self-adaptive systems requires the expansion of existing conceptual models with quality-of-service metrics, a hierarchy of goals, and states with temporal transitions. Based on these expanded semantics, runtime verification and validation was introduced as a second-level feedback loop to improve the performance of the primary feedback loop and to quantitatively measure the quality of service achieved in a state-based, self-adaptive system. A web-based purchasing application running in a cloud-based environment was the focus of experimentation. To meet changing customer purchasing demand, the self-adaptive system monitored external context changes and increased or decreased the number of available application servers. The runtime verification and validation system operated as a second-level feedback loop that monitored quality-of-service goals based on internal context and corrected self-adaptive behavior when goals were violated. Two competing quality-of-service goals were introduced to maintain customer satisfaction while minimizing cost. The research demonstrated that the addition of a second-level runtime verification and validation feedback loop quantitatively improved self-adaptive system performance even with simple, static monitoring rules.
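
    A minimal sketch of the second-level feedback-loop idea described above, assuming a scale-out/scale-in decision over application servers and two competing goals (response time versus cost). The goal thresholds, metric names, and function signature are illustrative assumptions, not the system's actual interfaces.

        # Hypothetical second-level verification/validation loop: it checks current
        # metrics against two competing quality-of-service goals and, when a goal is
        # violated, overrides the primary loop's server-count decision.

        def second_level_loop(metrics, server_count, min_servers=1, max_servers=10):
            """Return a (possibly corrected) number of application servers."""
            satisfaction_ok = metrics["avg_response_ms"] <= 500      # customer-satisfaction goal
            cost_ok = server_count <= metrics["budgeted_servers"]    # cost-minimization goal

            if not satisfaction_ok and server_count < max_servers:
                return server_count + 1    # scale out despite cost pressure
            if satisfaction_ok and not cost_ok and server_count > min_servers:
                return server_count - 1    # scale in to restore the cost goal
            return server_count            # the primary loop's decision stands

        # Example: the response-time goal is violated, so the loop adds a server.
        print(second_level_loop({"avg_response_ms": 820, "budgeted_servers": 3}, server_count=3))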

    Self-Adaptive Performance Monitoring for Component-Based Software Systems

    Effective monitoring of a software system’s runtime behavior is necessary to evaluate compliance with performance objectives. This thesis emerged in the context of the Kieker framework, which addresses application performance monitoring. The contribution is a self-adaptive performance monitoring approach that allows the monitoring coverage to be adapted dynamically at runtime. The monitoring data includes performance measures such as throughput and response-time statistics, the utilization of system resources, and the inter- and intra-component control flow. Based on this data, performance anomaly scores are computed using time series analysis and clustering methods. The self-adaptive performance monitoring approach reduces the business-critical failure diagnosis time, as it saves time-consuming manual debugging activities. The approach and its underlying anomaly scores are extensively evaluated in lab experiments.
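
    A loose sketch of the idea behind anomaly-driven adaptive coverage: score the latest response time against recent history and widen or narrow the set of instrumented operations accordingly. The scoring rule, thresholds, and operation names are assumptions for illustration; Kieker's actual analysis is considerably more elaborate.

        from statistics import mean, pstdev

        def anomaly_score(history, latest):
            """Z-score-like deviation of the latest response time from recent history."""
            if len(history) < 2:
                return 0.0
            mu, sigma = mean(history), pstdev(history)
            return abs(latest - mu) / sigma if sigma > 0 else 0.0

        def adapt_coverage(monitored_ops, all_ops, score, high=3.0, low=1.0):
            """Expand instrumentation on anomalies, shrink it when behavior looks normal."""
            if score > high:
                return set(all_ops)                   # full coverage for diagnosis
            if score < low and len(monitored_ops) > 1:
                return set(list(monitored_ops)[:1])   # keep only a coarse entry probe
            return monitored_ops

        history = [102, 98, 105, 99, 101]             # recent response times in ms
        score = anomaly_score(history, latest=240)
        print(round(score, 1), adapt_coverage({"checkout"}, {"checkout", "cart", "db.query"}, score))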

    Continuous Integration of Architectural Performance Models with Parametric Dependencies – The CIPM Approach

    Explicitly considering the software architecture supports efficient assessments of quality attributes. In particular, Architecture-based Performance Prediction (AbPP) supports performance assessment for future scenarios (e.g., alternative workloads, designs, or deployments) without expensive measurements for all such alternatives. However, accurate AbPP requires an up-to-date architectural Performance Model (aPM) that is parameterized over factors impacting performance, such as input data characteristics. Especially in agile development, keeping such a parametric aPM consistent with the software artifacts is challenging due to frequent evolutionary, adaptive, and usage-related changes. The shortcoming of existing approaches is their limited scope of consistency maintenance: they do not address the impact of all the aforementioned changes. Besides, extracting the aPM by static and/or dynamic analysis after each impacting change would cause unnecessary monitoring overhead and may overwrite previous manual adjustments. In this article, we present our Continuous Integration of architectural Performance Models (CIPM) approach, which automatically updates the parametric aPM after each evolutionary, adaptive, or usage-related change. To reduce the monitoring overhead, CIPM calibrates only the affected performance parameters (e.g., resource demands), using adaptive monitoring. Moreover, CIPM proposes a self-validation process that validates the accuracy, manages the monitoring, and recalibrates the inaccurate parts. As a result, CIPM automatically keeps the aPM up to date throughout development time and operation time, which enables AbPP for proactively identifying upcoming performance problems and evaluating alternatives at low cost. CIPM is evaluated using three case studies, considering (1) the accuracy of the updated aPMs and the associated AbPP and (2) the applicability of CIPM in terms of scalability and the required monitoring overhead.
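
    An illustrative sketch (not CIPM's actual API) of the core idea: after a change, recalibrate only the performance parameters of the affected components, then self-validate and flag the parts of the model that are still inaccurate for recalibration. Component names, the dictionary-based model, and the tolerance are assumptions.

        def recalibrate(apm, affected, measurements):
            """Update resource demands only for components touched by the last change."""
            for component in affected:
                if component in measurements:
                    apm[component]["resource_demand_ms"] = measurements[component]
            return apm

        def self_validate(apm, observed, tolerance=0.10):
            """Return components whose predicted demand deviates beyond the tolerance."""
            inaccurate = []
            for component, params in apm.items():
                predicted = params["resource_demand_ms"]
                error = abs(predicted - observed[component]) / observed[component]
                if error > tolerance:
                    inaccurate.append(component)
            return inaccurate

        apm = {"Checkout": {"resource_demand_ms": 12.0}, "Search": {"resource_demand_ms": 30.0}}
        apm = recalibrate(apm, affected=["Checkout"], measurements={"Checkout": 18.5})
        print(self_validate(apm, observed={"Checkout": 18.0, "Search": 45.0}))  # -> ['Search']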

    Software Runtime Monitoring with Adaptive Sampling Rate to Collect Representative Samples of Execution Traces

    Monitoring software systems at runtime is key for understanding workloads, debugging, and self-adaptation. It typically involves collecting and storing observable software data, which can be analyzed online or offline. Despite the usefulness of collecting system data, it may significantly impact the system execution by delaying response times and competing for system resources. The typical approach to cope with this is to filter the portions of the system to be monitored and to sample the data. Although these approaches are a step towards achieving a desired trade-off between the amount of collected information and the impact on system performance, they focus on collecting data of a particular type or may capture a sample that does not correspond to the actual system behavior. In response, we propose an adaptive runtime monitoring process that dynamically adapts the sampling rate while monitoring software systems. It includes algorithms with statistical foundations to improve the representativeness of collected samples without compromising system performance. Our evaluation targets five applications of a widely used benchmark. It shows that the error (RMSE) of the samples collected with our approach is 9-54% lower than that of the main alternative strategy (a sampling rate inversely proportional to the throughput), with a 1-6% higher performance impact. (Published in the Journal of Systems and Software.)
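
    A minimal sketch in the spirit of the abstract, assuming one standard statistically grounded technique: a fixed-budget reservoir sampler keeps a uniform, and hence representative, sample of execution traces regardless of throughput fluctuations. The paper's actual algorithms are not spelled out in the abstract, so this is an illustrative stand-in rather than the proposed method.

        import random

        class TraceReservoir:
            def __init__(self, capacity, seed=None):
                self.capacity = capacity
                self.samples = []
                self.seen = 0
                self.rng = random.Random(seed)

            def offer(self, trace):
                """Offer one trace; every trace seen so far has equal probability of being kept."""
                self.seen += 1
                if len(self.samples) < self.capacity:
                    self.samples.append(trace)
                else:
                    j = self.rng.randrange(self.seen)
                    if j < self.capacity:
                        self.samples[j] = trace

        reservoir = TraceReservoir(capacity=100, seed=42)
        for i in range(10_000):                        # simulate a bursty trace stream
            reservoir.offer({"trace_id": i, "latency_ms": 5 + (i % 7)})
        print(len(reservoir.samples), reservoir.seen)  # -> 100 10000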

    Towards critical event monitoring, detection and prediction for self-adaptive future Internet applications

    The Future Internet (FI) will be composed of a multitude of diverse types of services that offer flexible, remote access to software features, content, computing resources, and middleware solutions through different cloud delivery models, such as IaaS, PaaS, and SaaS. Ultimately, this means that loosely coupled Internet services will form a comprehensive base for developing value-added applications in an agile way. Unlike traditional application development, which uses computing resources and software components under local administrative control, FI applications will thus strongly depend on third-party services. To maintain their quality of service, those applications therefore need to dynamically and autonomously adapt to an unprecedented level of changes that may occur during runtime. In this paper, we present our recent experiences with monitoring, detection, and prediction of critical events for both software services and multimedia applications. Based on these findings, we bring together and introduce potential directions for future research on self-adaptive FI applications.

    Self-Adaptive Decentralized Monitoring in Software-Defined Networks

    The Software-Defined Networking (SDN) paradigm allows network management solutions to automatically and frequently reconfigure network resources. When developing SDN-based management architectures, it is of paramount importance to design a monitoring system that can provide timely and consistent updates to heterogeneous management applications. To support such applications operating with low-latency requirements, the monitoring system should scale with increasing network size and provide precise network views with minimum overhead on the available resources. In this paper we present a novel, self-adaptive, decentralized framework for resource monitoring in SDN. Our framework enables accurate statistics to be collected with limited burden on the network resources. This is realized through a self-tuning, adaptive monitoring mechanism that automatically adjusts its settings based on the traffic dynamics. We evaluate our proposal on a realistic use-case scenario in which a content distribution service and an on-demand gaming platform are deployed within an ISP network. The results show that the proposed framework achieves reduced monitoring latencies, thus enabling shorter reconfiguration control loops. In addition, the proposed adaptive monitoring method achieves a significant reduction in monitoring overhead while preserving the performance of the services considered.
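
    A hedged sketch of one way such self-tuning could look: poll switch statistics more often when traffic changes quickly and back off when it is stable, trading monitoring overhead against view accuracy. The thresholds, interval bounds, and adjustment factors are illustrative; the paper's actual tuning rules are not given in the abstract.

        def next_polling_interval(prev_rate_bps, curr_rate_bps, interval_s,
                                  min_s=1.0, max_s=30.0, threshold=0.2):
            """Shrink the interval when relative traffic change exceeds the threshold."""
            base = max(prev_rate_bps, 1.0)
            change = abs(curr_rate_bps - prev_rate_bps) / base
            if change > threshold:
                return max(min_s, interval_s / 2)    # traffic is dynamic: poll sooner
            return min(max_s, interval_s * 1.5)      # traffic is stable: reduce overhead

        interval = 10.0
        for prev, curr in [(1e6, 1.05e6), (1.05e6, 2.4e6), (2.4e6, 2.5e6)]:
            interval = next_polling_interval(prev, curr, interval)
            print(interval)                          # -> 15.0, then 7.5, then 11.25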

    ACon: A learning-based approach to deal with uncertainty in contextual requirements at runtime

    Context: Runtime uncertainty, such as an unpredictable operational environment and failure of the sensors that gather environmental data, is a well-known challenge for adaptive systems. Objective: To execute requirements that depend on context correctly, the system needs up-to-date knowledge about the context relevant to such requirements. Techniques to cope with uncertainty in contextual requirements are currently underrepresented. In this paper we present ACon (Adaptation of Contextual requirements), a data-mining approach to deal with runtime uncertainty affecting contextual requirements. Method: ACon uses feedback loops to maintain up-to-date knowledge about contextual requirements based on the current context information in which the contextual requirements are valid at runtime. Upon detecting that contextual requirements are affected by runtime uncertainty, ACon analyses and mines the contextual data to (re-)operationalize the context and thereby update the information about contextual requirements. Results: We evaluate ACon in an empirical study of an activity scheduling system used by a crew of four rowers in a wild and unpredictable environment using a complex monitoring infrastructure. Our study focused on evaluating the data-mining part of ACon and analysed the sensor data collected on board from 46 sensors, with 90,748 measurements per sensor. Conclusion: ACon is an important step in dealing with uncertainty affecting contextual requirements at runtime while considering end-user interaction. ACon supports systems in analysing the environment to adapt contextual requirements, and it complements existing requirements monitoring approaches by keeping the requirements monitoring specification up to date. Consequently, it avoids manual analysis that is usually costly in today’s complex system environments.
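
    A loose sketch of the feedback-loop idea behind ACon (not its actual algorithms): when the current operationalization of a contextual requirement no longer explains the observed context, re-mine recent sensor data to learn a new one. Here a single sensor threshold is re-learned; the sensor name, data, and accuracy cut-off are illustrative assumptions.

        def accuracy(threshold, samples):
            """Fraction of samples where 'sensor > threshold' matches the required state."""
            hits = sum((value > threshold) == active for value, active in samples)
            return hits / len(samples)

        def reoperationalize(samples):
            """Pick the sensor threshold that best explains when the requirement was active."""
            candidates = sorted({value for value, _ in samples})
            return max(candidates, key=lambda t: accuracy(t, samples))

        # (wind_speed, requirement_active) pairs gathered at runtime
        samples = [(3.0, False), (5.5, False), (8.0, True), (9.5, True), (12.0, True)]
        old_threshold = 10.0                       # the current operationalization has drifted
        if accuracy(old_threshold, samples) < 0.8:
            new_threshold = reoperationalize(samples)
            print(new_threshold)                   # -> 5.5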