14 research outputs found

    Automated extraction of architecture-level performance models of distributed component-based systems

    Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments, allowing different usage profiles as well as system deployment and configuration options to be evaluated. However, building performance models manually requires a lot of time and effort. In this paper, we present a novel automated method for the extraction of architecture-level performance models of distributed component-based systems, based on monitoring data collected at run-time. The method is validated in a case study with the industry-standard SPECjEnterprise2010 Enterprise Java benchmark, a representative software system executed in a realistic environment. The obtained performance predictions match the measurements on the real system within an error margin of mostly 10-20 percent.
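
    One building block of such an extraction is estimating per-operation resource demands from run-time monitoring data. The sketch below illustrates this idea using the utilization law U = sum_i X_i * D_i and least squares over monitoring windows; the operation names and numbers are hypothetical, and this is not the paper's actual extraction pipeline.

    ```python
    # Minimal sketch: estimate per-operation CPU demands D from observed
    # per-operation throughputs X and measured CPU utilization U, via the
    # utilization law U = sum_i X_i * D_i, solved by least squares over
    # several monitoring windows. Hypothetical data, for illustration only.
    import numpy as np

    # Each row: observed throughput (requests/s) of two operations in one window.
    throughput = np.array([
        [120.0, 30.0],
        [ 80.0, 60.0],
        [150.0, 10.0],
        [ 60.0, 90.0],
    ])
    # Measured CPU utilization (0..1) in the same windows.
    cpu_util = np.array([0.51, 0.54, 0.50, 0.63])

    # Solve cpu_util ~= throughput @ demands for the demand vector
    # (seconds of CPU time per request).
    demands, *_ = np.linalg.lstsq(throughput, cpu_util, rcond=None)
    for op, d in zip(["placeOrder", "browse"], demands):
        print(f"estimated CPU demand of {op}: {d * 1000:.2f} ms/request")
    ```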

    Autonomic Performance-Aware Resource Management in Dynamic IT Service Infrastructures

    Model-based techniques are a powerful approach to engineering autonomic and self-adaptive systems. This thesis presents a model-based approach for proactive and autonomic performance-aware resource management in dynamic IT infrastructures. The core of the approach is an architecture-level modeling language to describe performance- and resource-management-related aspects of such environments. With this approach, it is possible to autonomically find suitable system configurations at the model level.

    Architecture-Level Software Performance Models for Online Performance Prediction

    Proactive performance and resource management of modern IT infrastructures requires the ability to predict at run-time how the performance of running services would be affected if the workload or the system changes. This thesis presents modeling and prediction facilities that enable online performance prediction during system operation. Analyses of the impact of reconfigurations and workload trends can be conducted at the model level, without executing expensive performance tests.

    WESSBAS: extraction of probabilistic workload specifications for load testing and performance prediction—a model-driven approach for session-based application systems

    The specification of workloads is required in order to evaluate performance characteristics of application systems using load testing and model-based performance prediction. Defining workload specifications that represent the real workload as accurately as possible is one of the biggest challenges in both areas. To overcome this challenge, this paper presents an approach that aims to automate the extraction and transformation of workload specifications for load testing and model-based performance prediction of session-based application systems. The approach (WESSBAS) comprises three main components. First, a system- and tool-agnostic domain-specific language (DSL) allows the layered modeling of workload specifications of session-based systems. Second, instances of this DSL are automatically extracted from recorded session logs of production systems. Third, these instances are transformed into executable workload specifications of load generation tools and model-based performance evaluation tools. We present transformations to the common load testing tool Apache JMeter and to the Palladio Component Model. Our approach is evaluated using the industry-standard benchmark SPECjEnterprise2010 and the World Cup 1998 access logs. Comparisons of workload-specific characteristics (e.g., session lengths and arrival rates) and performance characteristics (e.g., response times and CPU utilizations) show that the extracted workloads match the measured workloads with high accuracy.
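
    As a simplified illustration of the extraction step, the sketch below estimates the transition probabilities of a first-order Markov chain from recorded session logs; the log format and request names are hypothetical, and the actual WESSBAS DSL captures considerably more (e.g., think times and workload intensities).

    ```python
    # Minimal sketch: derive a probabilistic behavior model (first-order
    # Markov chain of request-type transitions) from session logs.
    # Hypothetical sessions; not the actual WESSBAS extraction tooling.
    from collections import Counter, defaultdict

    sessions = [  # each session: ordered list of request types
        ["login", "browse", "browse", "addToCart", "checkout"],
        ["login", "browse", "checkout"],
        ["login", "browse", "addToCart", "browse", "checkout"],
    ]

    counts = defaultdict(Counter)
    for s in sessions:
        # Pair each request with its successor, with virtual entry/exit states.
        for src, dst in zip(["$START"] + s, s + ["$EXIT"]):
            counts[src][dst] += 1

    for src, outgoing in counts.items():
        total = sum(outgoing.values())
        for dst, n in outgoing.items():
            print(f"P({src} -> {dst}) = {n / total:.2f}")
    ```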

    Performance-oriented Cloud Provisioning: Taxonomy and Survey

    Cloud computing is viewed as the technology of today and the future. Through this paradigm, customers gain access to shared computing resources located in remote data centers hosted by cloud providers (CP). This technology allows for the provisioning of various resources such as virtual machines (VM), physical machines, processors, memory, network, storage and software as per the needs of customers. Application providers (AP), who are customers of the CP, deploy applications on the cloud infrastructure, and these applications are then used by end-users. To meet fluctuating application workload demands, dynamic provisioning is essential, and this article provides a detailed literature survey of dynamic provisioning within cloud systems with a focus on application performance. The well-known types of provisioning and the associated problems are clearly and pictorially explained, and the provisioning terminology is clarified. A very detailed and general cloud provisioning classification is presented, which views provisioning from different perspectives, aiding in understanding the process inside-out. Cloud dynamic provisioning is explained by considering resources, stakeholders, techniques, technologies, algorithms, problems, goals and more.

    Self-Adaptive Performance Monitoring for Component-Based Software Systems

    Effective monitoring of a software system's runtime behavior is necessary to evaluate compliance with performance objectives. This thesis emerged in the context of the Kieker framework for application performance monitoring. The contribution is a self-adaptive performance monitoring approach that allows dynamic adaptation of the monitoring coverage at runtime. The monitoring data includes performance measures such as throughput and response time statistics, the utilization of system resources, and the inter- and intra-component control flow. Based on this data, performance anomaly scores are computed using time series analysis and clustering methods. The self-adaptive performance monitoring approach reduces business-critical failure diagnosis time, as it saves time-consuming manual debugging activities. The approach and its underlying anomaly scores are extensively evaluated in lab experiments.
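
    As a much-simplified illustration of a time-series-based anomaly score, the sketch below compares each response-time observation against a single-exponential-smoothing forecast and scores the normalized deviation; the data and parameters are hypothetical, and the thesis' actual approach (including its clustering methods) is considerably richer.

    ```python
    # Minimal sketch: score response-time anomalies as the deviation from
    # an exponential-smoothing forecast, normalized by the mean absolute
    # residual observed so far. Illustrative only.
    def anomaly_scores(values, alpha=0.3):
        forecast, scores, residuals = values[0], [], []
        for v in values[1:]:
            err = v - forecast
            residuals.append(abs(err))
            mad = sum(residuals) / len(residuals)  # mean absolute deviation
            scores.append(abs(err) / mad if mad > 0 else 0.0)
            forecast = alpha * v + (1 - alpha) * forecast  # update forecast
        return scores

    response_times_ms = [12, 13, 11, 12, 14, 12, 95, 13, 12]  # hypothetical
    for t, s in zip(response_times_ms[1:], anomaly_scores(response_times_ms)):
        print(f"value={t:3d} ms  score={s:.2f}")
    ```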

    Full-Stack Performance Model Evaluation using Probabilistic Garbage Collection Simulation

    Performance models can represent the performance-relevant aspects of an enterprise application. Corresponding simulation engines use such models to simulate performance metrics (e.g., response times, resource utilization, throughput) and allow for performance evaluations without load testing the actual system. The effort of creating such models manually, however, often outweighs their benefits. Therefore, recent research has created performance model generators, which can derive such models from data collected by Application Performance Management software. However, a full-stack evaluation covering all relevant resources of an enterprise application (Central Processing Unit, memory, network and Hard Disk Drive) has, to the best of our knowledge, not yet been conducted. This work closes this gap using a pre-release version of the next-generation industry benchmark SPECjEnterpriseNEXT of the Standard Performance Evaluation Corporation as the example enterprise application, the Palladio Component Model as the performance model, and the performance model generator of the RETIT Capacity Manager. Furthermore, this work extends the generated model with a probabilistic garbage collection model to simulate memory allocations and releases more accurately.
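
    The following sketch illustrates the general idea of a probabilistic garbage collection model: per-request allocations and the fraction of memory surviving a collection are drawn from distributions, and a pause time is charged whenever heap occupancy crosses a threshold. All distributions and parameters are hypothetical and not taken from the paper.

    ```python
    # Minimal sketch of a probabilistic GC model: sample allocations per
    # request, trigger a collection above a heap-occupancy threshold, and
    # sample both the surviving fraction and the pause time. Hypothetical
    # parameters, for illustration only.
    import random

    random.seed(42)
    HEAP_MB, GC_THRESHOLD = 1024.0, 0.75
    heap, pauses_ms = 0.0, []

    for request in range(10_000):
        heap += random.gauss(0.4, 0.1)  # MB allocated by this request
        if heap / HEAP_MB > GC_THRESHOLD:
            survivors = random.uniform(0.05, 0.25)  # fraction surviving GC
            heap *= survivors
            pauses_ms.append(random.gauss(35.0, 8.0))  # simulated pause

    print(f"collections: {len(pauses_ms)}, "
          f"mean pause: {sum(pauses_ms) / len(pauses_ms):.1f} ms")
    ```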