
    Generic Performance Prediction for ERP and SOA Applications

    Enterprise systems are business-critical applications and strongly influence a company’s productivity. Despite their importance, their performance behaviour and possible bottlenecks are often unknown. This lack of information can be explained by the complexity of the systems themselves, as well as by the complexity and specialization of the existing performance prediction tools. These facts make performance prediction expensive, often resulting in a “we fix it when we see it” mentality that accepts the risk of system unavailability and inefficient assignment of hardware resources. To address these challenges, we developed a performance prediction process to model and simulate the performance behaviour of SOA applications and, in particular, to identify performance bottlenecks. In this paper, we present the process and architecture of our approach. To cover a variety of applications, the performance is modelled using evolutionary algorithms, while the simulation uses layered queuing networks. Both techniques allow domain-independent processing. To cope with the resource requirements for delivering prediction results quickly, EPPIC automatically acquires cloud resources for performing the modelling and simulation. With its slim user interface, EPPIC provides an easy-to-use approach to performance prediction in a broad application context.
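
    As a rough illustration of the modelling step described above (not EPPIC's actual implementation), the sketch below uses an evolutionary algorithm to estimate per-station service demands so that a simple queueing predictor reproduces measured response times. It relies on SciPy's differential evolution and an open M/M/1 product-form formula in place of a full layered queuing network; the station count and measurements are hypothetical.

    import numpy as np
    from scipy.optimize import differential_evolution

    # Hypothetical measurements: arrival rate (req/s) -> observed response time (s).
    measurements = [(5.0, 0.42), (10.0, 0.55), (15.0, 0.81), (18.0, 1.30)]

    def predicted_response(demands, arrival_rate):
        """Open network of M/M/1 stations: R = sum_i D_i / (1 - lambda * D_i)."""
        demands = np.asarray(demands)
        util = arrival_rate * demands
        if np.any(util >= 1.0):            # unstable configuration
            return float("inf")
        return float(np.sum(demands / (1.0 - util)))

    def fitness(demands):
        """Squared error between predicted and measured response times."""
        err = 0.0
        for lam, observed in measurements:
            pred = predicted_response(demands, lam)
            if not np.isfinite(pred):
                return 1e9                 # heavily penalise unstable candidates
            err += (pred - observed) ** 2
        return err

    # Three hypothetical stations (e.g. web tier, app tier, database),
    # each with a service demand between 1 ms and 60 ms.
    bounds = [(0.001, 0.06)] * 3
    result = differential_evolution(fitness, bounds, seed=0)
    print("estimated service demands (s):", result.x)
    print("fit error:", result.fun)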

    Performance-oriented Cloud Provisioning: Taxonomy and Survey

    Cloud computing is being viewed as the technology of today and the future. Through this paradigm, the customers gain access to shared computing resources located in remote data centers that are hosted by cloud providers (CP). This technology allows for provisioning of various resources such as virtual machines (VM), physical machines, processors, memory, network, storage and software as per the needs of customers. Application providers (AP), who are customers of the CP, deploy applications on the cloud infrastructure and then these applications are used by the end-users. To meet the fluctuating application workload demands, dynamic provisioning is essential, and this article provides a detailed literature survey of dynamic provisioning within cloud systems with a focus on application performance. The well-known types of provisioning and the associated problems are clearly and pictorially explained, and the provisioning terminology is clarified. A very detailed and general cloud provisioning classification is presented, which views provisioning from different perspectives, aiding in understanding the process inside out. Cloud dynamic provisioning is explained by considering resources, stakeholders, techniques, technologies, algorithms, problems, goals and more.
    Comment: 14 pages, 3 figures, 3 tables
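
    To make the provisioning terminology concrete, here is a deliberately minimal sketch of reactive dynamic provisioning: a threshold rule that adds or removes VMs based on observed utilisation. It is not taken from the survey, which covers far richer predictive and model-driven variants; the thresholds and limits are illustrative.

    def scaling_decision(avg_cpu_util, current_vms, min_vms=1, max_vms=20,
                         scale_out_at=0.75, scale_in_at=0.30):
        """Return the VM count for the next provisioning interval."""
        if avg_cpu_util > scale_out_at and current_vms < max_vms:
            return current_vms + 1        # scale out: demand exceeds capacity
        if avg_cpu_util < scale_in_at and current_vms > min_vms:
            return current_vms - 1        # scale in: release unused capacity
        return current_vms                # otherwise keep the current allocation

    print(scaling_decision(0.82, 4))      # -> 5
    print(scaling_decision(0.20, 4))      # -> 3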

    Integrated performance evaluation of extended queueing network models with LINE

    Despite the large literature on queueing theory and its applications, tool support to analyze these models is mostly focused on discrete-event simulation and mean-value analysis (MVA). This circumstance diminishes the applicability of other types of advanced queueing analysis methods to practical engineering problems, for example analytical methods to extract probability measures useful in learning and inference. In this tool paper, we present LINE 2.0, an integrated software package to specify and analyze extended queueing network models. This new version of the tool is underpinned by an object-oriented language to declare a fairly broad class of extended queueing networks. These abstractions have been used to integrate in a coherent setting over 40 different simulation-based and analytical solution methods, facilitating their use in applications.
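
    The following is a generic sketch of what an object-oriented declaration of a queueing network can look like, to illustrate the style of specification the tool paper describes. It is written in Python and is not LINE's actual interface (LINE 2.0 is a MATLAB package); the class names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Station:
        name: str
        servers: int = 1
        service_rate: float = 1.0          # jobs per second

    @dataclass
    class Network:
        name: str
        stations: list = field(default_factory=list)
        routing: dict = field(default_factory=dict)    # station name -> next station name

        def add(self, station):
            self.stations.append(station)
            return station

        def link(self, src, dst):
            self.routing[src.name] = dst.name

    # Declare a small two-tier model, then hand it to a solver of choice.
    model = Network("web-app")
    front = model.add(Station("frontend", servers=2, service_rate=50.0))
    db = model.add(Station("database", servers=1, service_rate=80.0))
    model.link(front, db)
    print([s.name for s in model.stations], model.routing)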

    Managing Dynamic Enterprise and Urgent Workloads on Clouds Using Layered Queuing and Historical Performance Models

    The automatic allocation of enterprise workload to resources can be enhanced by being able to make what-if response time predictions whilst different allocations are being considered. We experimentally investigate a historical and a layered queuing performance model and show how they can provide a good level of support for a dynamic-urgent cloud environment. Using this, we define, implement and experimentally investigate the effectiveness of a prediction-based cloud workload and resource management algorithm. Based on these experimental analyses we: (i) comparatively evaluate the layered queuing and historical techniques; (ii) evaluate the effectiveness of the management algorithm in different operating scenarios; and (iii) provide guidance on using prediction-based workload and resource management.
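
    A minimal sketch of the two kinds of what-if predictor the paper compares, answering "what would the response time be if this workload ran on n servers?": a historical model that looks up past observations, and an analytical stand-in for the layered queuing model. All figures are illustrative, not the paper's data.

    # Historical model: past (servers, arrival rate) -> observed response time (s).
    history = {
        (2, 20.0): 0.35, (2, 30.0): 0.70,
        (4, 20.0): 0.12, (4, 30.0): 0.18, (4, 40.0): 0.31,
    }

    def predict_historical(servers, arrival_rate):
        """Nearest-neighbour lookup over past observations with the same server count."""
        rates = sorted(r for (s, r) in history if s == servers)
        if not rates:
            return None                                 # no history for this allocation
        nearest = min(rates, key=lambda r: abs(r - arrival_rate))
        return history[(servers, nearest)]

    def predict_queueing(servers, arrival_rate, service_rate=12.0):
        """Simple analytical estimate used here in place of the layered queuing model."""
        capacity = servers * service_rate
        return float("inf") if arrival_rate >= capacity else 1.0 / (capacity - arrival_rate)

    for servers in (2, 4):
        print(servers, predict_historical(servers, 35.0), predict_queueing(servers, 35.0))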

    ATOM: model-driven autoscaling for microservices

    Microservices-based architectures are increasingly widespread in the cloud software industry. Still, there is a shortage of auto-scaling methods designed to leverage the unique features of these architectures, such as the ability to independently scale a subset of microservices, as well as the ease of monitoring their state and reciprocal calls. We propose to address this shortage with ATOM, a model-driven autoscaling controller for microservices. ATOM instantiates and solves at run-time a layered queueing network model of the application. Computational optimization is used to dynamically control the number of replicas for each microservice and its associated container CPU share, overall achieving a fine-grained control of the application capacity at run-time. Experimental results indicate that for heavy workloads ATOM offers around 30%-37% higher throughput than baseline model-agnostic controllers based on simple static rules. We also find that model-driven reasoning reduces the number of actions needed to scale the system, as it reduces the number of bottleneck shifts that we observe with model-agnostic controllers.
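
    In the spirit of ATOM's model-driven control loop (but much simplified), the sketch below evaluates candidate replica counts for each microservice against a predicted-utilisation target and picks the smallest feasible configuration. ATOM solves a layered queueing network and also sets container CPU shares; the closed-form utilisation estimate and service names here are hypothetical.

    services = {                      # arrival rate (req/s), per-replica capacity, current replicas
        "frontend": {"rate": 120.0, "capacity": 40.0, "replicas": 2},
        "cart":     {"rate":  60.0, "capacity": 25.0, "replicas": 2},
        "payments": {"rate":  30.0, "capacity": 15.0, "replicas": 1},
    }

    def plan_replicas(svc, target_util=0.7, max_replicas=10):
        """Smallest replica count whose predicted utilisation stays below the target."""
        for n in range(1, max_replicas + 1):
            predicted_util = svc["rate"] / (n * svc["capacity"])
            if predicted_util <= target_util:
                return n
        return max_replicas

    for name, svc in services.items():
        new_n = plan_replicas(svc)
        action = "scale" if new_n != svc["replicas"] else "keep"
        print(f"{name}: {svc['replicas']} -> {new_n} ({action})")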

    Scaling Size and Parameter Spaces in Variability-Aware Software Performance Models (T)

    In software performance engineering, what-if scenarios, architecture optimization, capacity planning, run-time adaptation, and uncertainty management of realistic models typically require the evaluation of many instances. Effective analysis is however hindered by two orthogonal sources of complexity. The first is the infamous problem of state space explosion — the analysis of a single model becomes intractable with its size. The second is due to massive parameter spaces to be explored, but such that computations cannot be reused across model instances. In this paper, we efficiently analyze many queuing models with the distinctive feature of more accurately capturing variability and uncertainty of execution rates by incorporating general (i.e., non-exponential) distributions. Applying product-line engineering methods, we consider a family of models generated by a core that evolves into concrete instances by applying simple delta operations affecting both the topology and the model's parameters. State explosion is tackled by turning to a scalable approximation based on ordinary differential equations. The entire model space is analyzed in a family-based fashion, i.e., at once using an efficient symbolic solution of a super-model that subsumes every concrete instance. Extensive numerical tests show that this is orders of magnitude faster than a naive instance-by-instance analysis.
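
    A small sketch of the ordinary-differential-equation (fluid) approximation used to sidestep state-space explosion: instead of tracking every discrete state, it integrates the mean number of jobs at each station. The two-station closed network, rates and server counts below are illustrative, and the paper's family-based symbolic analysis is not reproduced here.

    import numpy as np

    def fluid_trajectory(mu, servers, routing, x0, dt=0.001, steps=20000):
        """Euler integration of dx_i/dt = sum_j r_j P[j,i] - r_i,
        where r_i = mu_i * min(x_i, s_i) is the fluid completion rate at station i."""
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            rates = mu * np.minimum(x, servers)
            x = x + dt * (routing.T @ rates - rates)
        return x

    mu = np.array([10.0, 4.0])             # service rates per server
    servers = np.array([1, 3])             # server multiplicity of each station
    P = np.array([[0.0, 1.0],              # station 0 routes to station 1
                  [1.0, 0.0]])             # and back (closed network, 20 jobs)
    x_inf = fluid_trajectory(mu, servers, P, x0=[20.0, 0.0])
    print("approximate steady-state mean queue lengths:", x_inf)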

    Learning Queuing Networks by Recurrent Neural Networks

    It is well known that building analytical performance models in practice is difficult because it requires a considerable degree of proficiency in the underlying mathematics. In this paper, we propose a machine-learning approach to derive performance models from data. We focus on queuing networks, and crucially exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations. We encode these equations into a recurrent neural network whose weights can be directly related to model parameters. This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model that can be used for prediction purposes such as what-if analyses and capacity planning. Using synthetic models as well as a real case study of a load-balancing system, we show the effectiveness of our technique in yielding models with high predictive power.
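
    A sketch of the core idea, reusing the same fluid dynamics as the previous example: one Euler step of the queueing ODEs is wrapped in a recurrent cell whose only trainable weights are the (log) service rates, so fitting the cell to observed queue-length trajectories recovers interpretable model parameters. The PyTorch code below trains on synthetic data; the architecture and training details of the paper are not reproduced.

    import torch

    class FluidCell(torch.nn.Module):
        """One Euler step of the queueing ODEs; the service rates are the
        trainable 'weights' of this recurrent cell."""
        def __init__(self, servers, routing, dt=0.05):
            super().__init__()
            self.log_mu = torch.nn.Parameter(torch.zeros(len(servers)))  # rates stay positive via exp
            self.servers = torch.tensor(servers, dtype=torch.float32)
            self.P = torch.tensor(routing, dtype=torch.float32)
            self.dt = dt

        def forward(self, x):
            mu = torch.exp(self.log_mu)
            rates = mu * torch.minimum(x, self.servers)      # fluid completion rates
            return x + self.dt * (self.P.T @ rates - rates)  # Euler step of the ODEs

    def rollout(cell, x0, steps):
        xs, x = [], x0
        for _ in range(steps):
            x = cell(x)
            xs.append(x)
        return torch.stack(xs)

    servers = [1.0, 3.0]                    # two-station closed network
    routing = [[0.0, 1.0], [1.0, 0.0]]
    x0 = torch.tensor([20.0, 0.0])

    # Synthetic "measurements" generated from known service rates.
    with torch.no_grad():
        teacher = FluidCell(servers, routing)
        teacher.log_mu.copy_(torch.log(torch.tensor([10.0, 4.0])))
        target = rollout(teacher, x0, steps=150)

    # Train a student cell on the trajectory: the learned weights are the rates.
    student = FluidCell(servers, routing)
    optimiser = torch.optim.Adam(student.parameters(), lr=0.05)
    for _ in range(200):
        optimiser.zero_grad()
        loss = torch.mean((rollout(student, x0, steps=150) - target) ** 2)
        loss.backward()
        optimiser.step()
    print("learned service rates:", torch.exp(student.log_mu).detach())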

    Modeling an Adaptive System with Complex Queuing Networks and Simulation

    An adaptive system differs from a non-adaptive system in that an adaptive system uses a specific process to identify and implement adaptations to system parameters during run time in an effort to increase system performance. In order to develop an adaptive system, one of the most important aspects is the ability to accurately predict and manage system behavior. If an unexpected event has occurred, accurate prediction of system behavior is needed in order to determine whether or not the system is able to continue to meet expectations and/or requirements. When considering any possible adaptations to the system, one must be able to accurately predict their consequences as well. In the feedback loop of an adaptive system, performance analysis and prediction may lead to a large number of different states; therefore, the method of analyzing the system must be fast. For complex systems, however, the most popular method for predicting performance is simulation, and simulations, depending on the size of the system, are known for being slow. In this thesis we develop a fast method for predicting the performance of a complex system. We use this method to allocate resources to the system initially, and then to make decisions for system adaptation during runtime. Finally, we test various modifications to the system in order to measure the robustness of our method.
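
    As a loose illustration of the feedback loop the thesis motivates (monitor, predict, adapt), the sketch below uses a fast closed-form predictor to screen candidate adaptations and applies the cheapest one that meets a response-time target. The predictor, adaptation space and thresholds are placeholders, not the thesis's method.

    import itertools

    def fast_predict(config, workload):
        """Placeholder analytical estimate (seconds); stands in for a fast
        prediction method that replaces slow simulation inside the loop."""
        capacity = config["replicas"] * config["cpu_share"] * 100.0
        return float("inf") if workload >= capacity else 1.0 / (capacity - workload)

    def adapt(current, workload, target_response=0.05):
        if fast_predict(current, workload) <= target_response:
            return current                      # no adaptation needed
        candidates = [{"replicas": r, "cpu_share": c}
                      for r, c in itertools.product(range(1, 9), (0.5, 1.0, 2.0))]
        feasible = [c for c in candidates
                    if fast_predict(c, workload) <= target_response]
        # pick the cheapest feasible adaptation (fewest replicas, lowest share)
        return min(feasible, key=lambda c: (c["replicas"], c["cpu_share"]),
                   default=current)

    print(adapt({"replicas": 2, "cpu_share": 0.5}, workload=150.0))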