4 research outputs found

    Using Performance Forecasting to Accelerate Elasticity

    Cloud computing facilitates dynamic resource provisioning. The automation of resource management, known as elasticity, has been the subject of much research. In this context, monitoring a running service plays a crucial role, and adjustments are made when certain thresholds are crossed. On such occasions, it is common practice to simply add or remove resources. In this paper we investigate how to predict the performance of a service so that allocated resources can be adjusted dynamically based on those predictions. In other words, instead of “repairing” after a threshold has been crossed, we attempt to stay ahead and allocate an optimized amount of resources in advance. To do so, we need accurate predictive models based on workloads. We present our approach, based on the Universal Scalability Law, and discuss initial experiments.
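    The Universal Scalability Law models throughput at concurrency N as X(N) = λN / (1 + σ(N−1) + κN(N−1)), with a contention coefficient σ and a coherency coefficient κ. As a rough illustration of the forecasting idea (not the paper's actual procedure), one can fit these coefficients to measured throughput and then size the allocation ahead of demand; the measurements and the sizing rule below are invented for the example.

```python
# Illustrative sketch: fit the Universal Scalability Law (USL) to throughput
# measurements, then forecast how many units to allocate before demand rises.
# The data and the sizing rule are invented; they are not the paper's results.
import numpy as np
from scipy.optimize import curve_fit

def usl_throughput(n, lam, sigma, kappa):
    """USL: X(N) = lam*N / (1 + sigma*(N-1) + kappa*N*(N-1))."""
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Hypothetical measurements: (concurrency level, observed throughput in req/s).
n_obs = np.array([1, 2, 4, 8, 16, 32])
x_obs = np.array([95, 180, 330, 540, 720, 760])

(lam, sigma, kappa), _ = curve_fit(
    usl_throughput, n_obs, x_obs, p0=[100, 0.01, 0.0001],
    bounds=([0, 0, 0], [np.inf, 1, 1]))

# Proactive sizing: pick the smallest N whose predicted throughput covers the
# forecast demand, instead of reacting after a threshold is crossed.
demand = 800.0  # forecast req/s
for n in range(1, 129):
    if usl_throughput(n, lam, sigma, kappa) >= demand:
        print(f"allocate {n} units "
              f"({usl_throughput(n, lam, sigma, kappa):.0f} req/s predicted)")
        break
else:
    print("forecast demand exceeds the predicted USL peak")
```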

    Self-scalable Benchmarking as a Service with Automatic Saturation Detection

    Software application providers have always been required to perform load testing before launching new applications. This crucial test phase is expensive in human and hardware terms, and the solutions generally used would benefit from further development. In particular, designing an appropriate load profile to stress an application is difficult and must be done carefully to avoid skewed testing. In addition, static testing platforms are exceedingly complex to set up. New opportunities to ease load testing are becoming available thanks to cloud computing. This paper describes a Benchmark-as-a-Service platform based on: (i) intelligent generation of traffic to the benched application without inducing thrashing (avoiding predefined load profiles), and (ii) a virtualized, self-scalable load injection system. This platform was found to reduce the cost of testing by 50% compared to more commonly used solutions. It was evaluated on the reference JEE benchmark RUBiS, which involved detecting bottleneck tiers.
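    The platform avoids predefined load profiles by ramping injected traffic only while it still pays off. Below is a minimal sketch of one way such automatic saturation detection could work: stop adding virtual users once the marginal throughput gain falls under a tolerance. The measure_throughput callback and all thresholds are hypothetical stand-ins, not the platform's actual API.

```python
# Minimal sketch of self-regulated load ramp-up with saturation detection:
# keep adding virtual users while throughput still improves, stop before
# thrashing. `measure_throughput` stands in for a real injection/measurement
# round against the system under test (hypothetical helper).
def find_saturation(measure_throughput, step=10, max_users=1000, min_gain=0.02):
    users, best = 0, 0.0
    while users < max_users:
        users += step
        x = measure_throughput(users)   # run one injection round, return req/s
        if best > 0 and (x - best) / best < min_gain:
            return users - step, best   # marginal gain too small: saturated
        best = max(best, x)
    return users, best

if __name__ == "__main__":
    # Toy closed-form stand-in for a real benched application.
    sat_users, peak = find_saturation(lambda u: 1000 * u / (u + 50))
    print(f"saturation near {sat_users} users, peak ~{peak:.0f} req/s")
```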

    Introducing Queuing Network-Based Performance Awareness in Autonomic Systems

    This paper advocates the introduction of performance awareness in autonomic systems. The motivation is to be able to predict the performance of a target configuration when a self-* feature is planning a system reconfiguration. We propose a global and partially automated process based on queues and queuing network models. This process includes decomposing a distributed application into black boxes, identifying the queue model for each black box, and assembling these models into a queuing network according to the candidate target configuration. Finally, performance prediction is performed through either simulation or analysis. This paper sketches the global process and focuses on the black-box model identification step, which is automated thanks to a load testing platform enhanced with a workload control loop; model identification is then based on statistical tests. The process is illustrated by experimental results.
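    Once a queue model has been identified for each black box, predicting a candidate configuration analytically can be as simple as chaining per-tier models. The sketch below assumes each tier behaves like an M/M/1 queue in an open network, a simplification chosen for illustration; the paper identifies each black box's actual model from load-testing data and statistical tests, and the service rates here are invented.

```python
# Illustrative sketch: predict end-to-end response time of a candidate
# configuration by chaining per-tier M/M/1 queue models (open network).
# Service rates are invented; the real models come from identification.
def mm1_response_time(arrival_rate, service_rate):
    """M/M/1 mean response time R = 1 / (mu - lambda); requires lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("tier saturated: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# Candidate configuration: per-tier service rates (req/s) after a planned
# reconfiguration, e.g. web, business (EJB), and database tiers.
tiers = {"web": 400.0, "ejb": 250.0, "db": 180.0}
workload = 150.0  # predicted arrival rate (req/s)

total = sum(mm1_response_time(workload, mu) for mu in tiers.values())
print(f"predicted end-to-end response time: {total * 1000:.1f} ms")
```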

    Introducing Self-Optimization Features into a Self-Benchmarking Architecture

    Benchmarking client-server systems involves complex, distributed technical infrastructures whose management calls for an autonomic approach. It relies on observation, analysis, and feedback steps that closely match the autonomic control-loop principle. Previous work in performance testing has shown how to introduce autonomic load-testing features through self-regulated load injection. The goal of this thesis is to follow this autonomic-computing approach and introduce self-optimization features into that architecture, so that reliable and comparable benchmark results are obtained automatically and the full self-benchmarking cycle is realized.
    Our contribution is twofold. From the algorithmic point of view, we propose an original optimization algorithm for the performance-testing context. It aims to reduce the number of candidate solutions to be tested, under an assumption about the shape of the function linking parameter values to measured performance. The algorithm is independent of the system being optimized; it manipulates integer parameters whose values lie within a given interval, with a given value granularity. It is divided into two parts: the first controls the evolution of the performance index at the global level, based on the overall parameter setting of the system; the second searches for the optimum when a single parameter is varied.
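    The thesis's exact algorithm is not reproduced here, but the following sketch illustrates the kind of single-parameter search it describes: integer values on a grid within a given interval, plus a shape assumption on the performance function (here, unimodality), which lets the search measure far fewer settings than an exhaustive sweep.

```python
# Sketch of a single-parameter optimization step: integer parameter values in
# [lo, hi] with a given granularity, and a unimodality assumption that allows
# a ternary-style search instead of testing every value. This is an
# illustration of the idea, not the thesis's exact algorithm.
def optimize_parameter(benchmark, lo, hi, step):
    """Return the grid value in [lo, hi] maximizing benchmark(value)."""
    grid = list(range(lo, hi + 1, step))
    left, right = 0, len(grid) - 1
    while right - left > 2:
        m1 = left + (right - left) // 3
        m2 = right - (right - left) // 3
        # Each call runs a full benchmark campaign at that setting.
        if benchmark(grid[m1]) < benchmark(grid[m2]):
            left = m1 + 1
        else:
            right = m2 - 1
    return max(grid[left:right + 1], key=benchmark)

if __name__ == "__main__":
    # Toy stand-in for a benchmark: performance peaks at a pool size of 120.
    perf = lambda threads: -(threads - 120) ** 2
    print(optimize_parameter(perf, lo=10, hi=500, step=10))  # -> 120
```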
    From the software-architecture point of view, we extend a Fractal component-based framework for self-regulated load injection with several autonomic control loops (saturation detection, load injection, optimization computation), coordinated in a loosely coupled fashion through asynchronous messages following the publish-subscribe communication paradigm. To apply a given parameter setting to the system under test, we introduce new Configurator components that set the parameters before the test process starts. Since it may also be necessary to restart all or part of the system being optimized so that a new setting is effectively taken into account, we introduce Starter components that cover this need in a system-specific way.
    Two experimental campaigns were conducted to validate our self-optimization framework, on the servers of Orange Labs in Meylan and those of the LISTIC laboratory of the University of Savoie at Polytech Annecy-Chambéry (Annecy-le-Vieux). The first used a web online-shopping application deployed on the JOnAS Java EE application server. The second used clusterSample, an application with three effective tiers (web, business (EJB JOnAS), and database), each tier running on a separate physical machine.
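    The thesis implements loop coordination on a Fractal component framework; the toy broker below only illustrates the loose coupling that publish-subscribe gives the loops, with the saturation-detection loop publishing events that the optimization loop consumes without either side knowing the other.

```python
# Toy publish-subscribe broker illustrating asynchronous coordination between
# autonomic loops (saturation detection, load injection, optimization). The
# thesis builds this on Fractal components; this code is only a stand-in.
from collections import defaultdict
from queue import Queue
import threading

class Broker:
    def __init__(self):
        self.subs = defaultdict(list)  # topic -> list of subscriber queues

    def subscribe(self, topic):
        q = Queue()
        self.subs[topic].append(q)
        return q

    def publish(self, topic, msg):
        for q in self.subs[topic]:
            q.put(msg)

broker = Broker()
events = broker.subscribe("saturation")

def optimization_loop():
    # Waits for a saturation event, then would schedule the next parameter
    # setting and ask the Configurator/Starter components to apply it.
    msg = events.get()
    print(f"optimizer: saturation at {msg['users']} users, "
          f"scheduling next parameter setting")

t = threading.Thread(target=optimization_loop)
t.start()
# The saturation-detection loop publishes without knowing its consumers.
broker.publish("saturation", {"users": 130, "throughput": 722.0})
t.join()
```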