10 research outputs found

    Enabling Dynamic Virtual Frequency Scaling for Virtual Machines in the Cloud

    Get PDF
    International audience. With the democratization of the Cloud paradigm, many applications are developed to be executed inside virtual machines hosted by remote data centers providing Infrastructure-as-a-Service (IaaS). These applications, developed by different users with different goals, tend to have different behaviors, so treating them all identically on the Cloud provider side is sub-optimal. Indeed, VMs are black boxes with attached vCPUs whose frequencies are all the same and mainly indicative. In our opinion, this is an important limitation. Because the Cloud provider is unaware of the applications executed inside the VMs, it has little insight into their behavior and into how to manage the VMs. For these reasons, the Cloud provider can assign too many or too few resources to a VM, and may rely on migration mechanisms to cope with that problem. In this paper, we propose to attach a virtual frequency to the VM template, which the customer can configure to better describe her expected application requirements and the associated quality of service. To enforce this virtual frequency, we designed a controller that leverages the Linux cgroup system to dynamically adjust the configuration on the host machine. We evaluate our controller on a real infrastructure with real CPU-intensive applications executed by VMs with different frequencies. We also discuss the benefits of our virtual frequency capping for VM placement.
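The abstract above describes enforcing a virtual frequency via the Linux cgroup system. As an illustrative sketch only (the function name and the proportional-share mapping are assumptions, not the paper's actual controller; the `"quota period"` string is the real cgroup v2 `cpu.max` format), a virtual frequency can be mapped to a CPU bandwidth quota:

```python
def cpu_quota_for_vfreq(vfreq_mhz, host_mhz, period_us=100_000, ncores=1):
    """Translate a virtual frequency into a cgroup v2 `cpu.max` value.

    A VM promised `vfreq_mhz` on a host running at `host_mhz` gets a
    proportional share of CPU time per scheduling period (assumption:
    the paper's controller may use a different mapping).
    """
    if not 0 < vfreq_mhz <= host_mhz:
        raise ValueError("virtual frequency must be in (0, host frequency]")
    quota_us = int(period_us * ncores * vfreq_mhz / host_mhz)
    return f"{quota_us} {period_us}"  # "quota period", the cpu.max format

# e.g. a 1.2 GHz vCPU on a 3.0 GHz host -> 40% of one core per period
print(cpu_quota_for_vfreq(1200, 3000))  # 40000 100000
```

A real controller would write this string to the VM's cgroup `cpu.max` file and re-derive it when the host frequency changes.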

    Ordonnancement multi-objectifs de workflows dans un Cloud privé

    Get PDF
    National audience. This article addresses the problem of scheduling scientific workflows in a private Cloud environment. Scheduling in this type of environment is a difficult multi-objective optimization problem. Work on workflow scheduling generally focuses on public Clouds, and therefore does not consider the various limitations of the infrastructure. This article proposes, on the one hand, a model of the infrastructure and of the scheduling problem that takes into account the finite number of available resources, and on the other hand, a heuristic that solves workflow scheduling while trying to reduce the number of resources used (e.g., to reduce energy consumption). The preliminary results obtained with this contribution are promising and open the door to many perspectives.
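The abstract above considers scheduling under a finite number of resources. A minimal sketch of that constraint (the function, the unit-time task model, and the toy workflow are hypothetical, not the paper's heuristic): a greedy scheduler that runs at most `capacity` ready tasks of a dependency graph in parallel per step:

```python
def schedule_with_capacity(succ, indeg, capacity):
    """Greedy capacity-bounded scheduler for a DAG of unit-time tasks.

    `succ` maps a task to its successors, `indeg` to its in-degree.
    Returns the number of parallel steps (makespan) when at most
    `capacity` machines are available.
    """
    indeg = dict(indeg)  # do not mutate the caller's copy
    ready = [t for t, d in indeg.items() if d == 0]
    steps = 0
    while ready:
        n = min(capacity, len(ready))
        batch, ready = ready[:n], ready[n:]  # run `n` tasks this step
        steps += 1
        for t in batch:  # release tasks whose dependencies completed
            for s in succ.get(t, []):
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return steps

# Toy diamond workflow: A -> {B, C} -> D
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
indeg = {"A": 0, "B": 1, "C": 1, "D": 2}
print(schedule_with_capacity(succ, indeg, capacity=1))  # 4 (fully serial)
print(schedule_with_capacity(succ, indeg, capacity=2))  # 3 (B, C in parallel)
```

The trade-off the article studies appears even here: fewer machines (lower energy) lengthens the schedule.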

    Online Multi-User Workflow Scheduling Algorithm for Fairness and Energy Optimization

    Get PDF
    International audience. This article tackles the problem of scheduling multi-user scientific workflows with unpredictable random arrivals and uncertain task execution times in a Cloud environment, from the Cloud provider's point of view. The solution is a deadline-sensitive online algorithm, named NEARDEADLINE, that optimizes two metrics: the energy consumption and the fairness between users. Scheduling workflows in a private Cloud environment is a difficult optimization problem, as capacity constraints must be fulfilled in addition to the dependency constraints between tasks of the workflows. Furthermore, NEARDEADLINE is built upon a new workflow execution platform. To the best of our knowledge, no existing work combines both energy consumption and fairness metrics in its optimization problem. The experiments conducted on a real infrastructure (clusters of Grid'5000) demonstrate that the NEARDEADLINE algorithm offers real benefits in reducing energy consumption and enhancing user fairness.
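The abstract above optimizes fairness between users, but does not state which fairness metric NEARDEADLINE uses. A commonly used candidate, shown here purely as an assumption, is Jain's fairness index over the per-user service levels:

```python
def jain_fairness(values):
    """Jain's fairness index over per-user allocations.

    Equals 1.0 when all users receive equal service and approaches
    1/n as a single user dominates. (Assumption: the paper may use a
    different fairness metric.)
    """
    if not values:
        raise ValueError("need at least one allocation")
    s = sum(values)
    return s * s / (len(values) * sum(v * v for v in values))

print(jain_fairness([1, 1, 1, 1]))   # 1.0 -> perfectly fair
print(jain_fairness([10, 1, 1, 1]))  # well below 1 -> one user dominates
```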

    Prise en compte de l’énergie dans la gestion des workflows scientifiques dans le Cloud : une vision centrée sur le fournisseur de service

    No full text
    Scientific computer simulations are generally very complex and are characterized by many parallel processes. In order to highlight the parts that can be parallelized, and to enable efficient execution, many scientists have chosen to define their applications as workflows. A scientific workflow represents an application as a set of unitary processing tasks, linked by dependencies. Today, because of their low cost, elasticity, and on-demand nature, cloud computing services are widely used for workflow execution. Users of this type of environment manage the execution of their workflow, as well as the necessary resources, using standard services such as IaaS (Infrastructure-as-a-Service). However, because cloud services are not specific to the nature of the application to be executed, the use of physical resources is not as optimized as it could be. In this thesis, we propose to move the management and execution of workflows to the cloud provider's side in order to offer a new type of service dedicated to workflows. This new approach makes it possible to improve resource management and reduce energy consumption, and thus the environmental impact of the infrastructure used.

    Handling heterogeneous workflows in the Cloud while enhancing optimizations and performance

    No full text
    International audience. The goal of a workflow engine is to facilitate the writing, deployment, and execution of a scientific workflow (i.e., a graph of coarse-grain and heterogeneous tasks) on distributed infrastructures. With the democratization of the Cloud paradigm, many state-of-the-art workflow engines offer a way to execute workflows on distant data centers by using the Infrastructure-as-a-Service (IaaS) or Function-as-a-Service (FaaS) offerings of Cloud providers. Hence, workflow engines can take advantage of the (presumably) infinite resources and the economic model of the Cloud. However, two important limitations lie in this vision of Cloud-oriented workflow engines. First, by using existing services of Cloud providers, and by managing the workflows on the user side, the Cloud providers are unaware of both the workflows and their users' needs, and cannot apply specific resource optimizations to their infrastructure. Second, for the same reasons, handling the heterogeneity of tasks (different operating systems) in workflows necessarily degrades either the transparency for the users (who must provision different types of resources) or the completion-time performance of the workflows, because of the stacking of virtualization layers. In this paper, we tackle these two limitations by presenting a new Cloud service dedicated to scientific workflows. Unlike existing workflow engines, this service is deployed and managed by the Cloud providers, enables specific resource optimizations, and offers better control of the heterogeneity of the workflows. We evaluate our new service against Argo, a well-known workflow engine of the literature based on FaaS services. This evaluation was conducted on a real distributed experimental platform with a realistic and complex scenario.

    A workflow scheduling deadline-based heuristic for energy optimization in Cloud

    Get PDF
    International audience. This article addresses the scheduling of heterogeneous scientific workflows while minimizing the energy consumption of the cloud provider, by introducing a deadline-sensitive algorithm. Scheduling in a cloud environment is a difficult optimization problem. Usually, works on the scheduling of scientific workflows focus on public clouds, where infrastructure management is an unknown black box. Thus, many works offer scheduling algorithms designed to select the best set of virtual machines over time, so that the cost to the end user is minimized. This article presents a new HEFT-based algorithm that takes users' deadlines into account to minimize the number of machines used by the cloud provider. The results show the real benefits of using our algorithm for reducing the energy consumption of the cloud provider.
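The abstract above builds on HEFT, which orders tasks by their "upward rank" (a task's mean cost plus the costliest path through its successors) before placing them. A minimal sketch of rank computation on a toy workflow (the task names, costs, and function are illustrative, not the article's algorithm):

```python
def upward_rank(task, succ, cost, comm, memo=None):
    """HEFT upward rank: cost of `task` plus the maximum, over its
    successors, of communication cost plus the successor's rank."""
    if memo is None:
        memo = {}
    if task not in memo:
        memo[task] = cost[task] + max(
            (comm.get((task, s), 0) + upward_rank(s, succ, cost, comm, memo)
             for s in succ.get(task, [])),
            default=0,  # exit tasks: rank is just their own cost
        )
    return memo[task]

# Toy diamond workflow: A -> {B, C} -> D, with illustrative costs
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
cost = {"A": 2, "B": 3, "C": 5, "D": 1}
comm = {}  # communication costs omitted for simplicity

# HEFT schedules tasks in decreasing rank order
order = sorted(cost, key=lambda t: upward_rank(t, succ, cost, comm), reverse=True)
print(order)  # ['A', 'C', 'B', 'D']
```

The deadline-aware variant described in the abstract would then place each ranked task on the fewest machines that still meet the user's deadline.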