A Survey of Techniques For Improving Energy Efficiency in Embedded Computing Systems
Recent technological advances have greatly improved the performance and
features of embedded systems. With the number of mobile devices alone now
approaching the population of Earth, embedded systems have become truly
ubiquitous. These trends, however, have also made the task of managing
their power consumption extremely challenging. In recent years, several
techniques have been proposed to address this issue. In this paper, we survey
the techniques for managing the power consumption of embedded systems. We discuss
the need for power management and classify the techniques along
several important parameters to highlight their similarities and differences.
This paper is intended to help researchers and application developers
gain insight into the workings of power management techniques and design
even more efficient high-performance embedded systems of tomorrow.
Mode Selection and Mode-Dependency Modeling for Power-Aware Embedded Systems
Among the many techniques for system-level power management, it is not currently possible to guarantee timing constraints and have a comprehensive system model at the same time. Specifically, dynamic power management (DPM) techniques can model systems consisting of multiple devices with multiple mode settings. However, if hard real-time constraints are required, then DPM techniques are usually not applicable, because they are based on prediction or timeout and are designed for applications without hard constraints. Instead, low-power scheduling (LPS) techniques may be used to satisfy timing constraints, but they assume a single processor with voltage or frequency scaling. These greedy schedulers are not generalizable to multiple processors or devices with more general modes. We propose a new method for modeling and selecting power modes for the optimal system-power management of embedded systems under timing and power constraints. First, we not only model the modes and the transition overheads at the component level, but we also capture the application-imposed relationships among the components by introducing a mode dependency graph at the system level. Second, we propose a mode selection technique, which determines when and how to change modes in these components such that the whole system can meet all power and timing constraints. Our constraint-driven approach is a critical feature for exploring power/performance tradeoffs in power-aware embedded systems. We demonstrate the application of our techniques to a low-power sensor and an autonomous rover example.
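As a toy illustration of these ideas (not the paper's actual model; the component names, modes, and power figures below are all invented), a mode dependency graph can be encoded as implication constraints between (component, mode) pairs, and mode selection as a search for the cheapest assignment that satisfies them:

```python
from itertools import product

# Hypothetical components with per-mode power draw in mW (all values invented).
components = {
    "cpu":   {"active": 200.0, "idle": 50.0, "sleep": 5.0},
    "radio": {"tx": 120.0, "rx": 60.0, "off": 1.0},
}

# Mode dependency edges: (comp_a, mode_a) requires (comp_b, mode_b).
# E.g. the radio can only transmit or receive while the CPU is active.
dependencies = [
    (("radio", "tx"), ("cpu", "active")),
    (("radio", "rx"), ("cpu", "active")),
]

def is_valid(assignment):
    """True if every dependency edge is satisfied by the mode assignment."""
    return all(assignment[a] != ma or assignment[b] == mb
               for (a, ma), (b, mb) in dependencies)

def select_modes(required):
    """Cheapest valid assignment consistent with application-required modes."""
    names = list(components)
    best = None
    for modes in product(*(components[n] for n in names)):
        assignment = dict(zip(names, modes))
        if any(assignment[n] != m for n, m in required.items()):
            continue
        if not is_valid(assignment):
            continue
        power = sum(components[n][m] for n, m in assignment.items())
        if best is None or power < best[1]:
            best = (assignment, power)
    return best

# Requiring the radio to transmit forces the CPU active via the dependency graph.
assignment, power = select_modes({"radio": "tx"})
print(assignment, power)  # {'cpu': 'active', 'radio': 'tx'} 320.0
```

A real formulation like the paper's would also model transition overheads and timing constraints, and would replace the brute-force enumeration with an exact optimization.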
Carbon footprint optimization in an InterCloud environment: models and methods
RÉSUMÉ
Over the past decade or so, the dematerialization of information has expanded rapidly with the rise of Cloud Computing. The computing power and storage on offer, together with the flexibility of use, have encouraged the growing outsourcing of data to remote servers. These free users from the burden of traditional IT tools and give them access to a wide range of online services, which are scalable workloads billed according to usage. In particular, under the Infrastructure-as-a-Service model, the client has access to a hosted physical infrastructure and can rent physical servers on which to run applications encapsulated in virtual machines (VMs).
However, the emergence of the cloud and its large-scale adoption pose challenges for infrastructure providers. Beyond deploying and configuring physical networks, they must work with the existing underlying infrastructure and devise efficient mechanisms for assigning user requests to servers and data centers, constrained by the performance of the hosted applications and the security requirements imposed by clients. Ever-growing demand, and the need to guarantee a certain quality of service, force providers to invest substantial capital to expand their hosting offerings across several geographic regions. With this large-scale deployment of huge data centers, their intensive use, and rising operating and electricity costs, operating expenses have quickly exceeded investments. Consequently, several authors have addressed the problem of workload placement in a cloud environment and developed decision-support tools based jointly on increasing profits, better matching client needs to the available infrastructure, and maximizing efficiency and resource utilization.
Although Cloud Computing offers a favorable answer to the problem of computation and data storage, its large-scale adoption is hampered by concerns about its ecological footprint. The excessive use of data centers, as well as their management and maintenance, requires ever more electric power, translating into an increasingly large carbon footprint that accelerates global warming. Hence, when choosing a cloud solution, some users question the environmental impact of that choice. In this context, to foster the expansion of the cloud, IT giants have no choice but to give their physical infrastructure a "green" dimension. This translates into workload-assignment techniques aimed at reducing the carbon footprint of data centers so that Cloud Computing becomes both a technological and an ecological success.
Several recent studies have addressed reducing the carbon footprint of a single data center using energy-optimization techniques. However, in an InterCloud setting, where data centers are geographically distributed and powered by renewable or non-renewable energy sources, total energy consumption alone cannot reflect the carbon footprint of such an environment. Accordingly, further research has optimized the environmental impact of an InterCloud while accounting for the heterogeneity of the infrastructure. However, only the workload-placement process was optimized, with no consideration for improving data center energy efficiency, reducing network resource consumption, or meeting client requirements for application performance and data security.
To this end, this thesis proposes an application-planning framework aimed at minimizing the carbon footprint in an InterCloud environment. Overall, the problem is treated holistically, combining the choice of application placement and the routing of the associated traffic with the management of the cooling systems in the different data centers. Several aspects, such as the power of the computing equipment, the consumption of network resources, and energy efficiency, are optimized simultaneously, subject to client requirements. The work was carried out in three phases.
In the first part of the work, an optimization model for placing simple, single-VM applications was developed to reduce the ecological impact of a set of data centers. The proposed carbon footprint model improves on existing energy-consumption approaches by combining VM-placement optimization with a mechanism for increasing data center energy efficiency. The latter process determines, for each active data center, the optimal temperature to be supplied by the cooling system, so as to strike a compromise between the energy savings associated with cooling and the increased power drawn by server fans as the ambient temperature rises. To add realism to the model, client requirements in terms of hosted-application performance, security, and redundancy were also considered. An analysis of the monotonicity and convexity of the resulting nonlinear model was conducted to highlight the importance of determining an optimal temperature value. The problem was then transformed into a linear model and solved optimally with a mathematical solver using integer linear programming. To demonstrate the relevance of the proposed optimization model in terms of carbon-footprint cost, an analysis of the cost structure and of the impact of the load was carried out. To better interpret the results, a simplified version of the model, free of any client requirements, was also considered.
This simplified model was also compared with various techniques for optimizing the carbon footprint both within a single data center and across an InterCloud environment. The results showed that the proposed model reduces the carbon-footprint cost by up to 65%. Moreover, to highlight the effectiveness of the proposed model in placing VMs while respecting security and performance constraints, the simplified model was compared with the model incorporating client requirements. Although the unconstrained model generally yields lower carbon-footprint costs than the full model, it remains less attractive, since the carbon savings obtained through blind consolidation do not offset the percentage of constraint violations. These results also demonstrated the good performance of the full model compared with its simplified variant, in the sense that the former sometimes achieves configurations with the same cost as the simplified model while additionally guaranteeing that user requirements are met.
The VM placement mechanism is a complex problem to solve. Because of the NP-complete nature of the problem, computation time grows exponentially with the input size, and only small instances, even of the simplified model, could be solved with the exact method. To overcome this, in the second stage of our work we propose a metaheuristic-based solution method that obtains high-quality solutions in polynomial time for large instances. The solution method proposed here is based on the Iterated Local Search (ILS) heuristic, which implements a descent as its local search mechanism and, once the descent stalls, performs jumps in the solution space to restart the exploration from a new configuration. To speed up the evaluation of a configuration, gain functions expressing the cost difference between the current solution and the candidate neighbor were derived. Several perturbation mechanisms were also implemented to escape local optima. Broadly, the results presented are of two kinds: the parameterization of the method and the performance evaluation of the algorithm. The parameterization phase determined the mechanisms to implement at each step of the algorithm as well as the ideal values of the key parameters. The performance of the algorithm was then compared, first, with that of the exact method defined in the first part.
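The Iterated Local Search scheme described above (a descent followed by perturbation jumps) can be sketched on a toy VM-to-data-center placement instance. All loads, capacities, and carbon intensities below are invented, and the neighborhood is a simple single-VM move; this only illustrates the ILS skeleton, not the thesis's model:

```python
import random

random.seed(0)

carbon = [0.9, 0.4, 0.7]    # invented carbon cost per unit of load, per data center
load = [4, 2, 3, 1, 5]      # invented VM loads
capacity = [8, 6, 8]        # invented data center capacities

def cost(assign):
    """Carbon cost of an assignment (assign[v] = data center hosting VM v)."""
    return sum(load[v] * carbon[dc] for v, dc in enumerate(assign))

def feasible(assign):
    used = [0] * len(capacity)
    for v, dc in enumerate(assign):
        used[dc] += load[v]
    return all(u <= c for u, c in zip(used, capacity))

def first_fit():
    """Feasible starting point: place each VM in the first fitting data center."""
    assign, used = [], [0] * len(capacity)
    for v in range(len(load)):
        dc = next(d for d in range(len(capacity)) if used[d] + load[v] <= capacity[d])
        assign.append(dc)
        used[dc] += load[v]
    return assign

def descent(assign):
    """Local search: apply improving single-VM moves until none remains."""
    improved = True
    while improved:
        improved = False
        for v in range(len(load)):
            for dc in range(len(capacity)):
                cand = assign[:]
                cand[v] = dc
                if feasible(cand) and cost(cand) < cost(assign):
                    assign, improved = cand, True
    return assign

def perturb(assign, k=2):
    """Jump: reassign k random VMs, keeping the move only if still feasible."""
    cand = assign[:]
    for v in random.sample(range(len(load)), k):
        cand[v] = random.randrange(len(capacity))
    return cand if feasible(cand) else assign

def ils(iterations=50):
    cur = descent(first_fit())
    best = cur
    for _ in range(iterations):
        cand = descent(perturb(cur))
        if cost(cand) <= cost(cur):   # accept non-worsening restarts
            cur = cand
        if cost(cand) < cost(best):
            best = cand
    return best, cost(best)

best, c = ils()
print(best, c)
```

The gain functions mentioned in the text would replace the full `cost(cand)` recomputation with an incremental delta; the thesis's actual model additionally handles client constraints, traffic routing, and cooling decisions.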
The results show that the solutions generated by the proposed method are on average within about 0.2% of the optimal solution, with a maximum gap of 2.6% and an average running time of under 3 seconds. To analyze the general performance of the proposed method, it was run on model instances of various sizes, and the results were evaluated against those obtained by implementing three approximate methods from the literature. The results showed that the proposed heuristic strikes a good compromise between solution quality and running time, and can achieve carbon-cost savings of up to 34%.
Furthermore, increasingly complex applications spanning several VMs are being hosted in the cloud. They introduce inter-VM traffic, drawing on network resources to carry information from one virtual machine (VM) to another. Since the energy consumption of network resources accounts for roughly a quarter of a data center's total power, the energy impact of this equipment can no longer be neglected when deciding on VM placement to reduce the ecological footprint of several data centers. In this context, the last phase of our work extends the model developed in the first stage by combining VM-placement optimization with the energy-efficiency mechanism and traffic routing. Moreover, since traffic routing is itself NP-complete, combining it with VM placement yields an even harder problem. We therefore also propose a solution approach that hybridizes two metaheuristics, Iterated Local Search (ILS) and Tabu Search (TS). In broad terms, the method implements the ILS algorithm with a local search mechanism adapted from the TS heuristic. This hybridization draws on the advantages of both methods: on the one hand, short- and long-term memory mechanisms that avoid cycles and local optima, and on the other, perturbation operators that restart the exploration from a new starting configuration. In the experimental phase, both the global model and the proposed solution method were evaluated.
The global model was implemented in AMPL/CPLEX using integer linear programming and evaluated against other reference models, each optimizing a single objective at a time. As expected, the proposed model yields better configurations in terms of carbon-footprint cost, with gains of up to about 900% for the instances considered. This optimization, however, comes at the price of a relatively higher average computation time, owing to the complexity of the proposed model. Since the carbon savings achieved are substantially larger than the observed differences in computation time, the results demonstrated the high effectiveness of the proposed model compared with single-objective models. The proposed approximate solution method was implemented in C++, and preliminary experiments allowed us to identify the optimal values of its key parameters, most of which depend on the problem size. The effectiveness of the method, in terms of the trade-off between carbon-footprint cost and running time, was first compared against lower-bound values for small instances. The results show that the algorithm finds solutions, on average, within 3% of the lower bound in polynomial time, in contrast to the exponential growth in computation time of the exact method.
For the larger instances, a comparison with various reference methods showed that the proposed approach consistently finds minimum-cost configurations in a short time, underscoring the good performance of the heuristic and the soundness of the associated simulation-parameter choices.
In light of the results obtained, these experiments showed that this work will provide cloud providers with effective tools for planning applications across their data centers, helping them address the concerns raised about the ecological impact of Cloud Computing on the well-being of the planet.
----------ABSTRACT
The last decade or so has seen a rapid rise in cloud computing usage, which has driven the dematerialization of information. The higher computing power and storage, combined with greater usage flexibility, have promoted the outsourcing of data to remote servers, allowing users to overcome the burden of traditional IT tools while having access to a wider range of online services that are charged based on usage. In particular, in the case of an infrastructure service model, clients are given access to a physical infrastructure where they can rent physical servers to run their applications, which are encapsulated in VMs.
However, the emergence of cloud service and its wide adoption impose new challenges on infrastructure providers. They have to optimize the underlying existing infrastructure by identifying efficient mechanisms for assigning user requests to servers and data centers, while satisfying performance and security constraints, as imposed by the clients. Faced with increasing demand, providers have to invest significant capital in order to expand their hosting offerings in several geographic areas, to provide the required Quality of Service (QoS). The increased use of data centers also has a huge bearing on operating costs and energy consumption, as operating expenses have quickly exceeded the investment. Therefore, several authors have been tackling the placement of loads in a cloud environment by developing tools to aid in decision-making. Most of the proposed solutions are guided by financial aspects to increase profits by determining the best mapping between the basic needs of the client and the available infrastructure, in order to meet the QoS constraints while maximizing the efficiency and the use of resources.
However, while cloud computing represents a great opportunity for both individuals and businesses, its widespread adoption is slowed by concerns regarding its global ecological impact. The excessive use of data centers, as well as their management and maintenance, requires huge amounts of electric power, thus accelerating the global warming process. Therefore, before choosing a cloud solution, some users will take that environmental impact into consideration. This, in turn, forces cloud providers to consider the "green" aspect of their infrastructure by developing new ways of assigning loads that reduce the carbon footprint of their data centers, in order for cloud computing to be both a technological and an ecological success.
Several works have recently been published that aim to reduce the environmental impact of clouds. The first ones focused on reducing the carbon footprint of a single data center by optimizing the consumed energy. However, in the context of an InterCloud environment composed of different data centers that are geographically distributed and powered by renewable energy sources or not, the total energy consumed cannot reflect the carbon footprint of that environment. That is why subsequent research has focused on optimizing the environmental impact of an InterCloud where the heterogeneity of the infrastructure is taken into account. However, only the VM placement process has been optimized, with no consideration for improving data center energy efficiency, network power consumption, or customer requirements, as far as application performance and data security are concerned.
To this end, this thesis proposes a framework for assigning applications to an InterCloud with a view to minimizing the carbon footprint of such a computing environment. In order to address this issue, the problem is treated holistically, jointly optimizing the VM placement process, the traffic routing, and a cooling management technique that considers the dynamic behavior of the IT fans. Various aspects, such as the processing power, the network resource consumption, and the energy efficiency, are simultaneously optimized, while satisfying customer requirements. The work is carried out in three phases.
First, we propose an optimization model for placing standalone VMs, in order to reduce the environmental impact of a set of data centers. The proposed carbon footprint model improves on existing energy-consumption approaches by combining the optimization of the VM placement process with the energy efficiency of data centers. The latter process determines, for each active data center, the optimal temperature to be provided by the cooling system, so as to find a compromise between the energy gains associated with the cooling and the increased power consumption of the server fans at high temperatures. In order to add some realism to the model, customer requirements are also considered, in terms of application performance, security, and redundancy. An analysis of the monotonicity and the convexity of the resulting nonlinear model was conducted to highlight the importance surrounding the determination of the optimal temperature value. Subsequently, the problem is transformed into a linear model and solved optimally with a mathematical solver, using integer linear programming. To demonstrate the relevance of the proposed optimization model in terms of carbon footprint, an analysis of the cost structure and of the impact of the load is carried out. In order to better highlight the results, a simplified version of the model, free from any client requirements, is also considered. This same simplified model is also compared with other techniques that optimize the carbon footprint both within a single data center and across an InterCloud environment. The results demonstrate that the proposed model can yield savings of up to 65% in terms of carbon footprint cost. In addition, to highlight the effectiveness of the proposed model when placing VMs while satisfying security and performance constraints, the simplified model is compared with the global model that incorporates customer requirements.
Although the model without constraints usually generates smaller carbon footprint costs, its result is not as interesting as it may seem, because these savings do not offset the cost of violating constraints. These results also demonstrate the good performance of the global model compared to its simplified variant, in the sense that the former sometimes provides configurations with the same cost as the simplified model, while ensuring that user requirements are met.
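The cooling/fan trade-off described in this first phase can be illustrated with a toy model (all coefficients invented): a higher supply temperature raises the cooling system's coefficient of performance (COP), cutting cooling power, but forces server fans to spin faster, and fan power grows roughly with the cube of fan speed. The optimal temperature balances the two effects:

```python
def cop(t):
    # Cooling efficiency (COP), increasing with supply temperature t (in °C);
    # the quadratic form and coefficients are illustrative only.
    return 0.0068 * t**2 + 0.0008 * t + 0.458

def fan_power(t, p_ref=2.0, t_ref=15.0):
    # Fan power ~ cube of fan speed; speed is ramped linearly with temperature
    # in this toy model (the 0.08 per °C slope is an invented value).
    return p_ref * (1.0 + 0.08 * (t - t_ref)) ** 3

def total_power(t, p_it=100.0):
    # IT power + cooling power (IT power divided by COP) + fan power, in kW.
    return p_it + p_it / cop(t) + fan_power(t)

# Grid search for the supply temperature minimizing total power over 15-30 °C.
ts = [15.0 + 0.1 * i for i in range(151)]
t_opt = min(ts, key=total_power)
print(round(t_opt, 1), round(total_power(t_opt), 1))
```

An interior optimum appears because cooling power falls while fan power rises with temperature; this is the compromise that the model's monotonicity and convexity analysis formalizes before the problem is linearized.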
Placing VMs is a complex problem to solve. Due to its NP-complete nature, the computing time grows exponentially in the length of the inputs. Even with the simplified model, only small instances can be solved with the exact method. In order to overcome this problem, we propose, in the second stage of our work, a resolution method based on metaheuristics, in order to obtain good solutions for large instances in polynomial time. The proposed method is based on the ILS heuristic, which implements a descent as its local search mechanism and, following the early termination of the descent, performs jumps in the solution space to restart the algorithm from a new configuration. Furthermore, in order to speed up the evaluation of a configuration, gain functions that reflect the cost difference between the current solution and the considered neighbor are determined. Various perturbation mechanisms are also implemented to avoid the trap of local optima. In general, the presented results are of two types: the parametrization of the method and the performance evaluation of the proposed algorithm. The parametrization phase helps determine the mechanisms to implement for each step of the algorithm as well as the ideal value of each key parameter.