35 research outputs found

    Optimisation de l'empreinte carbone dans un environnement intercloud : modÚles et méthodes

    Over the past decade, the dematerialization of information has grown rapidly with the rise of cloud computing. The computing power and storage on offer, together with the flexibility of use, have encouraged increasing outsourcing of data to remote servers. These free users from the burden of traditional IT tools and give them access to a wide range of online services, provisioned as elastic workloads billed according to usage. In particular, under the Infrastructure as a Service model, the client is provided with a hosted physical infrastructure and can rent physical servers on which to run applications encapsulated in Virtual Machines (VMs). However, the emergence of the cloud and its large-scale adoption pose challenges for infrastructure providers. Beyond deploying and configuring physical networks, they must contend with the existing underlying infrastructure and devise efficient mechanisms for assigning user requests to servers and data centers, constrained by the performance of the hosted applications and the security requirements imposed by clients. Ever-growing demand and the concern to deliver a given quality of service force providers to invest significant capital in order to multiply their hosting offerings across several geographic areas. With this large-scale deployment of enormous data centers, their intensive use, and rising operating and electricity costs, operating expenses have quickly exceeded capital investments. Several authors have therefore studied the workload placement problem in a cloud environment and developed decision-support tools based jointly on increasing profits, better matching the client's essential needs to the available infrastructure, and maximizing resource efficiency and utilization. Although cloud computing offers a favorable answer to the problem of computing and storing information, its large-scale adoption is slowed by concerns about its ecological signature. The excessive use of data centers, together with their management and maintenance, requires ever more electric power, translating into a growing carbon footprint and thereby accelerating global warming. Thus, when opting for a cloud solution, some users question the environmental impact of such a choice. In this context, to foster the expansion of the cloud, the IT giants have no choice but to give their physical infrastructure a "green" dimension. This translates into workload assignment techniques aimed at reducing the carbon footprint of data centers, so as to make cloud computing both a technological and an ecological success. Several recent studies have addressed reducing the carbon footprint of a single data center through techniques that optimize the energy consumed.
However, in the context of an InterCloud, where different data centers are geographically distributed and powered by renewable energy sources or not, total energy consumption cannot reflect the carbon footprint of that environment. Further research has therefore focused on optimizing the environmental impact of an InterCloud where the heterogeneity of the infrastructure is taken into account. However, only the workload placement process was optimized, with no consideration for improving the energy efficiency of the data centers, reducing the consumption of network resources, or meeting client requirements for application performance and data security. To this end, this thesis proposes an application planning framework aimed at minimizing the carbon footprint in an InterCloud environment. The problem is treated holistically, combining the choice of application placement and the routing of the associated traffic with the management of the cooling system in the different data centers. Various aspects, such as the power of the computing equipment, the consumption of network resources, and energy efficiency, are optimized simultaneously under the constraint of client requirements. The work was carried out in three phases. In the first part, an optimization model for the placement of simple, single-VM applications was developed to reduce the ecological impact of a set of data centers. The proposed carbon footprint model improves on existing energy consumption approaches by combining the optimization of VM placement with a mechanism for increasing the energy efficiency of the data centers. This latter process determines, for each active data center, the optimal temperature to be supplied by the cooling system, so as to strike a compromise between the energy gains associated with cooling and the increased power drawn by server fans as the ambient temperature rises. To add realism to the model, client requirements, in terms of hosted application performance, security, and redundancy, were also considered. An analysis of the monotonicity and convexity of the resulting nonlinear model was performed to highlight the importance of determining an optimal temperature value. The problem was then transformed into a linear model and solved optimally with a mathematical solver, using integer linear programming techniques. To demonstrate the relevance of the proposed optimization model in terms of carbon footprint cost, an analysis of the cost structure and of the impact of the load was carried out. To better appreciate the results, a simplified version of the model, free of any client requirements, was also considered. This simplified model was also compared with various techniques that optimize the carbon footprint both within a single data center and across an InterCloud environment.
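The abstract states the cooling trade-off only in words; under illustrative notation (not the author's), the joint placement/set-point objective it describes could be sketched as below, where raising the set-point T_d improves the cooling efficiency term COP(T_d) but increases the IT power through server fan speed-up:

```latex
% Illustrative formulation only; the thesis's exact notation is not given.
% x_{v,d} \in \{0,1\}: VM v hosted in data center d
% T_d: cooling set-point of data center d
% \gamma_d: carbon intensity of the energy powering d
\min_{x,\;T} \; \sum_{d \in \mathcal{D}} \gamma_d
  \left( P^{\mathrm{IT}}_d(x, T_d)
       + \frac{P^{\mathrm{IT}}_d(x, T_d)}{\mathrm{COP}(T_d)} \right)
\quad \text{s.t.} \quad
\sum_{d \in \mathcal{D}} x_{v,d} = 1 \;\; \forall v,
\;\; \text{plus capacity, performance, and security constraints.}
```

The first term is the IT power (which grows with T_d through fan power) and the second is the cooling power (which shrinks as T_d rises, since COP improves), so the optimal set-point balances the two, as the abstract describes.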
The results showed that the proposed model reduces the carbon footprint cost by up to 65%. Moreover, to highlight the effectiveness of the proposed model in placing VMs while respecting the security and performance constraints, the simplified model was compared with the model incorporating client requirements. Although the unconstrained model generally produces lower carbon footprint costs than the complete model, it remains less attractive, because the carbon footprint gain resulting from blind consolidation does not offset the percentage of constraint violations. These results also demonstrated the good performance of the complete model compared with its simplified variant, in the sense that the former sometimes yields configurations with the same cost as the simplified model while additionally guaranteeing that user requirements are met. The VM placement mechanism is a complex problem to solve. Owing to its NP-complete nature, computation time grows exponentially with the size of the input, and only small instances, even for the simplified model, could be solved with the exact method. To overcome this problem, the second stage of our work proposes a resolution method based on metaheuristics, with the aim of obtaining good-quality solutions in polynomial time for large instances. The proposed method is based on the Iterated Local Search (ILS) heuristic, which implements a descent as its local search mechanism and, upon early termination of the descent, performs jumps in the solution space to restart the exploration from a new configuration. To speed up the evaluation of a configuration, gain functions expressing the cost difference between the current solution and the considered neighbor were derived. Various perturbation mechanisms were also implemented to escape local optima. Broadly, the results presented are of two kinds: the parameterization of the method and the performance evaluation of the algorithm. The parameterization phase determined the mechanisms to implement at each step of the algorithm as well as the ideal values of the key parameters. The performance of the algorithm was then first compared with that of the exact method defined in the first part. The results show that the solutions generated by the proposed method lie, on average, within about 0.2% of the optimal solution, with a maximum gap of 2.6% and an average execution time under 3 seconds. To analyze the general performance of the proposed method, it was run on instances of different sizes, and the results were evaluated against those of three approximate methods from the literature.
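The abstract names the heuristic's ingredients (a descent local search, incremental gain functions for evaluating neighbors, and perturbation jumps) without giving pseudocode. A minimal ILS skeleton consistent with that description might look as follows; `cost`, `neighbors`, and `perturb` are hypothetical problem-specific callables standing in for the thesis's unspecified components:

```python
import random

def iterated_local_search(initial, cost, neighbors, perturb,
                          max_iters=1000, seed=0):
    """Minimal ILS skeleton: descent plus perturbation restarts.

    Illustrative sketch, not the thesis's implementation; the callables
    would encode VM-to-data-center assignments and their carbon cost.
    """
    rng = random.Random(seed)
    best = current = initial
    for _ in range(max_iters):
        # Descent: move to an improving neighbor until none exists.
        improved = True
        while improved:
            improved = False
            for cand in neighbors(current):
                # A gain function would evaluate cost(cand) - cost(current)
                # incrementally instead of recomputing from scratch.
                if cost(cand) < cost(current):
                    current, improved = cand, True
                    break
        if cost(current) < cost(best):
            best = current
        # Jump: perturb the incumbent to restart the exploration.
        current = perturb(best, rng)
    return best
```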
The results showed that the proposed heuristic strikes a good compromise between solution quality and execution time, and can yield carbon cost savings of up to 34%. Furthermore, increasingly complex applications spanning several VMs are being hosted in the cloud. They introduce inter-VM traffic, drawing on network resources to carry information from one Virtual Machine (VM) to another. Since the energy consumption of network resources accounts for roughly a quarter of a data center's total power, the energy impact of this equipment can no longer be neglected when deciding where to place VMs in order to reduce the ecological footprint of several data centers. In this context, the last phase of our work proposes an extension of the model developed in the first stage, in which the optimization of VM placement is combined with the energy efficiency mechanism and with traffic routing. Moreover, since the traffic routing process is itself NP-complete, combining it with the VM placement mechanism yields an even harder problem. We therefore also proposed a resolution approach based on the combination of two metaheuristics, Iterated Local Search (ILS) and Tabu Search (TS). Broadly, the method implements the ILS algorithm with a local search mechanism adapted from the TS heuristic. This hybridization draws on the advantages of both methods: on one hand, short- and long-term memory mechanisms to avoid cycles and local optima; on the other, perturbation operators that restart the exploration from a new starting configuration. In the experimentation phase, both the global model and the proposed resolution method were evaluated. The global model was implemented in AMPL/CPLEX using integer linear programming techniques and was evaluated against other reference models that optimize a single objective at a time. As expected, the proposed model yields better configurations in terms of carbon footprint cost, with gains of up to roughly 900% for the instances considered. This optimization, however, comes at the price of a relatively higher average computation time, owing to the complexity of the proposed model. Since the carbon savings achieved are substantially larger than the observed differences in computation time, the results demonstrated the strong efficiency of the proposed model compared with single-objective models. The proposed approximate resolution method was implemented in C++, and preliminary experiments allowed us to identify the optimal values of the method's key parameters, most of which are tied to problem size. The efficiency of the method, in terms of the compromise between carbon footprint cost and execution time, was first compared against lower-bound values for small instances.
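The hybrid is specified only at the level of its ingredients: ILS perturbations wrapped around a tabu-style local search with memory. As a purely illustrative sketch under those assumptions, the inner tabu step could look like this, with the hypothetical `move_key` identifying a move for the short-term memory; it would replace the descent in the ILS skeleton above:

```python
from collections import deque

def tabu_local_search(start, cost, neighbors, move_key,
                      tenure=20, max_moves=200):
    """Tabu-style local search usable as the inner step of ILS.

    A FIFO short-term memory of recent move keys forbids cycling back;
    an aspiration test still accepts a tabu move when it beats the best
    cost seen so far. Illustrative sketch, not the thesis's algorithm.
    """
    best = current = start
    tabu = deque(maxlen=tenure)  # short-term memory of recent moves
    for _ in range(max_moves):
        candidates = [n for n in neighbors(current)
                      if move_key(current, n) not in tabu
                      or cost(n) < cost(best)]  # aspiration criterion
        if not candidates:
            break
        nxt = min(candidates, key=cost)  # best admissible neighbor
        tabu.append(move_key(current, nxt))
        current = nxt
        if cost(current) < cost(best):
            best = current
    return best
```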
The results show that the developed algorithm finds solutions, on average, within 3% of the lower bound in polynomial time, in contrast with the exponential growth in computation time of the exact method. For larger instances, a comparison with several reference methods showed that the proposed approach consistently finds the minimum-cost configurations in reduced time, underlining the good performance of the developed heuristic and the soundness of the associated choice of simulation parameters. In light of the results obtained, these experiments showed that this work will provide cloud providers with efficient tools for planning applications across their data centers, so as to better address the concerns raised about the ecological impact of cloud computing on the well-being of the planet.

    Strategies for Increased Energy Awareness in Cloud Federations

    This chapter first identifies three scenarios that current energy-aware cloud solutions cannot handle as isolated IaaS systems, but for which federative efforts offer opportunities worth exploring. These scenarios are centered around: (i) a multi-datacenter cloud operator, (ii) commercial cloud federations, and (iii) academic cloud federations. Based on these scenarios, we identify energy-aware scheduling policies to be applied in the management solutions of cloud federations. Among other factors, these policies should consider the behavior of independent administrative domains, the frequently contradicting goals of the participating clouds, and federation-wide energy consumption.

    Optimisation de l'intégration des requêtes de réseaux virtuels dans un environnement multiCloud

    Nowadays, Infrastructure as a Service (IaaS) has become the most widely adopted cloud service model. In this business paradigm, a Service Provider (SP) can lease, from one or more Cloud Providers (CPs), infrastructure-layer resources (processing, storage, network access, routing services, etc.) packaged into interconnected Virtual Machines (VMs) and assembled as a Virtual Network Request (VNR), in order to build heterogeneous virtual networks that will offer customized services and applications to its end users. Despite its successful adoption, the IaaS model faces a fundamental resource management challenge lying in the efficient and dynamic embedding of VNRs onto distributed and shared substrate infrastructures. Heterogeneous resources need to be efficiently allocated to host VMs in specific substrate data centers (DCs) and to route Virtual Links (VLs), representing the traffic exchanged between interconnected VMs, onto suitable substrate paths between the hosting DCs, in order to satisfy the performance, Quality of Service (QoS), security, and geographical location constraints imposed by the SP. In the context of network virtualization, this issue is usually referred to as the NP-hard Virtual Network Embedding (VNE) problem, which has only recently been addressed in the literature within a multicloud network, where the substrate infrastructures are owned by different, independent CPs. Such a context adds complexity and scalability issues, since the whole VNE process requires a hierarchical resolution approach in which two major phases of operation are performed, each with different purposes according to the acting player: the multicloud VNR splitting phase, followed by the intra-cloud VNR segment mapping phase. In the first phase, played indirectly by the SP, the latter generally mandates a Virtual Network Provider (VNP), which acts as a virtual brokerage service on behalf of the SP, to select eligible CPs based on the SP's goals and requirements and to split the VNRs into segments. In the second phase, which corresponds to the well-known VNE problem within a single CP, largely addressed in past research works, each selected CP uses a mapping approach to embed the assigned VNR segments into its intra-cloud network.
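The two-phase process described above lends itself to a small illustration. The following hypothetical sketch shows only the first (splitting) phase, greedily assigning each VM of a VNR to the cheapest eligible CP and deriving the inter-CP virtual links that the second (intra-cloud embedding) phase would receive; all names and the cost model are assumptions for illustration, not the paper's algorithm:

```python
def split_vnr(vms, links, cp_price, eligible):
    """Phase 1 (VNP): partition a VNR's VMs across cloud providers.

    vms: dict vm_id -> CPU demand
    links: list of (vm_a, vm_b, bandwidth) virtual links
    cp_price: dict cp_id -> price per CPU unit
    eligible: dict vm_id -> set of CPs satisfying the SP's constraints
    Returns per-CP VM segments plus the inter-CP links left to route.
    """
    assign = {}
    for vm, cpu in vms.items():
        # Greedy choice: cheapest provider among the eligible ones.
        assign[vm] = min(eligible[vm], key=lambda cp: cp_price[cp] * cpu)
    segments = {}
    for vm, cp in assign.items():
        segments.setdefault(cp, []).append(vm)
    # Links whose endpoints land in different CPs must be routed
    # between clouds; intra-CP links are handled in phase 2.
    inter_cp = [(a, b, bw) for a, b, bw in links if assign[a] != assign[b]]
    return segments, inter_cp
```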

    Analysis of Application Delivery Platform for Software Defined Infrastructures

    Application Service Providers (ASPs) obtaining resources from multiple clouds have to contend with the different management and control platforms employed by cloud service providers (CSPs) and network service providers (NSPs). Distributing applications over multiple clouds has a number of benefits, but the absence of a common multi-cloud management platform that would give ASPs dynamic, real-time control over resources across multiple clouds and the interconnecting networks makes this task arduous. OpenADN, being developed at Washington University in Saint Louis, fills this gap. However, the performance issues of such a complex, distributed, multi-threaded platform, if not tackled appropriately, may neutralize some of the gains accruable to the ASPs. In this paper, we establish the need for, and methods of, collecting precise and fine-grained behavioral data on OpenADN-like platforms that can be used to optimize their behavior in order to control operational cost, performance (e.g., latency), and energy consumption.

    Optimal Selection Techniques for Cloud Service Providers

    Nowadays cloud computing permeates almost every domain in Information and Communications Technology (ICT) and, increasingly, most of the action is shifting from large, dominant players toward independent, heterogeneous, private/hybrid deployments, in line with an ever wider range of business models and stakeholders. The rapid growth in the number and diversity of small and medium cloud providers is bringing new challenges in the as-a-service space. Indeed, the significant hurdles smaller cloud service providers face in competing with the incumbent market leaders induce some innovative players to "federate" deployments in order to pool a larger, virtually limitless, set of resources across the federation, and thus gain in terms of economies of scale and resource usage efficiency. Several challenges need to be addressed in building and managing a federated environment; they may go under the "Security", "Interoperability", "Versatility", "Automatic Selection" and "Scalability" labels. The aim of this paper is to survey the approaches and challenges belonging to the "Automatic Selection" category. This work provides a literature review of the different approaches adopted for automatic and optimal cloud service provider selection, also covering federated and multi-cloud environments.

    Modern computing: Vision and challenges

    Over the past six decades, the field of computing systems has experienced significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. To maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers behind the emergence and expansion of new models, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI on edge devices. Trends emerge when one traces the technological trajectory, including the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    A Service-based Joint Model Used for Distributed Learning: Application for Smart Agriculture

    Distributed analytics helps make data-driven services smarter for a wide range of applications in many domains, including agriculture. The key to producing services at such a level is timely analysis for deriving insights from reliable data. Centralized data analytic services are becoming infeasible due to limitations in the Information and Communication Technologies (ICT) infrastructure, the timeliness of the information, and data ownership. Distributed Machine Learning (DML) platforms facilitate efficient data analysis and overcome such limitations effectively. Federated Learning (FL) is a DML methodology that enables optimizing resource consumption while performing privacy-preserving, timely analytics. To create such services through FL, innovative machine learning (ML) models are needed, since data complexity and application requirements limit the applicability of existing ones. Even though neural network (NN)-based models are highly advantageous, the use of NNs in FL settings is limited by thin clients (with low computational capability) and high-dimensional data (with a large number of model parameters). Therefore, in this paper, we propose a novel Neural Network (NN)- and Partial Least Squares (PLS) regression-based joint FL model (FL-NNPLS). Its predictive performance is evaluated under sequential- and parallel-updating FL algorithms in a smart farming context for milk quality analysis. Smart farming is a fast-growing industrial sector that requires effective analytics platforms to enable sustainable farming practices, yet the use of advanced ML techniques to improve the effectiveness of those practices is still at an early stage. Our FL-NNPLS approach compares well with a centralized approach and demonstrates state-of-the-art performance.
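The abstract gives no algorithmic detail for the joint NN/PLS model, but the parallel-updating FL round it alludes to follows the familiar federated-averaging pattern, sketched below with all names illustrative; the per-client `local_step` would stand in for the paper's NN or PLS fitting:

```python
import numpy as np

def federated_round(global_w, client_data, local_step, weights=None):
    """One parallel-updating FL round: clients train locally on their
    own data; the server averages the returned parameter vectors.

    global_w: np.ndarray of model parameters
    client_data: list of per-client datasets (they never leave the client)
    local_step: callable (w, data) -> updated w (e.g., an NN or PLS fit)
    weights: optional per-client averaging weights (e.g., sample counts)
    Illustrative sketch, not the paper's FL-NNPLS algorithm.
    """
    updates = [local_step(global_w.copy(), d) for d in client_data]
    if weights is None:
        weights = [1.0] * len(updates)
    total = sum(weights)
    # Weighted average of client models: the only information shared
    # upstream, which is what preserves data privacy in FL.
    return sum(w * u for w, u in zip(weights, updates)) / total
```

A sequential-updating variant would instead pass the model from client to client, each applying `local_step` to the incoming parameters.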