
    Optimal VDC service provisioning in optically interconnected disaggregated data centers

    The virtual data center (VDC) is a key service in modern data center (DC) infrastructures. However, the rigid architecture of traditional servers inside DCs may lead to blocking situations when deploying VDC instances. To overcome this problem, the disaggregated DC paradigm has been introduced. In this letter, we present an integer linear programming (ILP) formulation to optimally allocate VDC requests on top of an optically interconnected disaggregated DC infrastructure, aiming to quantify the benefits that such an architecture can bring compared with traditional server-centric DCs. Moreover, a lightweight simulated-annealing-based heuristic is provided for scenarios where the scalability of the ILP is challenged. The obtained numerical results reveal the substantial benefits yielded by the resource disaggregation paradigm.
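    As a companion to the abstract above, here is a minimal simulated-annealing sketch for mapping the virtual nodes of a VDC request onto disaggregated resource blades. It is only an illustration of the heuristic idea: the data model, the cost function (number of distinct blades used, as a crude proxy for optical interconnect usage) and all identifiers are assumptions, not the letter's actual ILP or heuristic.

```python
# Hedged sketch: simulated annealing for VDC node placement on disaggregated
# blades. Cost = number of distinct blades used (illustrative proxy only).
import math
import random

def sa_place(vdc_nodes, blades, capacity, iters=5000, t0=1.0, alpha=0.999):
    """vdc_nodes: {node: cpu_demand}; blades: list of blade ids;
    capacity: {blade: free_cpu}. Returns a {node: blade} mapping or None."""
    def feasible(assign):
        used = {b: 0 for b in blades}
        for n, b in assign.items():
            used[b] += vdc_nodes[n]
        return all(used[b] <= capacity[b] for b in blades)

    def cost(assign):
        return len(set(assign.values()))

    assign = None
    for _ in range(1000):                      # random feasible start
        cand = {n: random.choice(blades) for n in vdc_nodes}
        if feasible(cand):
            assign = cand
            break
    if assign is None:
        return None

    cur = cost(assign)
    best, best_cost, t = dict(assign), cur, t0
    for _ in range(iters):
        n = random.choice(list(vdc_nodes))     # perturb one node's placement
        old = assign[n]
        assign[n] = random.choice(blades)
        if not feasible(assign):
            assign[n] = old
            continue
        new = cost(assign)
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new                          # accept (possibly worse) move
            if cur < best_cost:
                best, best_cost = dict(assign), cur
        else:
            assign[n] = old                    # reject move
        t *= alpha                             # geometric cooling
    return best

# Example: a 3-node VDC over four blades.
print(sa_place({"vm1": 4, "vm2": 4, "vm3": 8},
               ["b1", "b2", "b3", "b4"],
               {"b1": 8, "b2": 8, "b3": 8, "b4": 16}))
```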

    Software-Defined Cloud Computing: Architectural Elements and Open Challenges

    The variety of existing cloud services creates a challenge for service providers to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) and the penalties incurred when the QoS is not achieved. To avoid such penalties while the infrastructure operates with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure is needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending the virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of the physical resources in a cloud infrastructure to better accommodate the demand on QoS, through software that can describe and manage the various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation showcasing the potential of SDC in two use cases, QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement, and discuss the research challenges and opportunities in this emerging area.
    Keynote paper at the 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi, India.
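    To make the second use case more concrete, below is a hedged greedy sketch of bandwidth-aware, energy-efficient VM placement: pack VMs onto hosts that are already active (so idle hosts can be powered down) while respecting residual CPU and uplink bandwidth. The host model and the best-fit tie-breaking are illustrative assumptions, not the placement algorithm evaluated in the paper.

```python
# Hedged illustration of bandwidth-aware, energy-efficient VM placement:
# prefer already-active hosts (consolidation saves energy) that still have
# enough CPU and uplink bandwidth. Purely a sketch; not the paper's algorithm.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_cpu: int
    free_bw: int              # residual uplink bandwidth (Mb/s), assumed model
    active: bool = False
    vms: list = field(default_factory=list)

def place_vm(vm_name, cpu, bw, hosts):
    """Pick a host for one VM; returns the chosen Host or None if blocked."""
    candidates = [h for h in hosts if h.free_cpu >= cpu and h.free_bw >= bw]
    if not candidates:
        return None
    # Active hosts first (energy), then best-fit on residual CPU (packing).
    candidates.sort(key=lambda h: (not h.active, h.free_cpu))
    h = candidates[0]
    h.free_cpu -= cpu
    h.free_bw -= bw
    h.active = True
    h.vms.append(vm_name)
    return h

# Usage: three VMs packed onto as few hosts as CPU and bandwidth allow.
hosts = [Host("h1", 16, 1000), Host("h2", 16, 1000)]
for vm, cpu, bw in [("web", 4, 200), ("db", 8, 300), ("cache", 4, 100)]:
    chosen = place_vm(vm, cpu, bw, hosts)
    print(vm, "->", chosen.name if chosen else "blocked")
```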

    Software‐Defined Optical Networking (SDON): Principles and Applications

    Featuring the advantages of high capacity, long transmission distance, and low energy consumption, optical networks have been widely deployed as the most important infrastructure for the backbone transport network. With the development of the Internet, the data center has become a popular infrastructure for cloud computing, and it needs to be connected by high-bit-rate transport networks to support heterogeneous applications. In this context, the optical network also becomes a promising option for intra- and inter-datacenter networking. In the networking field, software-defined networking (SDN) has gained a lot of attention from both academia and industry, as it aims to provide a flexible and programmable control plane. SDN is applicable to optical networks, and an optical network integrated with SDN, namely a software-defined optical network (SDON), is expected to be a future transport solution that can provide both high-bit-rate connectivity and flexible network applications. The principles and applications of SDON are introduced in this chapter.

    Impact of Processing-Resource Sharing on the Placement of Chained Virtual Network Functions

    Network Function Virtualization (NFV) provides higher flexibility for network operators and reduces the complexity of network service deployment. Using NFV, Virtual Network Functions (VNFs) can be located in various network nodes and chained together in a Service Function Chain (SFC) to provide a specific service. Consolidating multiple VNFs in a smaller number of locations would allow decreasing capital expenditures. However, excessive consolidation of VNFs might cause additional latency penalties due to processing-resource sharing, and this is undesirable, as SFCs are bounded by service-specific latency requirements. In this paper, we identify two different types of penalties (referred to as "costs") related to processing-resource sharing among multiple VNFs: the context switching costs and the upscaling costs. Context switching costs arise when multiple CPU processes (e.g., supporting different VNFs) share the same CPU and thus repeated loading/saving of their context is required. Upscaling costs are incurred by VNFs requiring multi-core implementations, since they suffer a penalty due to the load-balancing needs among CPU cores. These costs affect how the chained VNFs are placed in the network to meet the performance requirements of the SFCs. We evaluate their impact while considering SFCs with different bandwidth and latency requirements in a VNF consolidation scenario.
    Accepted for publication in IEEE Transactions on Cloud Computing.
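    The trade-off described above can be sketched with a toy latency model: a context-switching term that grows with the number of VNF processes sharing a CPU, and an upscaling term for VNFs spanning several cores. The constants and the additive form are assumptions for illustration only, not the cost model of the paper.

```python
# Toy model of the two sharing penalties: context switching (more co-located
# VNF processes per CPU) and upscaling (multi-core VNFs). Constants are
# illustrative assumptions, not measured values from the paper.
CTX_SWITCH_US = 5.0   # assumed penalty per extra co-located VNF process (us)
UPSCALE_US = 20.0     # assumed penalty per extra core a VNF must span (us)

def vnf_latency(base_us, colocated_vnfs, cores_needed):
    ctx = CTX_SWITCH_US * max(colocated_vnfs - 1, 0)
    ups = UPSCALE_US * max(cores_needed - 1, 0)
    return base_us + ctx + ups

def sfc_latency(chain):
    """chain: list of (base_us, colocated_vnfs, cores_needed) per VNF."""
    return sum(vnf_latency(*vnf) for vnf in chain)

# A 3-VNF chain: consolidation saves nodes but adds sharing penalties.
consolidated = [(50, 3, 1), (80, 3, 2), (40, 3, 1)]
spread_out   = [(50, 1, 1), (80, 1, 2), (40, 1, 1)]
print(sfc_latency(consolidated), "us vs", sfc_latency(spread_out), "us")
```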

    Integrated IT and SDN Orchestration of multi-domain multi-layer transport networks

    Telecom operators' network management and control remains partitioned by technology, equipment supplier and networking layer. In some segments, network operations are highly costly due to the need for individual, and even manual, configuration of the network equipment by highly specialized personnel. In multi-vendor networks, expensive and never-ending integration processes between Network Management Systems (NMSs) and the rest of the systems (OSSs, BSSs) are a common situation, due to the lack of adoption of standard interfaces in the management systems of the different equipment suppliers. Moreover, the increasing impact of the new traffic flows introduced by the deployment of massive Data Centers (DCs) imposes new challenges that traditional networking is not ready to overcome. The fifth generation of mobile technology (5G) also introduces stringent network requirements, such as the need to connect billions of new devices to the network under the IoT paradigm, new ultra-low-latency applications (e.g., remote surgery) and vehicular communications. All these new services, together with enhanced broadband network access, are expected to be delivered over the same network infrastructure.

    In this PhD Thesis, a holistic view of network and cloud computing resources, based on the recent innovations introduced by Software Defined Networking (SDN), is proposed as the solution for designing an end-to-end multi-layer, multi-technology and multi-domain cloud and transport network management architecture, capable of offering end-to-end services from the DC networks to the customers' access networks as well as the virtualization of network resources, allowing new ways of slicing the network resources for the forthcoming 5G deployments.

    The first contribution of this PhD Thesis deals with the design and validation of SDN-based network orchestration architectures capable of improving the current solutions for the management and control of multi-layer, multi-domain backbone transport networks. These problems have been assessed and progressively solved by different control and management architectures, which have been designed and evaluated in real test environments. One of the major findings of this work has been the need to develop a common information model for transport network management, capable of describing the resources and services of multi-layer networks. In this line, the Control Orchestration Protocol (COP) has been proposed as a first contribution towards a standard management interface based on the main principles driven by SDN.

    Furthermore, this PhD Thesis introduces a novel architecture capable of coordinating the management of IT computing resources together with inter- and intra-DC networks. The provisioning and migration of virtual machines together with the dynamic reconfiguration of the network have been successfully demonstrated on a feasible timescale. Moreover, a resource optimization engine is introduced in the architecture to host optimization algorithms capable of solving allocation problems such as the optimal deployment of Virtual Machine Graphs over different DC locations while minimizing the inter-DC network resource allocation. Baseline blocking-probability results over different network loads are also presented. The third major contribution is the result of the previous two.
    With a converged cloud and network infrastructure controlled and operated jointly, the holistic view of the network allows the on-demand provisioning of network slices consisting of dedicated network and cloud resources over a distributed DC infrastructure interconnected by an optical transport network. The last chapters of this thesis discuss the management and orchestration of 5G slices based on the control and management components designed in the previous chapters. The design of one of the first network slicing architectures and the deployment of a 5G network slice on a real testbed is one of the major contributions of this PhD Thesis.
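    As a small illustration of the Virtual Machine Graph allocation problem mentioned above, the sketch below exhaustively places the nodes of a VMG into candidate DCs so that the bandwidth of edges crossing DC boundaries (i.e., consuming inter-DC transport resources) is minimized, subject to per-DC capacity. The brute-force search, the data model and all names are assumptions for illustration and only work for tiny instances; this is not the thesis' optimization engine.

```python
# Hedged toy version of VMG placement across DCs: minimize the bandwidth of
# VMG edges that cross DC boundaries, subject to per-DC CPU capacity.
from itertools import product

def best_vmg_placement(vm_cpu, edges, dc_capacity):
    """vm_cpu: {vm: cpu}; edges: [(vm_a, vm_b, bandwidth)];
    dc_capacity: {dc: cpu}. Returns (placement dict, inter-DC bandwidth)."""
    vms, dcs = list(vm_cpu), list(dc_capacity)
    best, best_bw = None, float("inf")
    for combo in product(dcs, repeat=len(vms)):   # exhaustive: tiny cases only
        placement = dict(zip(vms, combo))
        load = {d: 0 for d in dcs}
        for vm, d in placement.items():
            load[d] += vm_cpu[vm]
        if any(load[d] > dc_capacity[d] for d in dcs):
            continue                              # violates DC capacity
        inter_bw = sum(bw for a, b, bw in edges
                       if placement[a] != placement[b])
        if inter_bw < best_bw:
            best, best_bw = placement, inter_bw
    return best, best_bw

# Usage: a 4-VM graph over two DCs.
vm_cpu = {"fw": 2, "lb": 2, "app": 4, "db": 4}
edges = [("fw", "lb", 1), ("lb", "app", 2), ("app", "db", 3)]
print(best_vmg_placement(vm_cpu, edges, {"dc1": 8, "dc2": 8}))
```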

    Cost Optimization and Load Balancing of Intra and Inter Data Center Networks to Facilitate Cloud Services

    Thesis (Ph.D.), School of Computing and Engineering, University of Missouri--Kansas City, 2018. Dissertation advisor: Deep Medhi. Includes bibliographical references (pages 127-137).

    For cloud enterprise customers that require services on demand, data centers (DCs) must allocate and partition data center resources in a dynamic fashion. We consider the problem of allocating data center resources for cloud enterprise customers who require guaranteed services on demand. In particular, a request from an enterprise customer is mapped to a virtual network (VN) class that is allocated both bandwidth and compute resources by connecting it from an entry point of a data center to one or more hosts, while there are multiple geographically distributed data centers to choose from. We take a dynamic traffic engineering approach over multiple time periods in which an energy-aware resource reservation model is solved at each review point. In this dissertation, for the energy-aware resource reservation problem, we first present a mixed-integer linear programming (MILP) formulation (for small-scale problems) and a heuristic approach (for large-scale problems). Our heuristic is fast for large-scale problems where the MILP becomes difficult to solve. Through a comprehensive set of studies, we found that a VN class with a low resource requirement has low blocking even in heavy traffic, while a VN class with a high resource requirement faces high service denial. Furthermore, the VN class with randomly distributed resource requirements has a higher provisioning cost and blocking than the VN class with the same resource requirement for each request, although the average resource requirement is the same for both VN classes. We also observe that our approach reduces the maximum energy consumption by about one-sixth at the low arrival rate and by about one-third at the highest arrival rate, which also depends on how many different CPU frequency levels a server can run at.

    Allocation of resources in data centers needs to be done in a dynamic fashion for cloud enterprise customers who require virtualized, reservation-oriented services on demand. Due to the spatial diversity of data centers, the cost of using different DCs also varies. In this dissertation, we then propose an allocation scheme to balance the load among these DCs with different costs, to minimize the total provisioning cost in a dynamic environment while ensuring that the service level agreements (SLAs) are met. Compared to a benchmark scheme (where all requests are first sent to the cheapest data center), our scheme can decrease the proportional utilization from 24% (for heavy load) to 30% (for normal load) and achieve a significant balance in the cost incurred by individual DCs. Our scheme can also achieve a 7.5% reduction in total provisioning cost under a certain service level agreement (SLA) in exchange for a low increment in blocking. Furthermore, we tested our scheme on 5 DCs to show that our allocation scheme follows the weighted cost proportionally.

    With the increasing dependency on cloud-based services, data centers have become a popular platform to satisfy customers' requests. Many large network providers now have their own geographically distributed DCs for cloud services, or have partnerships with third-party DC providers to route customers' demand. When end customers' requests arrive at a Point-of-Presence (PoP) of a large Internet Service Provider, the provider, having DCs in multiple geo-locations, needs to decide which DC should serve the request depending on the geo-distance, the cost of resources in that DC, the availability of the requested resource at that DC, and the congestion in the path from the customers' location to that DC. Therefore, an optimal connectivity scheme from the ingress PoP to the egress DC is required among the PoPs and DCs to minimize the cost of establishing paths between a PoP and a DC while ensuring load balancing at both the link level and the DC level. Considering these, we also present a novel mixed-integer linear programming (MILP) model for this problem. We show the efficacy of our model through various performance metrics such as average and maximum link utilization, and the average number of links used per path.

    Contents: Introduction -- Literature review -- Model and heuristic for intra DC cost optimization -- Simulation setup and result analysis for intra DC cost optimization -- Load balancing in geo-distributed data centers -- Optimal connectivity between inter DC networks -- Conclusion and future research -- Appendix A. Intra DC optimization model in AMPL -- Appendix B. Optimal connectivity to inter DC network model in AMPL
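    To illustrate the cost-versus-balance trade-off discussed above, here is a hedged MILP sketch written with the open-source PuLP modeller (the dissertation's models are in AMPL; see the appendices): assign requests to geo-distributed DCs to minimize provisioning cost, subject to DC capacity and a simple load-balance cap. The data, the share cap and the single-period formulation are illustrative assumptions, not the dissertation's models.

```python
# Hedged MILP sketch (PuLP + bundled CBC solver), not the dissertation's AMPL
# model: minimum-cost assignment of requests to DCs with capacity and a
# simple load-balance cap per DC. All data are illustrative assumptions.
import pulp

requests = {"r1": 4, "r2": 6, "r3": 5}            # request -> resource demand
dc_cost = {"cheap": 1.0, "mid": 1.4, "far": 2.0}  # dc -> unit provisioning cost
capacity = {"cheap": 8, "mid": 8, "far": 8}
share_cap = 0.6                                   # no DC serves >60% of demand
total = sum(requests.values())

prob = pulp.LpProblem("dc_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(r, d) for r in requests for d in dc_cost],
                          cat="Binary")

# Objective: total provisioning cost.
prob += pulp.lpSum(dc_cost[d] * requests[r] * x[(r, d)]
                   for r in requests for d in dc_cost)
# Each request is served by exactly one DC.
for r in requests:
    prob += pulp.lpSum(x[(r, d)] for d in dc_cost) == 1
# Capacity and load-balance cap per DC.
for d in dc_cost:
    load = pulp.lpSum(requests[r] * x[(r, d)] for r in requests)
    prob += load <= capacity[d]
    prob += load <= share_cap * total

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("status:", pulp.LpStatus[prob.status])
for r in requests:
    for d in dc_cost:
        if x[(r, d)].value() > 0.5:
            print(r, "->", d)
```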