
    Ultra-reliable Low-latency, Energy-efficient and Computing-centric Software Data Plane for Network Softwarization

    Network softwarization plays a significant role in the development and deployment of the latest communication systems for 5G and beyond. It enables a more flexible and intelligent network architecture that supports agile network management and the rapid launch of innovative network services, with considerable reductions in Capital Expense (CAPEX) and Operating Expense (OPEX). Despite these benefits, the 5G system also raises unprecedented challenges, as emerging machine-to-machine and human-to-machine communication use cases require Ultra-Reliable Low Latency Communication (URLLC). According to empirical measurements performed by the author of this dissertation on a practical testbed, State of the Art (STOA) technologies and systems are not able to achieve the one-millisecond end-to-end latency required by the 5G standard on Commercial Off-The-Shelf (COTS) servers. This dissertation gives a comprehensive introduction to three innovative approaches that improve different aspects of the current software-driven network data plane. All three approaches are carefully designed, professionally implemented and rigorously evaluated. According to the measurement results, these novel approaches advance the research in the design and implementation of an ultra-reliable, low-latency, energy-efficient and computing-first software data plane for 5G communication systems and beyond.
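
    The abstract does not describe the measurement methodology itself; as a rough, hypothetical illustration of how end-to-end latency is commonly measured in software on a COTS server, the Python sketch below timestamps UDP echo round trips and reports percentiles. All names and parameters are assumptions for demonstration, not taken from the dissertation.

    # Hypothetical latency probe: measures UDP echo round-trip times on one host.
    # This is an illustrative sketch, not the dissertation's measurement tool.
    import socket, threading, time, statistics

    def echo_server(port=9000):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("127.0.0.1", port))
        while True:
            data, addr = s.recvfrom(2048)
            s.sendto(data, addr)          # reflect the packet back unchanged

    def measure(port=9000, samples=1000):
        c = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        c.settimeout(1.0)
        rtts = []
        for _ in range(samples):
            t0 = time.perf_counter_ns()
            c.sendto(b"x" * 64, ("127.0.0.1", port))
            c.recvfrom(2048)
            rtts.append((time.perf_counter_ns() - t0) / 1e6)  # milliseconds
        rtts.sort()
        print(f"median={statistics.median(rtts):.3f} ms  "
              f"p99={rtts[int(0.99 * len(rtts))]:.3f} ms")

    if __name__ == "__main__":
        threading.Thread(target=echo_server, daemon=True).start()
        time.sleep(0.2)                   # let the server bind before probing
        measure()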

    Nested Virtual Environments (Sisäkkäiset virtuaaliympäristöt)

    Virtual machines (VMs) have been a common computation platform in cloud computing for some time now. VMs offer a decent amount of isolation for security and system resources, and from the application perspective they behave much like native environments. Software containers are gaining popularity as a new application delivery technology. Just like VMs, applications started inside containers run in isolated environments, but without the performance overhead caused by virtualization of system resources. This makes containers seem like a more efficient option than VMs. In this thesis, different combinations of containers and VMs are benchmarked. For each benchmark, the host environment is also measured, to understand the overhead caused by the underlying virtual environment technology. The benchmarks used include storage and network access benchmarks, as well as an application benchmark of compiling the Linux kernel. As another part of the thesis, a CPU-intensive workload is run on the virtualization host server. The benchmarks are then repeated, in order to determine how much the given workload affects the benchmark scores, and whether this effect can be observed from the virtualization guest side by measuring CPU steal time. Results show that containers are slightly slower than the host in the application benchmark. The main difference is expected to come from the way Docker handles storage accesses. With the default network configuration, the container also loses to the host in terms of performance. In every benchmark we ran, VMs lost to both the host and the containers in performance.
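
    As a minimal sketch of the guest-side measurement mentioned in the abstract, the Python snippet below reads CPU steal time from /proc/stat on Linux and reports it as a share of all CPU time over a sampling interval. The sampling interval and output format are illustrative choices, not the thesis' tooling.

    # CPU steal time is the 8th value after the "cpu" label in /proc/stat.
    import time

    def read_cpu_times():
        with open("/proc/stat") as f:
            fields = f.readline().split()[1:]          # aggregate "cpu" line
        values = list(map(int, fields))
        return sum(values), values[7]                  # total jiffies, steal jiffies

    def steal_percent(interval=1.0):
        total0, steal0 = read_cpu_times()
        time.sleep(interval)
        total1, steal1 = read_cpu_times()
        dt = total1 - total0
        return 100.0 * (steal1 - steal0) / dt if dt else 0.0

    if __name__ == "__main__":
        print(f"steal time: {steal_percent():.2f}% of CPU time")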

    Infrastructure sharing of 5G mobile core networks on an SDN/NFV platform

    When looking towards the deployment of 5G network architectures, mobile network operators will continue to face many challenges. With the number of customers approaching maximum market penetration, the number of devices per customer increasing, and the number of non-human-operated devices estimated to reach the tens of billions, network operators have a formidable task ahead of them. The proliferation of cloud computing techniques has created a multitude of applications for network service deployments, and at the forefront is the adoption of Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV). Mobile network operators (MNOs) have the opportunity to leverage these technologies to deliver traditional networking functionality in cloud environments, with the benefit of reductions in the capital and operational expenditure of network infrastructure. When adopting NFV, how a Virtualised Network Function (VNF) is designed, implemented, and placed over physical infrastructure can play a vital role in the performance metrics achieved by the network function. Not paying careful attention to this aspect could lead to drastically reduced performance of network functions, thus defeating the purpose of virtualisation. The success of mobile network operators in the 5G arena will depend heavily on their ability to shift from their old operational models and embrace new technologies, design principles and innovation in both the business and technical aspects of the environment. The primary goal of this thesis is to design, implement and evaluate the viability of the data centre and cloud network infrastructure sharing use case. More specifically, the core question addressed by this thesis is how virtualisation of network functions in a shared infrastructure environment can be achieved without adverse performance degradation. 5G should be operational with high penetration beyond the year 2020, with data traffic rates increasing exponentially and the number of connected devices expected to surpass tens of billions. Requirements for 5G mobile networks include higher flexibility, scalability, cost effectiveness and energy efficiency. Towards these goals, SDN and NFV have been adopted in recent proposals for future mobile network architectures because they are considered critical technologies for 5G. A Shared Infrastructure Management Framework was designed and implemented for this purpose. This framework was further enhanced for performance optimisation of network functions and the underlying physical infrastructure. The objective achieved was the identification of requirements for the design and development of an experimental testbed for future 5G mobile networks. This testbed deploys high-performance virtualised network functions (VNFs) while catering for the infrastructure sharing use case of multiple network operators. The management and orchestration of the VNFs allow automation, scalability, fault recovery, and security to be evaluated. The testbed developed is readily re-creatable and based on open-source software.
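
    The abstract does not detail how automation, scalability and fault recovery are exercised; as a purely illustrative sketch of the kind of orchestration control loop such a testbed might evaluate, the Python toy below decides restart and scale-out actions from instance state. The VNF names, thresholds and action labels are hypothetical, not taken from the thesis.

    from dataclasses import dataclass

    @dataclass
    class VnfInstance:
        name: str
        alive: bool
        cpu_load: float        # 0.0 .. 1.0

    def reconcile(instances, scale_out_threshold=0.8):
        """Return the orchestration actions for one control-loop iteration."""
        actions = []
        for vnf in instances:
            if not vnf.alive:
                actions.append(("restart", vnf.name))       # fault recovery
            elif vnf.cpu_load > scale_out_threshold:
                actions.append(("scale_out", vnf.name))      # elasticity
        return actions

    if __name__ == "__main__":
        demo = [VnfInstance("vEPC-MME", True, 0.92), VnfInstance("vFW", False, 0.1)]
        print(reconcile(demo))   # [('scale_out', 'vEPC-MME'), ('restart', 'vFW')]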

    NFV Platforms: Taxonomy, Design Choices and Future Challenges

    Due to the intrinsically inefficient service provisioning in traditional networks, Network Function Virtualization (NFV) keeps gaining attention from both industry and academia. By replacing the purpose-built, expensive, proprietary network equipment with software network functions consolidated on commodity hardware, NFV envisions a shift towards a more agile and open service provisioning paradigm. During the last few years, a large number of NFV platforms have been implemented in production environments that typically face critical challenges, including the development, deployment, and management of Virtual Network Functions (VNFs). Nonetheless, just like any complex system, such platforms commonly consist of numerous software and hardware components and usually incorporate disparate design choices based on distinct motivations or use cases. This broad collection of convoluted alternatives makes it extremely arduous for network operators to make proper choices. Although numerous efforts have been devoted to investigating different aspects of NFV, none of them specifically focused on NFV platforms or attempted to explore their design space. In this paper, we present a comprehensive survey on NFV platform design. Our study solely targets existing NFV platform implementations. We begin with a top-down architectural view of the standard reference NFV platform and present our taxonomy of existing NFV platforms based on the features they provide in terms of a typical network function life cycle. We then thoroughly explore the design space and elaborate on the implementation choices each platform opts for. We also envision future challenges for NFV platform design in the coming 5G era. We believe that our study gives a detailed guideline for network operators or service providers to choose the most appropriate NFV platform based on their respective requirements. Our work also provides guidelines for implementing new NFV platforms.
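
    To make the idea of a life-cycle-based taxonomy concrete, the sketch below shows one possible way to encode it in Python: platforms are classified by which stages of the VNF life cycle they cover. The stage names and the example platform entries are invented for illustration and are not the paper's actual taxonomy or data.

    from enum import Enum, auto

    class LifeCycleStage(Enum):
        DEVELOPMENT = auto()
        DEPLOYMENT = auto()
        EXECUTION = auto()
        MANAGEMENT = auto()

    PLATFORM_FEATURES = {
        # platform name -> life-cycle stages it addresses (hypothetical examples)
        "PlatformA": {LifeCycleStage.DEPLOYMENT, LifeCycleStage.MANAGEMENT},
        "PlatformB": {LifeCycleStage.DEVELOPMENT, LifeCycleStage.EXECUTION},
    }

    def platforms_covering(stage):
        """List platforms that provide features for a given life-cycle stage."""
        return [name for name, stages in PLATFORM_FEATURES.items() if stage in stages]

    if __name__ == "__main__":
        print(platforms_covering(LifeCycleStage.MANAGEMENT))   # ['PlatformA']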

    Cloud-efficient modelling and simulation of magnetic nano materials

    Scientific simulations are rarely attempted in a cloud due to the substantial performance costs of virtualization. Considerable communication overheads, intolerable latencies, and inefficient hardware emulation are the main reasons why this emerging technology has not been fully exploited. On the other hand, the progress of computing infrastructure nowadays is strongly dependent on prospective storage medium development, where efficient micromagnetic simulations play a vital role in future memory design. This thesis addresses both these topics by merging micromagnetic simulations with the latest OpenStack cloud implementation while providing a time- and cost-effective alternative to expensive computing centers. However, many challenges have to be addressed before a high-performance cloud platform emerges as a solution for problems in micromagnetic research communities. First, the best solver candidate has to be selected and further improved, particularly in the parallelization and process communication domain. Second, a three-level cloud communication hierarchy needs to be recognized and each segment adequately addressed. The required steps include breaking the VM isolation to activate shared memory on the host, tuning and optimizing the cloud network stack, and integrating efficient communication hardware. The project work concludes with practical measurements and confirmation of the successfully implemented simulation in an open-source cloud environment. As a result, the renewed Magpar solver runs for the first time in the OpenStack cloud, using ivshmem for shared-memory communication. Extensive measurements also proved the effectiveness of our solutions, yielding from sixty percent to over ten times better results than those achieved in the standard cloud.
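
    As background on the ivshmem mechanism mentioned above: QEMU's ivshmem device exposes the shared memory region to the guest as PCI BAR2, which a guest process can map through sysfs. The sketch below is a minimal, hypothetical illustration of that mapping in Python (not the thesis code, which integrates ivshmem into the Magpar solver); the PCI address is an assumption and would be found with lspci on a real guest, and mapping typically requires root privileges.

    import mmap, os

    IVSHMEM_BAR2 = "/sys/bus/pci/devices/0000:00:05.0/resource2"  # hypothetical BDF

    def open_shared_region():
        fd = os.open(IVSHMEM_BAR2, os.O_RDWR)
        size = os.fstat(fd).st_size            # sysfs resourceN files report the BAR size
        region = mmap.mmap(fd, size, mmap.MAP_SHARED,
                           mmap.PROT_READ | mmap.PROT_WRITE)
        os.close(fd)                           # the mapping stays valid after close
        return region

    if __name__ == "__main__":
        shm = open_shared_region()
        shm[0:5] = b"hello"                    # visible to other VMs attached to the same ivshmem
        print(bytes(shm[0:5]))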

    Performance Analysis of Cloud Computing Platforms (Analyse de performance des plateformes infonuagiques)

    Cloud computing usage has experienced tremendous growth in companies over the past few years. It exposes, through the Internet, a set of technologies granting access to computing resources. These technologies virtualize physical machines to provide virtual resources that are isolated from one another. While this isolation mechanism offers a form of guarantee for data security, it can also cause performance anomalies. Indeed, virtual systems have the illusion of exclusive access to the host's resources and use them without considering the needs of others. This causes interference and decreases the performance of guest environments. Some applications, known as cloud operating systems, are commonly used to supervise cloud computing platforms. These applications simplify the interactions of users with the infrastructure. However, when misconfigured, they can cause faulty request executions. Here we focus on issues related to the use of OpenStack as a cloud management application. The objective of this study is to provide administrators with a tool to monitor cloud operations and to locate potential performance degradations in both the application and virtualization layers. Our approach is based on tracing, which efficiently produces detailed information about service execution. By tracing the various layers of the infrastructure simultaneously, it is possible to follow user requests and accurately determine the performance of deployed services. We use LTTng, a tracer known for the relatively low overhead it adds to the programs being diagnosed, to trace both the user space and the kernel of the platform's machines. The traces from the different systems are collected and aggregated on a machine dedicated to performance analysis. The administrator can then obtain a resource utilization report, detect service anomalies, and subsequently take action to correct the problems.
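
    As a small sketch of what setting up such a trace looks like in practice, the Python snippet below drives an LTTng session through the standard lttng command-line tool (create, enable-event, start, stop, destroy). The session name, event list and duration are illustrative assumptions, not the instrumentation actually used in this work.

    import subprocess, time

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def trace_host(session="cloud-analysis", seconds=10):
        run("lttng", "create", session)
        # kernel events useful for spotting contention between guests
        run("lttng", "enable-event", "--kernel", "sched_switch,block_rq_complete")
        # user-space tracepoints, assuming services are instrumented with lttng-ust
        run("lttng", "enable-event", "--userspace", "*")
        run("lttng", "start")
        time.sleep(seconds)                 # workload of interest runs here
        run("lttng", "stop")
        run("lttng", "destroy", session)

    if __name__ == "__main__":
        trace_host()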

    Software-Defined Access Networks (Redes de acesso definidas por software)

    With the increase in Internet usage and the exponential growth of bandwidth consumption, driven by the increasing number of users of new-generation equipment and the creation of new services that consume ever higher bandwidths, it is necessary to find solutions to meet these new requirements. Passive optical networks (PONs) promise to solve these problems by providing a better service to users and providers. PON networks are very attractive since they do not depend on active elements between their end points, leading to lower maintenance costs and better operational efficiency. The PON technologies addressed in this dissertation are G-PON (Gigabit PON), currently standardized and implemented in access networks across the world, and NG-PON2 (Next-Generation PON 2), which is the next step in access network evolution and is currently in the process of study and standardization. NG-PON2 must co-exist on the same optical distribution network as G-PON, so it re-utilizes the already built infrastructure and consequently protects providers' initial investment. Software Defined Networking (SDN) is an emerging architecture that decouples network control and forwarding functions from the hardware they belong to, making network control programmable and enabling solutions capable of dealing with the increasing complexity of networks and of creating innovative services. The main focus of the study is SDN as an enabling mechanism for network element virtualization. This dissertation studies the G-PON and NG-PON2 architectures in the context of the ITU-T G.984.x and G.989.x recommendations, respectively, as well as SDN technology through documentation available online. Based on these studies, a server architecture is proposed that enables the control of G-PON and NG-PON2 infrastructure elements, introducing SDN and virtualization concepts into access networks.
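
    Purely as a hypothetical sketch of the kind of controller-side abstraction such a server architecture could expose for PON elements, the Python toy below models OLTs and ONUs and a provisioning call. All class and method names are invented for illustration; the dissertation defines its own architecture and interfaces.

    from dataclasses import dataclass, field

    @dataclass
    class Onu:
        serial: str
        profile: str = "default"

    @dataclass
    class Olt:
        name: str
        onus: dict = field(default_factory=dict)

        def provision_onu(self, serial, profile):
            """Register an ONU and the service profile the controller assigns to it."""
            self.onus[serial] = Onu(serial, profile)

    class PonController:
        def __init__(self):
            self.olts = {}

        def add_olt(self, name):
            self.olts[name] = Olt(name)
            return self.olts[name]

    if __name__ == "__main__":
        ctl = PonController()
        ctl.add_olt("olt-1").provision_onu("ALCL00112233", profile="100M-residential")
        print(ctl.olts["olt-1"].onus)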

    Software Defined Application Delivery Networking

    In this thesis we present the architecture, design, and prototype implementation details of AppFabric. AppFabric is a next-generation application delivery platform for easily creating, managing and controlling massively distributed and very dynamic application deployments that may span multiple datacenters. Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing the infrastructure components and placing them under software-based management and control, generically called Software-Defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI; they mostly depend on a mix of different techniques including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery, 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes and network services (both local and wide-area) composed over application-level routing policies, and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of ~10,000 lines of Python and C code. AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use cases, including the following. Internet-of-Things/Cyber-Physical Systems: through support for managing the distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use cases. By their very nature, IoT and CPS use cases are massively distributed and have different levels of computation and storage requirements at different locations, as well as variable latency requirements for their different distributed sites. Some services, such as device controllers, in an IoT/CPS application workflow may need to gather, process and forward data under near-real-time constraints and hence need to be as close to the device as possible. Other services may need more computation to process aggregated data to drive long-term business intelligence functions. AppFabric has been designed to support such very dynamic, highly diversified and massively distributed application use cases. Network Function Virtualization: through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs). An application workflow in AppFabric may comprise application services, packet- and message-level middleboxes, and network transport services chained together over an application-level routing substrate. The application-level routing substrate allows policy-based service chaining, where the application may specify policies for routing its traffic over different services based on application-level content or context. Virtual worlds/multiplayer games: through support for creating, managing and controlling the dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink its footprint over different geographical sites, on demand. Mobile apps: through support for the extremely diversified and very dynamic application contexts typical of such applications. AppFabric also provides support for automatically managing massively distributed service deployments and for controlling application traffic based on application-level policies, allowing mobile applications to provide the best Quality-of-Experience to their users. This thesis is the first to handle and provide a complete solution for such a complex and relevant architectural problem, one that is expected to touch each of our lives by enabling exciting new application use cases that are not possible today. AppFabric is also a non-proprietary platform that is expected to spawn many innovations, both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, both in terms of design and implementation maturity. This thesis is not the end of the journey for AppFabric but rather just the beginning.
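
    To give a flavour of the policy-based service chaining described above, where application-level content or context selects which chain of services a message traverses, the Python sketch below matches messages against ordered policies. It is an illustrative toy, not AppFabric or OpenADN code; all policy fields, service names and the first-match rule are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Message:
        content_type: str      # e.g. "video", "api"
        region: str            # e.g. "eu", "us"

    # Ordered policies: the first matching predicate decides the service chain.
    POLICIES = [
        (lambda m: m.content_type == "video", ["cache", "transcoder", "app-server"]),
        (lambda m: m.region == "eu",          ["waf", "eu-app-server"]),
        (lambda m: True,                      ["waf", "app-server"]),   # default chain
    ]

    def service_chain(msg):
        for predicate, chain in POLICIES:
            if predicate(msg):
                return chain

    if __name__ == "__main__":
        print(service_chain(Message("video", "us")))   # ['cache', 'transcoder', 'app-server']
        print(service_chain(Message("api", "eu")))     # ['waf', 'eu-app-server']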

    Integrated IT and SDN Orchestration of multi-domain multi-layer transport networks

    Telecom operators' network management and control remains partitioned by technology, equipment supplier and networking layer. In some segments, network operations are highly costly due to the need for individual, and even manual, configuration of the network equipment by highly specialized personnel. In multi-vendor networks, expensive and never-ending integration processes between Network Management Systems (NMSs) and the rest of the systems (OSSs, BSSs) are a common situation, due to the lack of adoption of standard interfaces in the management systems of the different equipment suppliers. Moreover, the increasing impact of the new traffic flows introduced by the deployment of massive Data Centers (DCs) is also imposing new challenges that traditional networking is not ready to overcome. The fifth generation of mobile technology (5G) is also introducing stringent network requirements, such as the need to connect billions of new devices in the IoT paradigm, new ultra-low-latency applications (e.g., remote surgery) and vehicular communications. All these new services, together with enhanced broadband network access, are supposed to be delivered over the same network infrastructure. In this PhD Thesis, a holistic view of network and cloud computing resources, based on the recent innovations introduced by Software Defined Networking (SDN), is proposed as the solution for designing an end-to-end, multi-layer, multi-technology and multi-domain cloud and transport network management architecture, capable of offering end-to-end services from the DC networks to customers' access networks and of virtualizing network resources, allowing new ways of slicing the network resources for the forthcoming 5G deployments. The first contribution of this PhD Thesis deals with the design and validation of SDN-based network orchestration architectures capable of improving the current solutions for the management and control of multi-layer, multi-domain backbone transport networks. These problems have been assessed and progressively solved by different control and management architectures, which have been designed and evaluated in real test environments. One of the major findings of this work has been the need to develop a common information model for transport network management, capable of describing the resources and services of multi-layer networks. In this line, the Control Orchestration Protocol (COP) has been proposed as a first contribution towards a standard management interface based on the main principles driven by SDN. Furthermore, this PhD Thesis introduces a novel architecture capable of coordinating the management of IT computing resources together with inter- and intra-DC networks. The provisioning and migration of virtual machines together with the dynamic reconfiguration of the network have been successfully demonstrated on a feasible timescale. Moreover, a resource optimization engine is introduced in the architecture to run optimization algorithms capable of solving allocation problems such as the optimal deployment of Virtual Machine Graphs over different DC locations while minimizing the allocation of inter-DC network resources. Baseline blocking-probability results over different network loads are also presented. The third major contribution is the result of the previous two. With a converged cloud and network infrastructure controlled and operated jointly, the holistic view of the network allows the on-demand provisioning of network slices consisting of dedicated network and cloud resources over a distributed DC infrastructure interconnected by an optical transport network. The last chapters of this thesis discuss the management and orchestration of 5G slices based on the control and management components designed in the previous chapters. The design of one of the first network slicing architectures and the deployment of a fully operational 5G network slice in a real testbed is one of the major contributions of this PhD Thesis.
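
    As a minimal sketch of the Virtual Machine Graph placement problem mentioned above, the Python toy below uses a naive greedy heuristic that tries to keep adjacent VMs in the same data centre so that fewer graph edges cross the inter-DC network. It is not the thesis' optimization algorithm; the graph, the DC capacities and the heuristic itself are illustrative assumptions (and the sketch assumes total capacity is sufficient).

    def place_vm_graph(vms, edges, dc_capacity):
        """vms: list of VM names; edges: (vm_a, vm_b) pairs; dc_capacity: {dc: slots}."""
        placement, free = {}, dict(dc_capacity)
        for vm in vms:
            # prefer a DC that already hosts one of this VM's neighbours
            neighbours = {a if b == vm else b for a, b in edges if vm in (a, b)}
            preferred = [placement[n] for n in neighbours if n in placement]
            candidates = sorted(free, key=lambda dc: (dc not in preferred, -free[dc]))
            dc = next(c for c in candidates if free[c] > 0)
            placement[vm], free[dc] = dc, free[dc] - 1
        inter_dc = sum(placement[a] != placement[b] for a, b in edges)
        return placement, inter_dc

    if __name__ == "__main__":
        vms = ["fw", "lb", "app", "db"]
        edges = [("fw", "lb"), ("lb", "app"), ("app", "db")]
        print(place_vm_graph(vms, edges, {"dc1": 3, "dc2": 2}))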