
    Effectiveness of segment routing technology in reducing the bandwidth and cloud resources provisioning times in network function virtualization architectures

    Network Function Virtualization (NFV) is a new technology allowing for elastic cloud and bandwidth resource allocation. The technology requires an orchestrator whose role is service and resource orchestration. It receives service requests, each one characterized by a Service Function Chain (SFC), which is a set of service functions to be executed in a given order. It implements an algorithm for deciding both where to allocate the cloud and bandwidth resources and how to route the SFCs. In a traditional orchestration algorithm, the orchestrator has detailed knowledge of the cloud and network infrastructures, which can lead to high computational complexity of the SFC Routing and Cloud and Bandwidth resource Allocation (SRCBA) algorithm. In this paper, we propose and evaluate the effectiveness of a scalable orchestration architecture inherited from the one proposed within the European Telecommunications Standards Institute (ETSI) and based on the functional separation of an NFV orchestrator into a Resource Orchestrator (RO) and a Network Service Orchestrator (NSO). Each cloud domain is equipped with an RO whose task is to provide a simple and abstract representation of the cloud infrastructure. These representations are notified to the NSO, which can then apply a simplified and less complex SRCBA algorithm. In addition, we show how segment routing technology can help to simplify SFC routing by means of an effective addressing of the service functions. The scalable orchestration solution has been investigated and compared to a traditional orchestrator in several network scenarios with a varying number of cloud domains. We have verified that the execution time of the SRCBA algorithm can be drastically reduced without degrading the performance in terms of cloud and bandwidth resource costs.
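
    As a hedged illustration of the abstracted-orchestration idea above (not the paper's actual SRCBA algorithm), the Python sketch below models each RO advertisement as a small summary record, assigns a segment identifier (SID) to every service function, and lets a toy NSO-side routine place a chain greedily and return the SID list that would steer the SFC. All class names, fields, and numbers are hypothetical.

        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class DomainAbstraction:
            """What an RO advertises to the NSO: an abstract summary, not the full topology."""
            domain_id: str
            free_vcpus: int
            unit_cloud_cost: float          # cost per vCPU allocated in this domain
            function_sids: Dict[str, int]   # service function name -> segment ID (SID)

        @dataclass
        class ServiceRequest:
            chain: List[str]                # ordered service functions, e.g. ["fw", "nat", "dpi"]
            vcpus_per_function: int

        def greedy_srcba(request: ServiceRequest,
                         domains: List[DomainAbstraction]) -> List[int]:
            """Toy stand-in for a simplified SRCBA step: place each function of the
            chain in the cheapest domain that still has capacity, and return the
            segment (SID) list encoding the resulting SFC route."""
            sid_list: List[int] = []
            for fn in request.chain:
                candidates = [d for d in domains
                              if fn in d.function_sids
                              and d.free_vcpus >= request.vcpus_per_function]
                if not candidates:
                    raise RuntimeError(f"no domain can host function {fn!r}")
                best = min(candidates, key=lambda d: d.unit_cloud_cost)
                best.free_vcpus -= request.vcpus_per_function
                sid_list.append(best.function_sids[fn])
            return sid_list

        if __name__ == "__main__":
            domains = [
                DomainAbstraction("cloud-A", 8, 1.0, {"fw": 101, "nat": 102}),
                DomainAbstraction("cloud-B", 16, 0.7, {"nat": 201, "dpi": 202}),
            ]
            req = ServiceRequest(chain=["fw", "nat", "dpi"], vcpus_per_function=2)
            print(greedy_srcba(req, domains))   # e.g. [101, 201, 202]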

    A multi-criteria decision making approach for scaling and placement of virtual network functions

    This paper investigates the joint scaling and placement problem of network services made up of virtual network functions (VNFs) that can be provided inside a cluster managing multiple points of presence (PoPs). Aiming at increasing the VNF service satisfaction rate and minimizing the deployment cost, we use transport- and cloud-aware VNF scaling as well as multi-attribute decision making (MADM) algorithms for VNF placement inside the cluster. The original joint scaling and placement problem is known to be NP-hard, and hence it is solved by separating the scaling and placement problems and solving them individually. The experiments use a dataset containing the information of a deployed digital-twin network service. They show that considering both transport and cloud parameters during scaling and placement performs more efficiently than cloud-only or transport-only scaling followed by placement. One of the MADM algorithms, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS), has been shown to yield the lowest deployment cost and the highest VNF request satisfaction rates compared to transport-only or cloud-only scaling and the other investigated MADM algorithms. Our simulation results indicate that considering both transport and cloud parameters in various availability scenarios of cloud and transport resources has significant potential to provide increased request satisfaction rates when VNF scaling and placement are performed using the TOPSIS scheme. This work was partially funded by the EC H2020 5GPPP 5Growth Project (Grant 856709), Spanish MINECO Grant TEC2017-88373-R (5G-REFINE), Generalitat de Catalunya Grant 2017 SGR 1195, and the National Program on Equipment and Scientific and Technical Infrastructure, EQC2018-005257-P, under the European Regional Development Fund (FEDER). We would also like to thank Milan Groshev and Carlos Guimarães for providing the dataset for scaling of the robot-manipulator-based digital twin service.
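
    TOPSIS itself is a standard MADM procedure, so a minimal sketch can clarify how candidate PoPs would be ranked; the criteria, weights, and benefit/cost labels below are illustrative assumptions, not the values used in the paper.

        import numpy as np

        def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
            """Return the closeness coefficient of each alternative (row); higher is better.

            matrix  : (n_alternatives, n_criteria) raw criteria values
            weights : (n_criteria,) weights summing to 1
            benefit : (n_criteria,) True if larger is better, False for cost criteria
            """
            # 1. Vector-normalize each criterion column, then weight it.
            norm = matrix / np.linalg.norm(matrix, axis=0)
            weighted = norm * weights
            # 2. Ideal and anti-ideal solutions per criterion.
            ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
            anti  = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
            # 3. Euclidean distance of each alternative to both reference points.
            d_best  = np.linalg.norm(weighted - ideal, axis=1)
            d_worst = np.linalg.norm(weighted - anti,  axis=1)
            # 4. Relative closeness to the ideal solution.
            return d_worst / (d_best + d_worst)

        # Example: 3 candidate PoPs scored on [free CPU, free bandwidth, deployment cost].
        pops   = np.array([[16.0, 10.0, 5.0],
                           [ 8.0, 20.0, 3.0],
                           [12.0, 15.0, 4.0]])
        w      = np.array([0.4, 0.3, 0.3])
        isgood = np.array([True, True, False])   # cost is a "smaller is better" criterion
        scores = topsis(pops, w, isgood)
        print(scores.argsort()[::-1])            # PoP indices ranked best-first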

    Dynamic service chain composition in virtualised environment

    Network Function Virtualisation (NFV) has contributed to improving the flexibility of network service provisioning and reducing the time to market of new services. NFV leverages virtualisation technology to decouple the software implementation of network appliances from the physical devices on which they run. However, with the emergence of this paradigm, providing data centre applications with adequate network performance becomes challenging. For instance, virtualised environments cause network congestion, decrease throughput and hurt the end-user experience. Moreover, applications usually communicate through multiple sequences of virtual network functions (VNFs), a.k.a. service chains, for policy enforcement and for performance and security enhancement, which increases the management complexity at the network level. To address this problem, existing studies have proposed high-level approaches to VNF chaining and placement that improve service chain performance. They consider VNFs as homogeneous entities regardless of their specific characteristics, overlooking their distinct behaviour toward the traffic load and how their underpinning implementation affects resource usage. Our research aims at filling this gap by identifying particular patterns in production-grade and widely used VNFs, and by proposing a categorisation that helps reduce network latency along the chains. Based on experimental evaluation, we have classified firewalls, NATs, IDS/IPS and flow monitors into I/O-bound and CPU-bound functions. The former category is mainly sensitive to the throughput in packets per second, while the performance of the latter is primarily affected by the network bandwidth in bits per second. By doing so, we correlate the VNF category with the characteristics of the traversing traffic, and this dictates how the service chains are composed. We propose a heuristic called Natif, for VNF-Aware VNF insTantIation and traFfic distribution, to reconcile the discrepancy in VNF requirements based on the category they belong to and to eventually reduce network latency. We have deployed Natif in an OpenStack-based environment and compared it to a network-aware VNF composition approach. Our results show a decrease in latency of around 188% on average without sacrificing throughput.
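
    The following sketch illustrates the categorisation idea only (it is not the Natif heuristic): a VNF is labelled I/O-bound or CPU-bound depending on whether its latency is more sensitive to packet rate (pps) or to bit rate (bps), and it is then scaled along the dimension that limits it. The profiles, slopes, and per-instance capacities are made-up placeholders.

        import math
        from dataclasses import dataclass

        @dataclass
        class VNFProfile:
            name: str
            latency_vs_pps_slope: float   # added ms of latency per extra Mpps offered
            latency_vs_bps_slope: float   # added ms of latency per extra Gbit/s offered

        def categorise(profile: VNFProfile) -> str:
            """Label a VNF as 'io-bound' or 'cpu-bound' from its latency sensitivities."""
            if profile.latency_vs_pps_slope >= profile.latency_vs_bps_slope:
                return "io-bound"
            return "cpu-bound"

        def instances_needed(profile: VNFProfile,
                             traffic_mpps: float, traffic_gbps: float,
                             per_instance_mpps: float = 1.0,
                             per_instance_gbps: float = 5.0) -> int:
            """Scale the VNF on the dimension that actually limits its category."""
            if categorise(profile) == "io-bound":
                return max(1, math.ceil(traffic_mpps / per_instance_mpps))
            return max(1, math.ceil(traffic_gbps / per_instance_gbps))

        # Example: a firewall that suffers mostly under high packet rates.
        fw = VNFProfile("firewall", latency_vs_pps_slope=2.5, latency_vs_bps_slope=0.4)
        print(categorise(fw), instances_needed(fw, traffic_mpps=3.2, traffic_gbps=4.0))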

    Routing optimization algorithms in integrated fronthaul/backhaul networks supporting multitenancy

    International Mention in the doctoral degree. This thesis aims to help in the definition and design of the 5th generation of telecommunications networks (5G) by modelling the different features that characterize them through several mathematical models. Overall, the aim of these models is to perform a wide optimization of the network elements, leveraging their newly acquired capabilities in order to improve the efficiency of future deployments both for the users and for the operators. The timeline of this thesis corresponds to the timeline of the research and definition of 5G networks, and it thus runs in parallel with, and in the context of, several European H2020 projects. Hence, the different parts of the work presented in this document match and provide a solution to different challenges that have appeared during the definition of 5G and within the scope of those projects, considering the feedback and problems from the point of view of end users, operators and providers.
    The first challenge considered focuses on the core network, in particular on how to integrate fronthaul and backhaul traffic over the same transport stratum. The proposed solution is an optimization framework for routing and resource placement developed taking into account delay, capacity and path constraints, maximizing the degree of Distributed Unit (DU) deployment while minimizing the supporting Central Unit (CU) pools. The framework and the heuristics developed to reduce its computational complexity are validated and applied to both small-scale and large-scale (production-level) networks. They can be useful to network operators both for network planning and for dynamically adjusting the operation of their (virtualized) infrastructure.
    Moving closer to the user side, the second challenge focuses on the allocation of services in cloud/edge environments. In particular, the problem tackled consists of selecting the best location for each Virtual Network Function (VNF) composing a service in cloud robotics environments, which imply strict delay bounds and reliability constraints. Robots, vehicles and other end devices provide significant capabilities such as actuators, sensors and local computation that are essential for some services. On the negative side, these devices are continuously on the move and might lose network connection or run out of battery, which further challenges service delivery in this dynamic environment. Thus, the performed analysis and the proposed solution tackle mobility and battery restrictions. We further need to account for the temporal aspects and the conflicting goals of reliable, low-latency service deployment over a volatile network, where mobile compute nodes act as an extension of the cloud and edge computing infrastructure. The problem is formulated as a cost-minimizing VNF placement optimization, and an efficient heuristic is proposed; a simplified sketch of this kind of placement is given after this abstract. The algorithms are extensively evaluated from various aspects by simulation on detailed real-world scenarios.
    Finally, the last challenge analyzed focuses on supporting edge-based services, in particular Machine Learning (ML) in distributed Internet of Things (IoT) scenarios. The traditional approach to distributed ML is to adapt the learning algorithms to the network, e.g., reducing updates to curb overhead. Networks based on an intelligent edge, instead, make it possible to follow the opposite approach, i.e., to define the logical network topology around the learning task to perform, so as to meet the desired learning performance. The proposed solution includes a system model that captures such aspects in the context of supervised ML, accounting for both learning nodes (which perform computations) and information nodes (which provide data). The problem is formulated to select (i) which learning and information nodes should cooperate to complete the learning task, and (ii) the number of iterations to perform, in order to minimize the learning cost while meeting the target prediction error and execution time. The solution also includes a heuristic algorithm that is evaluated leveraging a real-world network topology and considering both classification and regression tasks; its solutions closely match the optimum and outperform state-of-the-art alternatives.
    This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: President: Pablo Serrano Yáñez-Mingot; Secretary: Andrés García Saavedra; Member: Luca Valcarengh
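
    As a hedged sketch of the kind of cost-minimising, delay- and reliability-constrained VNF placement described for the second challenge above (not the thesis's formulation or heuristic), the fragment below greedily maps each VNF of a chain to the cheapest node that keeps the accumulated delay within a budget and the chain reliability above a floor, treating mobile nodes simply as nodes with lower reliability. Node names, costs, and probabilities are invented.

        from dataclasses import dataclass
        from typing import Dict, List, Optional

        @dataclass
        class ComputeNode:
            name: str
            cost_per_vnf: float
            delay_ms: float        # added processing/propagation delay if used
            reliability: float     # probability the node stays available (0..1)
            free_slots: int

        def place_chain(chain: List[str], nodes: List[ComputeNode],
                        delay_budget_ms: float,
                        min_reliability: float) -> Optional[Dict[str, str]]:
            """Return a VNF -> node mapping, or None if the constraints cannot be met."""
            placement: Dict[str, str] = {}
            used_delay = 0.0
            chain_reliability = 1.0
            for vnf in chain:
                candidates = [n for n in nodes
                              if n.free_slots > 0
                              and used_delay + n.delay_ms <= delay_budget_ms
                              and chain_reliability * n.reliability >= min_reliability]
                if not candidates:
                    return None
                best = min(candidates, key=lambda n: n.cost_per_vnf)
                best.free_slots -= 1
                used_delay += best.delay_ms
                chain_reliability *= best.reliability
                placement[vnf] = best.name
            return placement

        nodes = [ComputeNode("edge-1", 2.0, 1.5, 0.99, 2),
                 ComputeNode("robot-7", 0.5, 0.5, 0.90, 1),
                 ComputeNode("cloud", 1.0, 8.0, 0.999, 10)]
        print(place_chain(["perception", "planning", "control"], nodes,
                          delay_budget_ms=12.0, min_reliability=0.85))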

    NFV Platforms: Taxonomy, Design Choices and Future Challenges

    Due to the intrinsically inefficient service provisioning in traditional networks, Network Function Virtualization (NFV) keeps gaining attention from both industry and academia. By replacing purpose-built, expensive, proprietary network equipment with software network functions consolidated on commodity hardware, NFV envisions a shift towards a more agile and open service provisioning paradigm. During the last few years, a large number of NFV platforms have been implemented in production environments, where they typically face critical challenges, including the development, deployment, and management of Virtual Network Functions (VNFs). Nonetheless, just like any complex system, such platforms commonly consist of a multitude of software and hardware components and usually incorporate disparate design choices based on distinct motivations or use cases. This broad collection of convoluted alternatives makes it extremely arduous for network operators to make proper choices. Although numerous efforts have been devoted to investigating different aspects of NFV, none of them has specifically focused on NFV platforms or attempted to explore their design space. In this paper, we present a comprehensive survey of NFV platform design. Our study solely targets existing NFV platform implementations. We begin with a top-down architectural view of the standard reference NFV platform and present our taxonomy of existing NFV platforms based on the features they provide across a typical network function life cycle. We then thoroughly explore the design space and elaborate on the implementation choices each platform opts for. We also envision future challenges for NFV platform design in the upcoming 5G era. We believe that our study gives network operators and service providers a detailed guideline for choosing the most appropriate NFV platform based on their respective requirements. Our work also provides guidelines for implementing new NFV platforms.