353 research outputs found

    Flexible architecture for the future internet scalability of SDN control plane

    Get PDF
    Software-Defined Networking (SDN) separates the control plane from the data plane. The initial SDN approach relies on a single centralized controller, which may not scale as a network grows in size. Distributed controllers have emerged to address the disadvantages of a single centralized controller: to allow SDNs to scale to several thousand switches, the control architecture must be distributed, with control traffic exchanged between switches and controllers and among the controllers themselves. One of the most significant research challenges for distributed controller architectures is managing the controllers effectively, which includes allocating enough controllers to appropriate network locations. To address these issues, this thesis makes the following major contributions. First, it extends the K-means- and K-center-based approach to the Controller Placement Problem (CPP) into a Hierarchical Controller Placement Problem (HCPP), with a Super Controller (SC) at the top level, Master Controllers (MCs) at the middle level, and Domain Controllers (DCs) at the lowest level. The optimization metric is the latency between each controller and the switches assigned to it. The proposed architecture and methodology are evaluated on the topology of Western European NRENs from the Internet Topology Zoo: the network topology is divided into clusters, the optimal number of DCs and their placement are determined for each cluster, and MC placement optimization then determines the optimal number and placement of MCs. As a second contribution, an accumulated latency is defined for the CPP that takes into account both the latency between a controller and its associated switches and the latency between controllers. Under a latency constraint, the placement is formulated as a mixed-integer linear programming (MILP) optimization problem. The goal is to reduce the accumulated latency while also reducing the number of network controllers and optimizing their placement, striking an optimal balance between the two. The performance of the developed method is evaluated on the real Internet2 OS3E network topology. The third contribution is a metric that also includes reliability. Inter-controller communication latency must be considered as well, because a low controller-to-switch delay does not always imply a short controller-to-controller delay for a given placement. We therefore propose a novel CPP metric that accounts for both communication latency and communication reliability between switches and controllers as well as among controllers, where reliability is assessed under single-link failure. This contribution identifies the controller placements that achieve low latencies in control-plane traffic, with the goal of reducing the average latency. As the fourth contribution, this study evaluates the Joint Latency and Reliability-aware Controller Placement (LRCP) optimization model. As the evaluation metric, control plane latency (CPL) is defined as the sum of the average switch-to-controller latency and the average inter-controller latency. Using the actual latencies of the real network topology, the CPL is calculated for every optimal placement in the network.
    In the case of a single-link failure, the actual CPL of the LRCP placements is calculated and evaluated to determine how good those placements are, and the CPL metric is used to compare the latency and reliability metrics against other models. This study shows that the developed methodologies are highly effective for large-scale networks, searching all feasible controller placements while assessing the outcomes. In addition, compared with previous work, they include inter-controller latency and reliability for a single-link failure event.
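    The CPL metric above is straightforward to compute once a candidate placement and the pairwise path latencies of the topology are known. The following Python sketch is a hypothetical illustration (not the thesis code); the networkx dependency, the toy topology, and the assignment of each switch to its nearest controller are assumptions:

```python
# Hypothetical sketch: computing the control plane latency (CPL) metric --
# average switch-to-controller latency plus average inter-controller latency --
# for one candidate controller placement.
from itertools import combinations
import networkx as nx

def control_plane_latency(graph: nx.Graph, controllers: list) -> float:
    """CPL = avg. switch-to-nearest-controller latency + avg. inter-controller latency."""
    # All-pairs shortest-path latencies over link weights (e.g., propagation delay in ms).
    dist = dict(nx.all_pairs_dijkstra_path_length(graph, weight="latency"))

    # Each switch is assigned to its nearest controller.
    switch_latency = sum(min(dist[s][c] for c in controllers) for s in graph.nodes)
    avg_switch = switch_latency / graph.number_of_nodes()

    # Average latency between every pair of controllers (0 if only one controller).
    pairs = list(combinations(controllers, 2))
    avg_inter = sum(dist[a][b] for a, b in pairs) / len(pairs) if pairs else 0.0

    return avg_switch + avg_inter

# Toy 4-node topology; an exhaustive search over placements of k controllers
# would simply minimize control_plane_latency over all k-subsets of nodes.
g = nx.Graph()
g.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 3.0), (2, 3, 1.5), (0, 3, 4.0)], weight="latency")
print(control_plane_latency(g, controllers=[1, 3]))
```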

    Real-Time Containers: A Survey

    Get PDF
    Container-based virtualization has gained significant importance in the deployment of software applications in cloud-based environments. The technology relies entirely on operating system features and does not require a virtualization layer (hypervisor) that introduces performance degradation. Container-based virtualization makes it possible to co-locate multiple isolated containers on a single computation node as well as to decompose an application into multiple containers distributed among several hosts (e.g., in a fog computing layer). The technology also seems very promising in other domains, e.g., industrial automation, automotive, and the aviation industry, where mixed-criticality containerized applications from various vendors can be co-located on shared resources. However, such industrial domains often require real-time behavior (i.e., the capability to meet predefined deadlines), which container-based virtualization does not yet fully support. In this work, we provide a systematic literature survey that summarizes the research community's efforts to bring real-time properties to container-based virtualization. We categorize existing work into main research areas and identify points where the technology is still immature.
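    As a concrete illustration of the real-time capability the survey refers to, the sketch below (a hypothetical example, not taken from any of the surveyed works) asks the Linux kernel for the SCHED_FIFO real-time policy from Python; inside a container this additionally requires the appropriate privileges and a real-time CPU budget on the host, which is precisely the kind of support gap the survey examines:

```python
import os

# Minimal sketch: request the SCHED_FIFO real-time policy for the current
# process so that it preempts ordinary (SCHED_OTHER) tasks.  Priority 50 is an
# arbitrary choice in the 1..99 range.  Without CAP_SYS_NICE (and, inside a
# container, a real-time runtime budget on the host) this raises PermissionError.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))  # pid 0 = calling process
    print("running with policy:", os.sched_getscheduler(0))
except PermissionError:
    print("real-time policy refused: missing privileges or RT budget")
```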

    Temporal Isolation Among LTE/5G Network Functions by Real-time Scheduling

    Get PDF
    Radio access networks for future LTE/5G scenarios need to be designed to satisfy increasingly stringent requirements in terms of overall capacity, individual user performance, flexibility, and power efficiency. This is triggering a major shift in the telecom industry from statically sized, physically provisioned network appliances towards virtualized network functions that can be elastically deployed within the flexible private cloud of a network operator. However, a major issue in delivering strong QoS levels is keeping in check the temporal interference among co-located services as they compete for shared physical resources. In this paper, this problem is tackled by proposing a solution based on a real-time scheduler with strong temporal isolation guarantees at the OS/kernel level. This enables the development of a mathematical model linking the major parameters of the system configuration and the input traffic characterization with the achieved performance and the probabilistic distribution of response times. The model is verified through extensive experiments on Linux with a synthetic benchmark tuned according to data from a real LTE packet processing scenario.
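    The abstract does not spell out the model, but the intuition behind reservation-based temporal isolation can be sketched with a back-of-the-envelope calculation. The Python snippet below is a hypothetical simplification (an M/M/1 approximation, not the paper's probabilistic response-time analysis): a function granted a CPU budget Q every period P behaves roughly like a processor of speed Q/P.

```python
# Hypothetical back-of-the-envelope model: a packet processing function given a
# CPU reservation of Q time units every P time units behaves roughly like a
# server of speed alpha = Q / P.  Under an M/M/1 approximation its mean
# response time is 1 / (mu_eff - lam), valid only while lam < mu_eff.

def mean_response_time(budget_us: float, period_us: float,
                       service_time_us: float, arrival_rate_per_us: float) -> float:
    alpha = budget_us / period_us                 # fraction of one CPU reserved
    mu_eff = alpha / service_time_us              # effective packet service rate
    if arrival_rate_per_us >= mu_eff:
        raise ValueError("reservation too small: the queue is unstable")
    return 1.0 / (mu_eff - arrival_rate_per_us)   # mean sojourn time (us)

# e.g. 300 us of CPU every 1000 us, 2 us per packet, 100k packets/s (0.1 pkt/us)
print(mean_response_time(300, 1000, 2.0, 0.1))
```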

    Strong Temporal Isolation among Containers in OpenStack for NFV Services

    Get PDF
    In this paper, the problem of temporal isolation among containerized software components running in shared cloud infrastructures is tackled by proposing an approach based on hierarchical real-time CPU scheduling. This allows a precise share of the available computing power to be reserved for each container deployed on a multi-core server, providing it with stable performance independently of the load of other co-located containers. The proposed technique enables the use of reliable modeling techniques for end-to-end service chains that are effective in controlling application-level performance. An implementation of the technique within the well-known OpenStack cloud orchestration software is presented, focusing on a use case framed in the context of network function virtualization. The modified OpenStack leverages the special real-time scheduling features made available in the underlying Linux operating system through a patch to the in-kernel process scheduler. The effectiveness of the technique is validated by gathering performance data from two applications running on a real test-bed with the mentioned modifications to OpenStack and the Linux kernel. A performance model is developed that tightly captures the application behavior under a variety of conditions. Extensive experimentation shows that the proposed mechanism successfully guarantees isolation of individual containerized activities on the platform.
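    To make the idea of reserving a precise share of the available computing power for each container concrete, the following sketch shows a hypothetical admission-control step (not the modified OpenStack code): each container declares a (runtime, period) reservation, and a first-fit-decreasing pass places reservations on cores while keeping every core's total utilization at or below 1.

```python
# Hypothetical sketch of reservation-based admission control: each container
# asks for a CPU reservation (runtime, period); a reservation fits on a core
# only if the core's total utilization stays <= 1.

from dataclasses import dataclass

@dataclass
class Reservation:
    name: str
    runtime_us: int   # budget granted every period
    period_us: int

    @property
    def utilization(self) -> float:
        return self.runtime_us / self.period_us

def place(reservations: list[Reservation], num_cores: int) -> dict[str, int]:
    """First-fit-decreasing assignment of reservations to cores; raise if one does not fit."""
    load = [0.0] * num_cores
    assignment = {}
    for r in sorted(reservations, key=lambda r: r.utilization, reverse=True):
        core = next((c for c in range(num_cores) if load[c] + r.utilization <= 1.0), None)
        if core is None:
            raise RuntimeError(f"cannot admit {r.name}: no core has enough spare capacity")
        load[core] += r.utilization
        assignment[r.name] = core
    return assignment

print(place([Reservation("vnf-a", 300, 1000), Reservation("vnf-b", 600, 1000),
             Reservation("vnf-c", 500, 1000)], num_cores=2))
```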

    Framework for Virtualized Network Functions (VNFs) in Cloud of Things Based on Network Traffic Services

    Get PDF
    The Cloud of Things (CoT), which combines the Internet of Things (IoT) and cloud computing, can offer Virtualized Network Functions (VNFs) to IoT devices dynamically, based on service-specific requirements. Although the provisioning of VNFs in the CoT is an online decision-making problem, the most widely used techniques primarily focus on describing the environment with simple models in order to find the optimal solution. This leads to inefficient, coarse-grained provisioning, since the Quality of Service (QoS) requirements of different types of CoT services are not considered and valuable historical experience on how to obtain the best long-term benefit is disregarded. This paper proposes a methodology for provisioning VNFs intelligently in order to schedule CoT resources adaptively in line with the traffic detected from diverse network services. The system makes decisions using Deep Reinforcement Learning (DRL) models that take into account the complexity of network configurations and traffic changes. To obtain stable performance, a special surrogate objective function and a policy-gradient DRL method known as Policy Optimisation using Kronecker-Factored Trust Region (POKTR) are utilised. Experimental results support the claim that this strategy improves CoT QoS through real-time VNF provisioning: the POKTR-based DRL model maximises throughput while minimising network congestion compared with earlier DRL algorithms.
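    The following snippet illustrates the general policy-gradient mechanism that such a DRL provisioner builds on. It is a plain REINFORCE toy on a softmax policy, not the POKTR algorithm used in the paper, and the states (traffic-load levels), actions (number of VNF instances), and reward function are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 3          # traffic-load levels x "how many VNF instances"
theta = np.zeros((n_states, n_actions))

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def reward(s, a):
    # Toy reward: penalize congestion (too few instances) and over-provisioning.
    demand = s + 1
    return -abs(demand - (a + 1))

alpha = 0.1
for _ in range(5000):
    s = rng.integers(n_states)
    p = policy(s)
    a = rng.choice(n_actions, p=p)
    r = reward(s, a)
    grad = -p
    grad[a] += 1.0                   # d log pi(a|s) / d theta[s]
    theta[s] += alpha * r * grad     # REINFORCE update

print([int(np.argmax(policy(s))) for s in range(n_states)])  # learned instances per load level
```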

    Toward Optimal Resource Allocation of Virtualized Network Functions for Hierarchical Datacenters

    Get PDF
    Telecommunications service providers (TSPs) used to deliver network functions to end users on dedicated hardware, but they are now turning to virtualized infrastructure to reduce costs and increase flexibility in resource allocation. A representative case is the Central Office Re-architected as Datacenter (CORD) project from AT&T, which aims to deploy virtualized network functions (VNFs) to over 4000 central offices (COs) across the U.S. However, there is a wide spectrum of options for deploying VNFs over the COs, ranging from highly distributed to highly centralized. The former benefits end users with short response times but is inherently limited in utilizing geographically dispersed resources, while the latter allows resources to be utilized more efficiently at the cost of longer response times. In this work, we model the TSP's virtualized infrastructure as hierarchical datacenters, namely hierarchical CORD, and provide a resource allocation solution that strikes the optimal balance between the two extreme options. Our evaluations reveal that, in general, the 3-tier architecture incurs the least cost when deploying VNFs under moderate or loose delay constraints. Furthermore, the margin of improvement on the resource allocation cost increases inversely with the overall system utilization rate. Our results also suggest that as heavy request load overwhelms the network infrastructure, the relevant VNFs should be migrated to lower-tier edge datacenters or to nearby datacenters with superior network capacity. The evaluations also demonstrate that the proposed model allows highly adaptive VNF deployment in the hierarchical architecture under various conditions. This work was supported in part by the H2020 Collaborative Europe/Taiwan Research Project 5G-CORAL under Grant 761586, and in part by the Ministry of Science and Technology, Taiwan, under Grant MOST-106-2218-E-009-018 and Grant MOST-106-2221-E-194-021-MY3.
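    The distributed-versus-centralized trade-off described above can be illustrated with a toy selection rule (a hypothetical sketch, not the paper's optimization model): given a per-service delay budget, pick the cheapest tier that still meets it. The tier names, delays, and relative costs below are invented for illustration.

```python
TIERS = [  # (name, one-way delay to user in ms, relative resource cost)
    ("edge CO",      2.0, 1.0),   # most distributed: lowest delay, highest cost
    ("regional DC",  8.0, 0.6),
    ("national DC", 25.0, 0.3),   # most centralized: highest delay, lowest cost
]

def cheapest_feasible_tier(delay_budget_ms: float) -> str:
    """Return the cheapest tier whose round-trip delay fits within the budget."""
    feasible = [(cost, name) for name, delay, cost in TIERS if 2 * delay <= delay_budget_ms]
    if not feasible:
        raise ValueError("no tier satisfies the delay constraint")
    return min(feasible)[1]

print(cheapest_feasible_tier(10.0))   # tight constraint -> edge CO
print(cheapest_feasible_tier(60.0))   # loose constraint -> national DC
```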

    Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art

    Full text link
    Software-Defined Networking (SDN) is an evolutionary networking paradigm that has been adopted by large network and cloud providers, including the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires considerable time, financial resources, and technical expertise; consequently, many enterprises cannot afford it. A compromise is a hybrid networking environment (a.k.a. Hybrid SDN, or hSDN) in which SDN functionalities are leveraged while existing traditional network infrastructure is retained. Recently, hSDN has come to be seen as a viable networking solution for a diverse range of businesses and organizations, and the body of literature on hSDN research has grown remarkably. Accordingly, we present this paper as a comprehensive state-of-the-art survey that examines hSDN from many different perspectives.

    Beyond 5G Networks: Integration of Communication, Computing, Caching, and Control

    Get PDF
    In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. Such challenges can potentially be overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of i4C, covering background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation for the integration approach. We review current state-of-the-art research efforts related to i4C, focusing on recent trends in both conventional and artificial intelligence (AI)-based integration approaches, and we highlight the need for intelligence in resource integration. We then discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we outline open challenges and present future research directions for beyond-5G networks, such as 6G. Comment: This article has been accepted for inclusion in a future issue of the China Communications journal in IEEE Xplore.