136 research outputs found

    In the Direction of Service Guarantees for Virtualized Network Functions

    The trend of consolidating network functions from specialized hardware to software running on virtualization servers brings significant advantages for reducing costs and simplifying service deployment. However, virtualization techniques have significant limitations when it comes to networking as there is no support for guaranteeing that network functions meet their service requirements. In this paper, we present a design for providing service guarantees to virtualized network functions based on rate control. The design is a combination of rate regulation through token bucket filters and the regular scheduling mechanisms in operating systems. It has the attractive property that traffic profiles are maintained throughout a series of network functions, which makes it well suited for service function chaining. We discuss implementation alternatives for the design and demonstrate how it can be implemented on two virtualization platforms: LXC containers and the KVM hypervisor. To evaluate the design, we conduct experiments where we measure throughput and latency using IP forwarders (routers) as examples of virtual network functions. Two significant factors for performance are investigated: the design of token buckets and the packet clustering effect that comes from scheduling. Finally, we demonstrate how performance guarantees are achieved for rate-controlled virtual routers under different scenarios.
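
    The design regulates each network function's traffic with token bucket filters. As a point of reference, below is a minimal token bucket sketch in Python; the rate and burst parameters and the conform-or-drop policy are illustrative assumptions, not details taken from the paper or its LXC/KVM implementations.

    import time

    class TokenBucket:
        """Minimal token bucket: refill at a fixed rate, cap at a burst depth."""

        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0       # refill rate in bytes per second
            self.capacity = burst_bytes      # bucket depth in bytes
            self.tokens = burst_bytes        # start with a full bucket
            self.last = time.monotonic()

        def conforms(self, packet_len):
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at the bucket depth.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_len:
                self.tokens -= packet_len    # consume tokens: the packet conforms
                return True
            return False                     # non-conforming: queue or drop

    # Example: shape a virtual router's egress to 100 Mbit/s with a 64 KiB burst.
    tb = TokenBucket(rate_bps=100_000_000, burst_bytes=64 * 1024)
    print(tb.conforms(1500))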

    5G Infrastructure Network Slicing: E2E Mean Delay Model and Effectiveness Assessment to Reduce Downtimes in Industry 4.0

    This work has been partially funded by the H2020 project 5G-CLARITY (Grant No. 871428) and the Spanish national project TRUE-5G (PID2019-108713RB-C53). Fifth Generation (5G) is expected to meet the stringent network performance requirements of Industry 4.0. Moreover, its built-in network slicing capabilities allow the heterogeneous traffic of Industry 4.0 to be supported over the same physical network infrastructure. However, 5G network slicing capabilities might not provide a sufficient degree of isolation for many private 5G network use cases, such as multi-tenancy in Industry 4.0. In this vein, infrastructure network slicing, which refers to the use of dedicated and well-isolated resources for each network slice at every network domain, fits the needs of those use cases. In this article, we evaluate the effectiveness of infrastructure slicing in providing isolation among production lines (PLs) in an industrial private 5G network. To that end, we develop a queuing theory-based model to estimate the end-to-end (E2E) mean packet delay of the infrastructure slices. We then use this model to compare the E2E mean delay for two configurations: dedicated infrastructure slices with segregated resources for each PL, against a single shared infrastructure slice serving the performance-sensitive traffic from all PLs. We also evaluate the use of Time-Sensitive Networking (TSN) against bare Ethernet to provide layer 2 connectivity among the 5G system components. We use a complete and realistic setup based on experimental and simulation data for the scenario considered. Our results support the effectiveness of infrastructure slicing in providing performance isolation among the different slices. Hence, using dedicated slices with segregated resources for each PL may reduce the number of production downtimes and the associated costs, as the malfunctioning of one PL will not affect the network performance perceived by the performance-sensitive traffic of other PLs. Lastly, our results show that, besides the improvement in performance, TSN technology truly provides full isolation in the transport network compared to standard Ethernet, thanks to its traffic prioritization, traffic regulation, and bandwidth reservation capabilities.
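
    To convey the flavor of such a model: if each infrastructure slice is approximated as a tandem of independent M/M/1 queues (a simplification of the paper's queuing model, with hypothetical rates), the E2E mean packet delay is the sum of the per-hop mean sojourn times 1/(mu - lambda), as in this sketch:

    def e2e_mean_delay(arrival_rate, service_rates):
        """Mean E2E delay (s) of a tandem of M/M/1 hops; rates in packets/s."""
        total = 0.0
        for mu in service_rates:
            assert arrival_rate < mu, "each hop must be stable (lambda < mu)"
            total += 1.0 / (mu - arrival_rate)   # M/M/1 mean sojourn time
        return total

    # Hypothetical slice: a RAN hop, a TSN/Ethernet transport hop, a core hop.
    print(e2e_mean_delay(800.0, [2000.0, 5000.0, 3000.0]))  # about 1.5 ms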

    Deployment and management of SDR cloud computing resources: problem definition and fundamental limits

    Software-defined radio (SDR) describes radio transceivers implemented in software that executes on general-purpose hardware. SDR combined with cloud computing technology will reshape the wireless access infrastructure, enabling computing resource sharing and centralized digital-signal processing (DSP). SDR clouds have different constraints than general-purpose grids or clouds: real-time response to user session requests and real-time execution of the corresponding DSP chains. This article addresses the SDR cloud computing resource management problem. We show that the maximum traffic load that a single resource allocator (RA) can handle is limited. It is a function of the RA complexity and the call setup delay and user blocking probability constraints. We derive the RA capacity analytically and provide numerical examples. The analysis demonstrates the fundamental tradeoffs between short call setup delays (few processors) and low blocking probability (many processors). The simulation results demonstrate the feasibility of a distributed resource management and the necessity of adapting the processor assignment to RAs according to the given traffic load distribution. These results provide new insights and guidelines for designing data centers and distributed resource management methods for SDR clouds.
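
    For intuition on the blocking side of this tradeoff, the Erlang-B recursion below computes the probability that a session request finds all processors busy; the offered load of 8 Erlangs is an illustrative assumption, and the paper's analysis additionally accounts for RA complexity and call setup delay.

    def erlang_b(offered_load, servers):
        """Erlang-B blocking probability via the numerically stable recursion."""
        b = 1.0
        for m in range(1, servers + 1):
            b = offered_load * b / (m + offered_load * b)
        return b

    for m in (5, 10, 20):
        print(m, round(erlang_b(offered_load=8.0, servers=m), 6))
    # Blocking falls from roughly 0.48 with 5 processors to about 1.6e-4 with
    # 20, while a larger pool lengthens the search performed at call setup.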

    Learning Augmented Optimization for Network Softwarization in 5G

    The rapid uptake of mobile devices and applications is posing unprecedented traffic burdens on existing networking infrastructures. To maximize both user experience and investment return, networking and communications systems are evolving to the next generation, 5G, which is expected to support more flexibility, agility, and intelligence in service provisioning and infrastructure management. Fulfilling these tasks is challenging, as today's networks are increasingly heterogeneous, dynamic, and large in scale. Network softwarization is one of the critical enabling technologies for implementing these requirements in 5G. Beyond the problems investigated in early research on this technology, newly emerging application requirements and advances in optimization and learning are introducing further challenges and opportunities for its full application in practical production environments. This motivates the thesis to develop a new learning-augmented optimization technology, which merges advanced optimization and learning techniques to meet the distinct characteristics of the new application environment. The key contributions of this thesis are as follows:
    • We first develop a stochastic solution to augment the optimization of Network Function Virtualization (NFV) services in dynamic networks. In contrast to the dominant NFV solutions, which target deterministic networking environments, the inherent dynamics and uncertainties of the 5G infrastructure impede the rollout of NFV in many emerging networking applications. Chapter 3 therefore investigates the network utility degradation that arises when implementing NFV in dynamic networks, and proposes a robust NFV solution that fully respects the underlying stochastic features. By exploiting the hierarchical decision structure of this problem, a distributed computing framework with two-level decomposition is designed to facilitate a distributed implementation of the proposed model in large-scale networks.
    • Next, Chapter 4 aims to intertwine traditional optimization and learning technologies. To reap the merits of both while avoiding their limitations, promising integrative approaches are investigated that combine traditional optimization theory with advanced learning methods. An online optimization process is then designed to learn the system dynamics of the network slicing problem, another critical challenge for network softwarization. Specifically, we first present a two-stage slicing optimization model with time-averaged constraints and objective to safeguard network slicing operations in time-varying networks. Directly computing an offline solution to this problem is intractable, since future system realizations are unknown at decision time. To address this, we combine historical learning with Lyapunov stability theory and develop a learning-augmented online optimization approach; a toy sketch of the underlying drift-plus-penalty mechanism follows this list. The approach enables the system to learn a safe slicing solution from both historical records and real-time observations. We prove that the proposed solution is always feasible and nearly optimal, up to a constant additive factor. Finally, simulation experiments demonstrate the considerable improvement achieved by the proposals.
    • The success of traditional solutions for optimizing stochastic systems often requires solving a base optimization program repeatedly until convergence, where each iteration uses the same model structure and differs only in its input data. This property of stochastic optimization systems motivates the work of Chapter 5, in which we apply recent deep learning technologies to abstract the core structure of an optimization model and then use the learned model to directly generate solutions to the equivalent optimization problem. To this end, an encoder-decoder based learning model is developed in Chapter 5 to improve the optimization of network slices. To facilitate solving the constrained combinatorial optimization program in a deep learning manner, we design a problem-specific decoding process that integrates the program constraints and problem context information into the training process. Once trained, the deep learning model can directly generate the solution to any specific problem instance, which avoids the extensive computation of traditional approaches that re-solve the whole combinatorial optimization problem from scratch for every instance. Trained with the REINFORCE gradient estimator, the resulting model achieves significantly reduced computation time and optimality loss in the experiments.
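
    The following toy sketch illustrates the drift-plus-penalty mechanism that underlies such Lyapunov-based online optimization; the action set, budget, and tradeoff weight V are hypothetical, and the thesis's learning-augmented approach additionally exploits historical records.

    import random

    ACTIONS = [1, 2, 3, 4]   # hypothetical per-slot bandwidth allocations
    BUDGET = 2.5             # time-averaged allocation must stay <= BUDGET
    V = 10.0                 # cost/constraint tradeoff; larger favors low cost
    Q = 0.0                  # virtual queue tracking constraint violation

    def penalty(x, demand):
        return max(demand - x, 0.0)          # unserved demand in this slot

    for t in range(10_000):
        demand = random.uniform(0.0, 4.0)
        # Pick the action minimizing V*penalty + Q*(usage beyond the budget).
        x = min(ACTIONS, key=lambda a: V * penalty(a, demand) + Q * (a - BUDGET))
        Q = max(Q + x - BUDGET, 0.0)         # virtual queue update
    print("final virtual queue:", Q)         # small Q: the average budget holds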

    The P-ART framework for placement of virtual network services in a multi-cloud environment

    Carriers’ network services are distributed, dynamic, and investment intensive. Deploying them as virtual network services (VNS) brings the promise of low-cost agile deployments, which reduce time to market new services. If these virtual services are hosted dynamically over multiple clouds, greater flexibility in optimizing performance and cost can be achieved. On the flip side, when orchestrated over multiple clouds, the stringent performance norms for carrier services become difficult to meet, necessitating novel and innovative placement strategies. In selecting the appropriate combination of clouds for placement, it is important to look ahead and visualize the environment that will exist at the time a virtual network service is actually activated. This serves multiple purposes: clouds can be selected to optimize the cost, the chosen performance parameters can be kept within the defined limits, and the speed of placement can be increased. In this paper, we propose the P-ART (Predictive-Adaptive Real Time) framework that relies on predictive-deductive features to achieve these objectives. With so much riding on predictions, we include in our framework a novel concept-drift compensation technique to make the predictions closer to reality by taking care of long-term traffic variations. At the same time, near real-time update of the prediction models takes care of sudden short-term variations. These predictions are then used by a new randomized placement heuristic that carries out a fast cloud selection using a least-cost latency-constrained policy. An empirical analysis carried out using datasets from a queuing-theoretic model and also through implementation on CloudLab proves the effectiveness of the P-ART framework. The placement system works fast, placing thousands of functions in a sub-minute time frame with a high acceptance ratio, making it suitable for dynamic placement. We expect the framework to be an important step in making the deployment of carrier-grade VNS on multi-cloud systems, using network function virtualization (NFV), a reality.
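
    The spirit of the placement step can be conveyed in a few lines: keep only the clouds whose predicted latency meets the service bound, then pick randomly among the cheapest few. This is an illustrative sketch, not P-ART's actual heuristic; the field names and the top-k randomization are assumptions.

    import random

    def select_cloud(clouds, latency_bound_ms, k=3):
        """clouds: dicts with predicted 'latency_ms' and 'cost' per cloud."""
        feasible = [c for c in clouds if c["latency_ms"] <= latency_bound_ms]
        if not feasible:
            return None                          # reject the placement request
        feasible.sort(key=lambda c: c["cost"])   # least-cost first
        return random.choice(feasible[:k])       # randomize among the k cheapest

    clouds = [
        {"name": "east", "latency_ms": 12.0, "cost": 1.0},
        {"name": "west", "latency_ms": 45.0, "cost": 0.6},
        {"name": "edge", "latency_ms": 4.0, "cost": 2.4},
    ]
    print(select_cloud(clouds, latency_bound_ms=20.0))  # "east" or "edge"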

    Orchestration and Scheduling of Resources in Softwarized Networks

    The Fifth Generation (5G) era is touted as the next generation of mobile networks that will unleash new services and network capabilities, opening up a whole new line of businesses characterized by top-notch Quality of Service (QoS) and Quality of Experience (QoE), empowered by many recent advancements in network softwarization and providing innovative on-demand service provisioning on a shared underlying network infrastructure. 5G networks will support the immense growth of the Internet of Things (IoT), with billions of connected IoT devices expected by 2020, providing a wide range of services spanning from low-cost sensor-based metering to low-latency communication services touching the health, education, and automotive sectors, among others. Mobile operators are striving to find a cost-effective network solution that will enable them to continuously and automatically upgrade their networks according to their ever-growing customers' demands, in the quest of fulfilling the rising opportunities of offering novel services empowered by the many emerging IoT devices. Thus, departing from the shortfalls of legacy hardware (i.e., high cost, difficult management and updates, etc.) and learning from the advantages of the virtualization technologies that enabled the sharing of computing resources in cloud environments, mobile operators have started to leverage the idea of network softwarization through several emerging technologies. Network Function Virtualization (NFV) promises an ultimate reduction in Capital Expenditures (CAPEX) and high flexibility in resource provisioning and service delivery by replacing hardware equipment with software. Software-Defined Networking (SDN) offers network and mobile operators programmable traffic management and delivery. These technologies will enable the launch of the Multi-Access Edge Computing (MEC) paradigm, which promises to fulfill the 5G requirement of providing low-latency services by bringing computing resources to the edge of the network, in close vicinity to users, hence assisting the limited capabilities of their IoT devices in delivering the needed services. By leveraging network softwarization, these technologies will initiate a tremendous re-design of current networks, which will be transformed into self-managed, software-based networks with benefits ranging from flexibility and programmability to automation and elasticity. This dissertation elaborates on and addresses key challenges in enabling this re-design of current networks to support a smooth integration of the NFV and MEC technologies. The thesis provides a profound understanding of, and novel contributions to, resource and service provisioning and scheduling, towards enabling efficient resource and network utilization of the underlying infrastructure by leveraging several optimization and game-theoretic techniques. In particular, we first investigate the interplay between network function mapping, traffic routing, and Network Service (NS) scheduling in NFV-based networks, and present a Column Generation (CG) decomposition method that solves the problem with considerable runtime improvement over mathematical programming formulations.
Given the increasing interest in providing low-latency services and the correlation between this objective and the goal of network operators to maximize network admissibility through efficient utilization of network resources, we revisit the latter problem and tackle it under different assumptions and objectives. Given its complexity, we present a novel game-theoretic approach that provides a bounded solution to the problem. Further, we extend our work to the network edge, where we promote network elasticity and leverage virtualization technologies by addressing the problem of task offloading and scheduling along with the IoT application resource allocation problem. Given the complexity of the problem, we propose a Logic-Based Benders Decomposition (LBBD) method to efficiently solve it to optimality.
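
    For a sense of the scheduling subproblem alone, the sketch below greedily assigns the functions of each network service, in chain order, to the node that can start them earliest. It is only a baseline intuition with made-up processing times; the thesis addresses the joint mapping, routing, and scheduling problem with column generation rather than this greedy rule.

    def greedy_schedule(services, node_count):
        """services: one list of per-function processing times per NS chain."""
        node_free = [0.0] * node_count            # earliest free time per node
        schedule = []
        for sid, chain in enumerate(services):
            t = 0.0                               # a chain's functions run in order
            for fid, proc_time in enumerate(chain):
                node = min(range(node_count), key=lambda n: max(node_free[n], t))
                start = max(node_free[node], t)   # wait for node and predecessor
                t = start + proc_time
                node_free[node] = t
                schedule.append((sid, fid, node, start, t))
        return schedule

    for entry in greedy_schedule([[2.0, 1.0], [1.5, 2.5]], node_count=2):
        print(entry)  # (service, function, node, start, finish)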

    Resource Allocation in Mobile Edge Computing Environments

    Abstract: The evolution of information technology is increasing the diversity of connected devices and leading to the expansion of new application areas. These applications require ultra-low latency, which cannot be achieved by legacy cloud infrastructures given their distance from users. By placing resources closer to users, the recently developed edge computing paradigm aims to meet the needs of these applications. Edge computing is inspired by cloud computing and extends it to the edge of the network, in proximity to where the data is generated. This paradigm leverages the proximity between the processing infrastructure and the users to ensure ultra-low latency and high data throughput. The aim of this thesis is to improve resource allocation at the network edge to provide an improved quality of service and experience for low-latency applications. For better resource allocation, it is necessary to have reliable knowledge about the resources available at any moment. The first contribution of this thesis is a resource representation that allows the supervisory entity to acquire information about the resources available to each device. This information is then used by the resource allocation scheme to allocate resources appropriately for the different services. The resource allocation scheme is based on Lyapunov optimization, and it is executed only when resource allocation is required, which reduces the latency and resource consumption on each edge device. The second contribution of this thesis focuses on resource allocation for edge services. The services are created by chaining a set of virtual network functions. Resource allocation for services consists of finding an adequate placement, routing, and scheduling for these virtual network functions. We propose a solution based on game theory and machine learning to find a suitable placement and routing as well as an appropriate schedule for these functions at the network edge. Finding the placement and routing of network functions is formulated as a mean-field game solved by iterative Ishikawa-Mann learning. In addition, the scheduling of the network functions on the different edge nodes is formulated as a matching game, which is solved using an improved version of the deferred acceptance algorithm that we propose. The third contribution of this thesis is resource allocation for vehicular services at the network edge. In this contribution, services are migrated and moved across the different edge infrastructures to ensure service continuity. Vehicular services are particularly delay sensitive and relate mainly to road safety and security; their migration is therefore a complex operation. We propose an approach based on deep reinforcement learning to proactively migrate the different services while ensuring their continuity under high mobility constraints.
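
    The scheduling contribution above builds on deferred acceptance matching. For reference, here is the textbook one-to-one deferred acceptance (Gale-Shapley) algorithm with hypothetical preference lists; the improved variant proposed in the thesis is not reproduced here.

    def deferred_acceptance(func_prefs, node_prefs):
        """func_prefs/node_prefs: id -> ordered list of preferred partner ids."""
        rank = {n: {f: i for i, f in enumerate(p)} for n, p in node_prefs.items()}
        engaged = {}                        # node -> function it currently holds
        nxt = {f: 0 for f in func_prefs}    # next preference each function tries
        free = list(func_prefs)             # functions still proposing
        while free:
            f = free.pop()
            n = func_prefs[f][nxt[f]]       # propose to the best untried node
            nxt[f] += 1
            if n not in engaged:
                engaged[n] = f              # node tentatively accepts
            elif rank[n][f] < rank[n][engaged[n]]:
                free.append(engaged[n])     # node trades up; old proposer freed
                engaged[n] = f
            else:
                free.append(f)              # rejected: f tries its next choice
        return {f: n for n, f in engaged.items()}

    prefs_f = {"vnf1": ["nodeA", "nodeB"], "vnf2": ["nodeA", "nodeB"]}
    prefs_n = {"nodeA": ["vnf2", "vnf1"], "nodeB": ["vnf1", "vnf2"]}
    print(deferred_acceptance(prefs_f, prefs_n))  # {'vnf2': 'nodeA', 'vnf1': 'nodeB'}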

    Failure Analysis in Next-Generation Critical Cellular Communication Infrastructures

    The advent of communication technologies marks a transformative phase in critical infrastructure construction, where the meticulous analysis of failures becomes paramount to achieving the fundamental objectives of continuity, security, and availability. This survey enriches the discourse on failures, failure analysis, and countermeasures in the context of next-generation critical communication infrastructures. Through an exhaustive examination of the existing literature, we discern and categorize prominent research orientations, namely resource depletion, security vulnerabilities, and system availability concerns. We also analyze constructive countermeasures tailored to the identified failure scenarios and their prevention. Furthermore, the survey emphasizes the imperative for standardization in addressing failures related to Artificial Intelligence (AI) within the ambit of sixth-generation (6G) networks, reflecting a forward-looking perspective on the envisioned intelligence of the 6G network architecture. By identifying new challenges and delineating future research directions, this survey can help guide stakeholders toward unexplored territories, fostering innovation and resilience in critical communication infrastructure development and failure prevention.