
    Design, implementation and experimental evaluation of a network-slicing aware mobile protocol stack

    International Mention in the doctoral degree.

    With the arrival of new-generation mobile networks, we currently observe a paradigm shift, where monolithic network functions running on dedicated hardware are now implemented as software pieces that can be virtualized on general-purpose hardware platforms. This paradigm shift stands on the softwarization of network functions and the adoption of virtualization techniques. Network Function Virtualization (NFV) comprises the softwarization of network elements and the virtualization of these components. It brings multiple advantages: (i) flexibility, allowing easy management of virtual network functions (VNFs) (deploy, start, stop or update); (ii) efficiency, as resources can be adequately consumed thanks to the increased flexibility of the network infrastructure; and (iii) reduced costs, due to the ability to share hardware resources. To this end, multiple challenges must be addressed to effectively leverage all these benefits.

    Network Function Virtualization introduced the concept of the virtual network, which became a key enabler of 5G network flexibility: Network Slicing. This new paradigm represents a new way to operate mobile networks, where the underlying infrastructure is "sliced" into logically separated networks that can be customized to the specific needs of each tenant. This approach also enables VNFs to be instantiated at different locations of the infrastructure, choosing their optimal placement based on parameters such as the requirements of the service traversing the slice or the available resources. This decision process is called orchestration and involves all the VNFs within the same network slice. The orchestrator is the entity in charge of managing network slices. Hands-on experiments on network slicing are essential to understand its benefits and limits, and to validate design and deployment choices. While some network slicing prototypes have been built for Radio Access Networks (RANs), leveraging the wide availability of radio hardware and open-source software, there is currently no open-source suite for end-to-end network slicing available to the research community. Similarly, orchestration mechanisms must be evaluated as well to properly validate theoretical solutions addressing diverse aspects such as resource assignment or service composition.

    This thesis contributes to the study of the evolution of mobile networks toward softwarization and cloudification. We identify software patterns for network function virtualization, including the definition of a novel mobile architecture that condenses the virtualization architecture by splitting functionality into atomic functions. Then, we design, implement and evaluate an open-source network slicing implementation. Our results show that per-slice customization is achievable without paying a price in terms of performance, and we provide the slicing implementation to the research community. Moreover, we propose a framework to flexibly re-orchestrate a virtualized network, allowing on-the-fly re-orchestration without disrupting ongoing services. This framework can greatly improve performance under changing conditions. We evaluate the resulting performance in a realistic network slicing setup, showing the feasibility and advantages of flexible re-orchestration.
    Lastly, following the re-design of network functions envisioned during our study of the evolution of mobile networks, we present a novel pipeline architecture specifically engineered for 4G/5G Physical Layers virtualized over clouds. The proposed design pursues two objectives: resiliency against unpredictable computing time, and parallelization to increase efficiency on multi-core clouds. To this end, we employ techniques such as tight deadline control, jitter-absorbing buffers, predictive Hybrid Automatic Repeat Request, and congestion control. Our experimental results show that our cloud-native approach attains > 95% of the theoretical spectrum efficiency in hostile environments where state-of-the-art architectures collapse.

    This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: President: Francisco Valera Pintor; Secretary: Vincenzo Sciancalepore; Member: Xenofon Fouka
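    The deadline control and jitter absorption described above lend themselves to a small illustration. Below is a minimal sketch (not the thesis code) of the core idea, assuming a per-subframe processing budget of 3 ms and decoding split into interruptible steps, so that work which would overrun the budget is abandoned early rather than stalling the pipeline; the `Subframe` class and the jitter model are illustrative assumptions.

```python
import random
import time
from dataclasses import dataclass, field

SUBFRAME_BUDGET_S = 0.003  # assumed HARQ-driven budget per subframe (~3 ms)

@dataclass
class Subframe:
    seq: int
    arrival: float = field(default_factory=time.monotonic)

def process(sf: Subframe) -> bool:
    """Decode one subframe under its deadline.

    Returns True on success, False if the deadline forced early termination
    (a real system would then fall back to predictive HARQ feedback).
    """
    deadline = sf.arrival + SUBFRAME_BUDGET_S
    for _ in range(10):  # decoding split into small, interruptible steps
        if time.monotonic() >= deadline:
            return False  # give up early: a NACK beats a stalled pipeline
        time.sleep(random.uniform(0.0, 0.0005))  # emulate jittery cloud CPU
    return True

if __name__ == "__main__":
    results = [process(Subframe(seq=i)) for i in range(100)]
    print(f"subframes decoded within budget: {sum(results)}/100")
```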

    Multiframe coded computation for distributed uplink channel decoding

    The latest 5G technology in wireless communication has led to increasing demand for higher data rates and lower latencies. The overall latency of a cloud radio access network is greatly affected by the decoding latency in the uplink channel. Various proposed solutions suggest using network function virtualization (NFV), the process of decoupling network functions from hardware appliances. This provides the flexibility to implement distributed computing and network coding to effectively reduce the decoding latency and improve the reliability of the system. To keep the system cost-effective, commercial off-the-shelf (COTS) devices are used, which are susceptible to random runtimes and server failures. Previous work has shown that NFV coded computation provides a significant improvement in straggler mitigation. This work focuses on reducing the overall decoding time while improving the fault tolerance of the system. The overall latency of the system can be reduced by improving the computation efficiency and processing speed in a distributed communication network. To achieve this, multiframe NFV coded computation is implemented, which exploits the fact that servers have different runtimes. In multiframe coded computation, each server continues to decode coded frames of the original message until the message is decoded. Individual servers can make up for straggling servers or server failures, improving the fault tolerance and recovery time of the network. As a consequence, the overall decoding latency of a message is significantly reduced. This is supported by simulation results, which show the improvement in system performance in comparison to a standard NFV coded system.
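    A small simulation sketch can make the multiframe latency gain concrete. Assuming (as an illustration, not values from the paper) that the original message is recoverable from any k of the coded frames and that per-frame server runtimes are i.i.d. exponential, the script below compares the standard scheme, where each of n servers decodes one coded frame, against the multiframe scheme, where each server keeps decoding further frames until k frames exist in total.

```python
import heapq
import random

def standard_latency(n: int, k: int, rate: float) -> float:
    """One coded frame per server: done when the k fastest of n finish."""
    runtimes = sorted(random.expovariate(rate) for _ in range(n))
    return runtimes[k - 1]

def multiframe_latency(n: int, k: int, rate: float) -> float:
    """Servers decode frame after frame: done when k frames exist in total."""
    # event heap of (completion_time, server); each completion immediately
    # schedules that server's next coded frame
    heap = [(random.expovariate(rate), s) for s in range(n)]
    heapq.heapify(heap)
    done = 0
    while True:
        t, s = heapq.heappop(heap)
        done += 1
        if done == k:
            return t  # fast servers have absorbed the stragglers' work
        heapq.heappush(heap, (t + random.expovariate(rate), s))

if __name__ == "__main__":
    random.seed(1)
    trials = 2000
    std = sum(standard_latency(12, 10, 1.0) for _ in range(trials)) / trials
    mf = sum(multiframe_latency(12, 10, 1.0) for _ in range(trials)) / trials
    print(f"mean decoding latency -- standard: {std:.3f}, multiframe: {mf:.3f}")
```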

    Bayesian online learning for energy-aware resource orchestration in virtualized RANs

    Proceedings of: IEEE International Conference on Computer Communications, 10-13 May 2021, Vancouver, BC, Canada.

    Radio Access Network Virtualization (vRAN) will spearhead the quest towards supple radio stacks that adapt to heterogeneous infrastructure: from energy-constrained platforms deploying cells-on-wheels (e.g., drones) or battery-powered cells to green edge clouds. We perform an in-depth experimental analysis of the energy consumption of virtualized Base Stations (vBSs) and draw two conclusions: (i) characterizing performance and power consumption is intricate, as it depends on human behavior such as network load or user mobility; and (ii) there are many control policies, some of which have non-linear and monotonic relations with power and throughput. Driven by our experimental insights, we argue that machine learning holds the key for vBS control. We formulate two problems and propose two algorithms: (i) BP-vRAN, which uses Bayesian online learning to balance performance and energy consumption, and (ii) SBP-vRAN, which augments our Bayesian optimization approach with safe controls that maximize performance while respecting hard power constraints. We show that our approaches are data-efficient and have provable performance, which is paramount for carrier-grade vRANs. We demonstrate the convergence and flexibility of our approach and assess its performance using an experimental prototype.

    This work was supported by the European Commission through Grant No. 856709 (5Growth) and Grant No. 101017109 (DAEMON); and by SFI through Grant No. SFI 17/CDA/4760
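    As an illustration of the Bayesian online learning loop in the spirit of BP-vRAN, the sketch below models the reward of a single assumed control knob (an airtime cap) with a Gaussian process and picks the next configuration to try with an upper-confidence-bound rule; the synthetic reward function, all constants, and the omission of SBP-vRAN's safety constraints are simplifying assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure_reward(airtime_cap: float) -> float:
    """Stand-in for a real vBS measurement: throughput minus weighted power."""
    throughput = 10 * np.sqrt(airtime_cap)   # diminishing returns with airtime
    power = 6 * airtime_cap                  # roughly load-proportional power
    return throughput - power + np.random.normal(0, 0.2)  # noisy observation

candidates = np.linspace(0.05, 1.0, 40).reshape(-1, 1)  # possible airtime caps
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.04)
X, y = [], []

for step in range(25):  # online loop: try a config, observe, update the model
    if X:
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(candidates, return_std=True)
        cap = float(candidates[np.argmax(mu + 2.0 * sigma), 0])  # UCB pick
    else:
        cap = float(candidates[np.random.randint(len(candidates)), 0])
    X.append([cap])
    y.append(measure_reward(cap))

print(f"best observed airtime cap: {X[int(np.argmax(y))][0]:.2f}")
```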

    Cloud RAN for Mobile Networks - a Technology Overview

    Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges operators face while trying to support growing end-user needs. The main idea behind C-RAN is to pool the Baseband Units (BBUs) from multiple base stations into a centralized BBU Pool for a statistical multiplexing gain, while shifting the burden to the high-speed wireline transmission of In-phase and Quadrature (IQ) data. C-RAN enables energy-efficient network operation and possible cost savings on baseband resources. Furthermore, it improves network capacity by performing load balancing and cooperative processing of signals originating from several base stations. This article surveys the state-of-the-art literature on C-RAN. It can serve as a starting point for anyone willing to understand the C-RAN architecture and advance research on C-RAN.
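    The statistical multiplexing gain behind BBU pooling can be illustrated numerically (this sketch is ours, not from the article): per-site deployments must each be provisioned for their own peak load, while a BBU Pool only needs to cover the peak of the aggregate. The synthetic sinusoidal load traces with shifted busy hours are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(0, 24, 0.25)  # one day at 15-minute resolution
n_cells = 20

# each cell peaks at a different busy hour (e.g., office vs residential areas)
peak_hour = rng.uniform(0, 24, n_cells)
loads = np.maximum(
    0.0,
    0.5 + 0.5 * np.cos(2 * np.pi * (hours[None, :] - peak_hour[:, None]) / 24)
    + rng.normal(0, 0.05, (n_cells, len(hours))),
)

per_site = loads.max(axis=1).sum()  # provision every site for its own peak
pooled = loads.sum(axis=0).max()    # provision the pool for the aggregate peak
print(f"statistical multiplexing gain of the BBU Pool: {per_site / pooled:.2f}x")
```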

    vrAIn: a deep learning approach tailoring computing and radio resources in virtualized RANs

    Proceedings of: 25th Annual International Conference on Mobile Computing and Networking (MobiCom'19), October 21-25, 2019, Los Cabos, Mexico.

    The virtualization of radio access networks (vRAN) is the last milestone in the NFV revolution. However, the complex dependencies between computing and radio resources make vRAN resource control particularly daunting. We present vrAIn, a dynamic resource controller for vRANs based on deep reinforcement learning. First, we use an autoencoder to project high-dimensional context data (traffic and signal quality patterns) into a latent representation. Then, we use a deep deterministic policy gradient (DDPG) algorithm based on an actor-critic neural network structure and a classifier to map (encoded) contexts into resource control decisions. We have implemented vrAIn using an open-source LTE stack over different platforms. Our results show that vrAIn successfully derives appropriate compute and radio control actions irrespective of the platform and context: (i) it provides savings in computational capacity of up to 30% over CPU-unaware methods; (ii) it improves the probability of meeting QoS targets by 25% over static allocation policies using similar CPU resources on average; (iii) upon CPU capacity shortage, it improves throughput performance by 25% over state-of-the-art schemes; and (iv) it performs close to optimal policies resulting from an offline oracle. To the best of our knowledge, this is the first work that thoroughly studies the computational behavior of vRANs, and the first approach to a model-free solution that does not need to assume any particular vRAN platform or system conditions.

    The work of University Carlos III of Madrid was supported by H2020 5G-MoNArch project (grant agreement no. 761445) and H2020 5G-TOURS project (grant agreement no. 856950). The work of NEC Laboratories Europe was supported by H2020 5G-TRANSFORMER project (grant agreement no. 761536) and 5GROWTH project (grant agreement no. 856709). The work of University of Cartagena was supported by Grant AEI/FEDER TEC2016-76465-C2-1-R (AIM) and Grant FPU14/03701.
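    The first vrAIn stage, projecting high-dimensional contexts into a latent representation, can be sketched with a small autoencoder (an illustrative reconstruction, not the authors' code); the context dimension of 128, the latent dimension of 8, and the synthetic training data are assumptions.

```python
import torch
import torch.nn as nn

CONTEXT_DIM, LATENT_DIM = 128, 8  # assumed sizes, not from the paper

class ContextAutoencoder(nn.Module):
    """Compresses a traffic/SNR context vector into a low-dim latent code."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(CONTEXT_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, CONTEXT_DIM))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ContextAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
contexts = torch.rand(1024, CONTEXT_DIM)  # synthetic stand-in for real traces

for epoch in range(50):  # train by reconstruction
    loss = nn.functional.mse_loss(model(contexts), contexts)
    opt.zero_grad(); loss.backward(); opt.step()

latent = model.encoder(contexts[:1])  # what the resource controller consumes
print(latent.shape)  # torch.Size([1, 8])
```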

    vrAIn: Deep Learning based Orchestration for Computing and Radio Resources in vRANs

    In Press.

    The virtualization of radio access networks (vRAN) is the last milestone in the NFV revolution. However, the complex relationship between computing and radio dynamics makes vRAN resource control particularly daunting. We present vrAIn, a resource orchestrator for vRANs based on deep reinforcement learning. First, we use an autoencoder to project high-dimensional context data (traffic and channel quality patterns) into a latent representation. Then, we use a deep deterministic policy gradient (DDPG) algorithm based on an actor-critic neural network structure and a classifier to map contexts into resource control decisions.

    We have evaluated vrAIn experimentally, using an open-source LTE stack over different platforms, and via simulations over a production RAN. Our results show that: (i) vrAIn provides savings in computing capacity of up to 30% over CPU-agnostic methods; (ii) it improves the probability of meeting QoS targets by 25% over static policies; (iii) upon computing capacity under-provisioning, vrAIn improves throughput by 25% over state-of-the-art schemes; and (iv) it performs close to an optimal offline oracle. To our knowledge, this is the first work that thoroughly studies the computational behavior of vRANs and the first approach to a model-free solution that does not need to assume any particular platform or context.

    This work was partially supported by the European Commission through Grant No. 856709 (5Growth) and Grant No. 856950 (5G-TOURS); by Science Foundation Ireland (SFI) through Grant No. 17/CDA/4760; and AEI/FEDER through project AIM under Grant No. TEC2016-76465-C2-1-R. Furthermore, the work is closely related to the EU project DAEMON (Grant No. 101017109)
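    Complementing the autoencoder sketch above, the second stage, mapping encoded contexts to resource decisions, can be sketched as a DDPG-style actor-critic pair (again an illustrative reconstruction, not the authors' code); the action pair (CPU share, MCS cap), all dimensions, and the omission of the replay buffer, target networks, and classifier are simplifying assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM, ACTION_DIM = 8, 2  # encoded context -> (cpu_share, mcs_cap)

actor = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, ACTION_DIM), nn.Sigmoid())  # actions squashed to [0, 1]

critic = nn.Sequential(
    nn.Linear(LATENT_DIM + ACTION_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1))  # Q(context, action)

context = torch.rand(1, LATENT_DIM)  # latent code from the autoencoder stage
action = actor(context)              # continuous resource control decision
q_value = critic(torch.cat([context, action], dim=1))

# one DDPG actor update: ascend the critic's value of the actor's own action
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
actor_loss = -critic(torch.cat([context, actor(context)], dim=1)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

print(action.detach().numpy(), float(q_value))
```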