17 research outputs found

    Understanding the Computational Requirements of Virtualized Baseband Units using a Programmable Cloud Radio Access Network Testbed

    Cloud Radio Access Network (C-RAN) is emerging as a transformative architecture for the next generation of mobile cellular networks. In C-RAN, the Baseband Unit (BBU) is decoupled from the Base Station (BS) and consolidated in a centralized processing center. While the potential benefits of C-RAN have been studied extensively from a theoretical perspective, only a few works address the system implementation issues and characterize the computational requirements of the virtualized BBU. In this paper, a programmable C-RAN testbed is presented in which the BBU is virtualized using the OpenAirInterface (OAI) software platform, and the eNodeB and User Equipment (UEs) are implemented using USRP boards. Extensive experiments have been performed in an FDD downlink LTE emulation system to characterize the performance and computing resource consumption of the BBU under various conditions. It is shown that the processing time and CPU utilization of the BBU increase with the channel resources and with the Modulation and Coding Scheme (MCS) index, and that the CPU utilization percentage can be well approximated as a linearly increasing function of the maximum downlink data rate. These results provide real-world insights into the characteristics of the BBU in terms of computing resource and power consumption, and may serve as inputs for the design of efficient resource-provisioning and allocation strategies in C-RAN systems. Comment: In Proceedings of the IEEE International Conference on Autonomic Computing (ICAC), July 201
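
    A minimal sketch (not from the paper) of how such a linear model could be fitted and then used for provisioning; the measurement pairs and the per-core capacity figure are hypothetical placeholders:

        import numpy as np

        # (max downlink data rate in Mbps, BBU CPU utilization in %) -- hypothetical samples
        rate_mbps = np.array([5.0, 10.0, 20.0, 35.0, 50.0, 70.0])
        cpu_pct = np.array([8.0, 12.0, 20.0, 33.0, 46.0, 63.0])

        # Least-squares line: CPU% ~= slope * rate + intercept
        slope, intercept = np.polyfit(rate_mbps, cpu_pct, 1)
        print(f"CPU% ~= {slope:.3f} * rate_mbps + {intercept:.3f}")

        # Such a model could feed a resource-provisioning rule, e.g. cores
        # needed for a target aggregate downlink rate (hypothetical rule).
        def cores_needed(total_rate_mbps, per_core_capacity_pct=100.0):
            return int(np.ceil((slope * total_rate_mbps + intercept) / per_core_capacity_pct))

        print(cores_needed(200.0))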

    Performance Analysis of OpenAirInterface System Emulation

    With the rapid growth of mobile networks, the radio access network becomes more and more costly to deploy, operate, maintain and upgrade. The most effective answer to this problem lies in the centralization and virtualization of the eNodeBs, a solution known as Cloud RAN and one of the key topics in the development of fifth-generation networks. Within this context, OpenAirInterface is a promising emulation tool that can be used for prototyping innovative scheduling algorithms that make the most of the new architecture. In this work, we first describe the emulation environment of OpenAirInterface and its scheduling framework, and we use it to implement two MAC schedulers. We then validate these schedulers and perform a thorough profiling of OpenAirInterface, in terms of both memory occupancy and execution time. Our results show that OpenAirInterface can be effectively used for prototyping scheduling algorithms in emulated LTE networks.
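
    To illustrate the kind of MAC scheduler such a framework is used to prototype, here is a round-robin resource-block allocator in plain Python; it is not the OpenAirInterface scheduling API, and the UE identifiers, backlogs and bytes-per-RB figure are hypothetical:

        from collections import deque

        def round_robin_schedule(backlog_bytes, n_rbs, bytes_per_rb=100):
            """Allocate n_rbs resource blocks for one subframe; returns {ue: rb_count}."""
            alloc = {ue: 0 for ue in backlog_bytes}
            active = deque(ue for ue, b in backlog_bytes.items() if b > 0)
            while n_rbs > 0 and active:
                ue = active.popleft()
                alloc[ue] += 1
                n_rbs -= 1
                backlog_bytes[ue] -= bytes_per_rb
                if backlog_bytes[ue] > 0:
                    active.append(ue)  # UE still has queued data: re-queue it
            return alloc

        print(round_robin_schedule({"ue0": 350, "ue1": 120, "ue2": 0}, n_rbs=6))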

    Real-Time Localization Using Software Defined Radio

    Service providers make use of cost-effective wireless solutions to identify, localize, and possibly track users through their carried mobile devices (MDs) to support added services, such as geo-advertisement, security, and management. Indoor and outdoor hotspot areas play a significant role for such services. However, GPS does not work in many of these areas. To solve this problem, service providers leverage available indoor radio technologies, such as WiFi, GSM, and LTE, to identify and localize users. We focus our research on passive services provided by third parties, which are responsible for (i) data acquisition and (ii) processing, and on network-based services, where (i) and (ii) are done inside the serving network. To better understand the parameters that affect indoor localization, we investigate several factors that affect indoor signal propagation for both Bluetooth and WiFi technologies. For GSM-based passive services, we first developed a data acquisition module: a GSM receiver that can overhear GSM uplink messages transmitted by MDs while remaining invisible. A set of optimizations was made to the receiver components to support wideband capture of the GSM spectrum while operating in real time. Processing the wide GSM spectrum is made possible by a proposed distributed processing approach over an IP network. Then, to overcome the lack of information about tracked devices' radio settings, we developed two novel localization algorithms that rely on proximity-based solutions to estimate devices' locations in real environments. Given the challenging effects of the indoor environment on radio signals, such as NLOS reception and multipath propagation, we developed an original algorithm to detect and remove contaminated radio signals before they are fed to the localization algorithm. To improve the localization algorithm, we extended our work with a hybrid approach that uses both WiFi and GSM interfaces to localize users. For network-based services, we used a software implementation of an LTE base station to develop our algorithms, which characterize the indoor environment before applying the localization algorithm. Experiments were conducted without any special hardware, any prior knowledge of the indoor layout, or any offline calibration of the system.
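
    As a hedged sketch of one simple proximity-style estimator (not the thesis's algorithms), a weighted centroid of anchor positions can be computed with weights that grow with received signal strength; the anchor coordinates and RSSI readings below are hypothetical:

        def weighted_centroid(anchors):
            """anchors: list of (x, y, rssi_dbm); returns estimated (x, y)."""
            # Convert dBm to linear power so stronger (closer) anchors dominate.
            weighted = [(x, y, 10 ** (rssi / 10.0)) for x, y, rssi in anchors]
            total = sum(w for _, _, w in weighted)
            est_x = sum(x * w for x, _, w in weighted) / total
            est_y = sum(y * w for _, y, w in weighted) / total
            return est_x, est_y

        # Three anchors at known positions with hypothetical RSSI readings.
        est = weighted_centroid([(0.0, 0.0, -50.0), (10.0, 0.0, -70.0), (0.0, 10.0, -65.0)])
        print(f"estimated position: ({est[0]:.2f}, {est[1]:.2f})")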

    Design, implementation and experimental evaluation of a network-slicing aware mobile protocol stack

    International Mention in the doctoral degree. With the arrival of new-generation mobile networks, we currently observe a paradigm shift, where monolithic network functions running on dedicated hardware are now implemented as software pieces that can be virtualized on general-purpose hardware platforms. This paradigm shift stands on the softwarization of network functions and the adoption of virtualization techniques. Network Function Virtualization (NFV) comprises the softwarization of network elements and the virtualization of these components. It brings multiple advantages: (i) flexibility, allowing easy management of the virtual network functions (VNFs) (deploy, start, stop or update); (ii) efficiency, as resources can be adequately consumed due to the increased flexibility of the network infrastructure; and (iii) reduced costs, due to the ability to share hardware resources. Multiple challenges must be addressed to effectively leverage all these benefits. Network Function Virtualization envisioned the concept of the virtual network, resulting in a key enabler of 5G network flexibility: Network Slicing. This new paradigm represents a new way to operate mobile networks, where the underlying infrastructure is "sliced" into logically separated networks that can be customized to the specific needs of the tenant. This approach also enables VNFs to be instantiated at different locations of the infrastructure, choosing their optimal placement based on parameters such as the requirements of the service traversing the slice or the available resources. This decision process is called orchestration and involves all the VNFs within the same network slice; the orchestrator is the entity in charge of managing network slices. Hands-on experiments on network slicing are essential to understand its benefits and limits, and to validate design and deployment choices. While some network slicing prototypes have been built for Radio Access Networks (RANs), leveraging the wide availability of radio hardware and open-source software, there is currently no open-source suite for end-to-end network slicing available to the research community. Similarly, orchestration mechanisms must be evaluated to properly validate theoretical solutions addressing diverse aspects such as resource assignment or service composition. This thesis contributes to the study of the evolution of mobile networks with regard to their softwarization and cloudification. We identify software patterns for network function virtualization, including the definition of a novel mobile architecture that condenses the virtualization architecture by splitting functionality into atomic functions. We then design, implement and evaluate an open-source network slicing implementation. Our results show per-slice customization without paying a price in performance, and provide a slicing implementation to the research community. Moreover, we propose a framework to flexibly re-orchestrate a virtualized network, allowing on-the-fly re-orchestration without disrupting ongoing services. This framework can greatly improve performance under changing conditions. We evaluate the resulting performance in a realistic network slicing setup, showing the feasibility and advantages of flexible re-orchestration.
Lastly, following the re-design of network functions envisioned during the study of the evolution of mobile networks, we present a novel pipeline architecture specifically engineered for 4G/5G Physical Layers virtualized over clouds. The proposed design pursues two objectives: resiliency to unpredictable computing, and parallelization to increase efficiency on multi-core clouds. To this end, we employ techniques such as tight deadline control, jitter-absorbing buffers, predictive Hybrid Automatic Repeat Request, and congestion control. Our experimental results show that our cloud-native approach attains > 95% of the theoretical spectrum efficiency in hostile environments where state-of-the-art architectures collapse. This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: President, Francisco Valera Pintor; Secretary, Vincenzo Sciancalepore; Member, Xenofon Fouka
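
    As a toy illustration of two of the mechanisms named above, the following sketch combines a jitter-absorbing reorder buffer with deadline control; the class, the millisecond deadlines and the drop policy are illustrative assumptions, not the thesis's implementation:

        import heapq

        class JitterBuffer:
            """Reorders out-of-order subframes; releases only those within deadline."""
            def __init__(self):
                self._heap = []  # (sequence number, deadline in ms)

            def push(self, seq, deadline_ms):
                heapq.heappush(self._heap, (seq, deadline_ms))

            def pop_ready(self, now_ms):
                # Deadline control: skip subframes that are already late; a real
                # PHY would then count on HARQ retransmission to recover them.
                while self._heap:
                    seq, deadline = heapq.heappop(self._heap)
                    if now_ms <= deadline:
                        return seq
                    print(f"subframe {seq} missed its deadline, dropped")
                return None

        buf = JitterBuffer()
        buf.push(2, deadline_ms=1.6)      # arrives first, out of order
        buf.push(1, deadline_ms=0.8)
        print(buf.pop_ready(now_ms=0.9))  # 1 is late and dropped; 2 is released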

    Enabling Technology and Proof-of-Concept Evaluation for RAN Architectural Migration toward 5G and Beyond Mobile Systems

    In this paper, we address two major issues regarding the architectural migration of the radio access network (RAN). The first is an overview and explicit interpretation of how different enabling technologies across generations are introduced and coordinated for the migration from a distributed, to a centralized, and then to a virtualized RAN for 5G-and-beyond cellular systems; the second is a proof-of-concept (PoC) evaluation to understand the feasibility of these enabling technologies. In doing so, we first give an overview of the major enabling technologies and discuss their impact on RAN migration. We then evaluate the PoC of major enabling technologies proposed mainly for 5G C-RAN, namely functional split options, TDM-PON systems, and virtualization techniques, using a mobile CORD-based prototype in LTE systems with ideal fronthauls. PoC experimental results with split options 2 and 5 are presented and compared using TCP and UDP traffic. Experimentally, it is shown that with virtualized BBUs the throughput improvement is significant for TCP as compared to UDP: mean throughputs are about 30%-40% higher in the downlink and 40%-45% higher in the uplink with split 5 than with split 2. Finally, we point out the major experimental limitations of the PoC and future research directions.
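
    The paper does not specify its measurement tooling; as a hedged sketch, TCP and UDP mean throughput under two split configurations could be compared with a small iperf3 wrapper like the one below, where the server address, duration and JSON field paths are assumptions to verify against your iperf3 version:

        import json
        import subprocess

        def mean_throughput_mbps(server, udp=False, seconds=10):
            cmd = ["iperf3", "-c", server, "-t", str(seconds), "-J"]  # -J: JSON output
            if udp:
                cmd += ["-u", "-b", "0"]  # UDP, unthrottled ("-b 0")
            result = subprocess.run(cmd, capture_output=True, text=True, check=True)
            report = json.loads(result.stdout)
            summary = report["end"]["sum" if udp else "sum_received"]
            return summary["bits_per_second"] / 1e6

        # Hypothetical BBU-side server address; run once per split configuration.
        for label, udp in (("TCP", False), ("UDP", True)):
            print(label, round(mean_throughput_mbps("10.0.0.1", udp=udp), 1), "Mbps")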

    Coded Computation Against Processing Delays for Virtualized Cloud-Based Channel Decoding

    The uplink of a cloud radio access network architecture is studied in which decoding at the cloud takes place via network function virtualization on commercial off-the-shelf servers. To mitigate the impact of straggling decoders in this platform, a novel coding strategy is proposed, whereby the cloud re-encodes the received frames via a linear code before distributing them to the decoding processors. Transmission of a single frame is considered first, and upper bounds on the resulting frame unavailability probability as a function of the decoding latency are derived by assuming a binary symmetric channel for uplink communications. The analysis is then extended to account for random frame arrival times. In this case, the trade-off between average decoding latency and frame error rate is studied for two different queuing policies, whereby the servers carry out per-frame decoding or continuous decoding, respectively. Numerical examples demonstrate that the bounds are useful tools for code design and that coding is instrumental in obtaining a desirable compromise between decoding latency and reliability. Comment: 11 pages and 12 figures, Submitted
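
    A Monte Carlo sketch of the underlying intuition (not the paper's analytical bounds): with an (n, k) linear code over frames, decoding completes once any k of the n servers finish, so frame unavailability at a deadline is governed by the k-th order statistic of the per-server runtimes. The shifted-exponential runtime model and all parameters below are illustrative assumptions:

        import random

        def frame_unavailability(n, k, deadline, shift=1.0, rate=1.0, trials=20000):
            """P(k-th fastest of n servers finishes after the deadline)."""
            misses = 0
            for _ in range(trials):
                runtimes = sorted(shift + random.expovariate(rate) for _ in range(n))
                if runtimes[k - 1] > deadline:  # k-th order statistic too slow
                    misses += 1
            return misses / trials

        # Coded (10, 8) versus uncoded 8-of-8 decoding at the same deadline.
        print("coded  :", frame_unavailability(10, 8, deadline=3.0))
        print("uncoded:", frame_unavailability(8, 8, deadline=3.0))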

    Multiframe coded computation for distributed uplink channel decoding

    The latest 5G technology in wireless communication has led to increasing demand for higher data rates and lower latencies. The overall latency of a cloud radio access network is greatly affected by the decoding latency in the uplink channel. Various proposed solutions suggest using network function virtualization (NFV), the process of decoupling network functions from hardware appliances. This provides the flexibility to implement distributed computing and network coding to effectively reduce the decoding latency and improve the reliability of the system. To keep the system cost-effective, commercial off-the-shelf (COTS) devices are used, which are susceptible to random runtimes and server failures. NFV coded computation has been shown in previous work to provide a significant improvement in straggler mitigation. This work focuses on reducing the overall decoding time while improving the fault tolerance of the system. The overall latency of the system can be reduced by improving the computation efficiency and processing speed in a distributed communication network. To achieve this, multiframe NFV coded computation is implemented, which exploits the differing runtimes of the servers. In multiframe coded computation, each server continues to decode coded frames of the original message until the message is decoded. Individual servers can make up for straggling servers or server failures, improving the fault tolerance and recovery time of the system. As a consequence, the overall decoding latency of a message is significantly reduced. This is supported by simulation results, which show the improvement in system performance in comparison to a standard NFV coded system.
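
    A small event-driven simulation of the multiframe idea described above, offered as a sketch rather than the thesis's simulator: each server immediately starts another coded frame when it finishes one, until k frame-decodings have completed in total. The server count, k and the exponential service model are hypothetical:

        import heapq
        import random

        def multiframe_latency(n_servers, k_needed, mean_service=1.0):
            """Time until k coded-frame decodings finish, with work-conserving servers."""
            # Event heap of (finish time, server id); a server that finishes
            # immediately starts decoding another coded frame.
            heap = [(random.expovariate(1 / mean_service), s) for s in range(n_servers)]
            heapq.heapify(heap)
            done = 0
            while True:
                t, s = heapq.heappop(heap)
                done += 1
                if done == k_needed:
                    return t
                heapq.heappush(heap, (t + random.expovariate(1 / mean_service), s))

        trials = 5000
        avg = sum(multiframe_latency(10, 8) for _ in range(trials)) / trials
        print(f"mean decoding latency with 10 servers, k = 8: {avg:.3f}")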

    Increased energy efficiency in LTE networks through reduced early handover

    “A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy”. Long Term Evolution (LTE) has been widely adopted by mobile operators and was introduced as a solution to fulfil the ever-growing data requirements of users (UEs) in cellular networks. Growing data demands occupy resource blocks over prolonged time intervals, resulting in higher dynamic power consumption at the downlink of the base station. Serving UE requests therefore comes at the cost of increased power consumption, which directly affects operators' operational expenditure. It also contributes to increased CO2 emissions, and hence to global warming. According to research, global Information and Communication Technology (ICT) systems consume approximately 1200 to 1800 terawatt-hours of electricity annually. The mobile communication industry accounts for more than one third of this ICT power consumption, owing to increased data requirements, numbers of UEs and coverage areas. In terms of global warming, telecommunication is responsible for 0.3 to 0.4 percent of worldwide CO2 emissions. Moreover, user data volume is expected to increase by a factor of 10 every five years, with an associated 16 to 20 percent increase in energy consumption, further enlarging the sector's contribution to global warming. This research work focuses on the importance of energy saving in LTE. It initially proposes a bandwidth-expansion-based energy-saving scheme that combines two resource blocks into a single super resource block (RB), thereby reducing the Physical Downlink Control Channel (PDCCH) overhead; the decreased PDCCH overhead reduces dynamic power consumption by up to 28 percent. Subsequently, a novel reduced early handover (REHO) scheme is proposed and combined with bandwidth expansion to form an enhanced energy-saving scheme. System-level simulations are performed to investigate the performance of REHO; it was found that reduced early handover provides around 35% better energy saving than the LTE standard in a 3rd Generation Partnership Project (3GPP) based scenario. Since there is a direct relationship between energy consumption, CO2 emissions and vendors' operational expenditure (OPEX), the reduced power consumption and increased energy efficiency of REHO make it a step towards greener communication, with a smaller CO2 footprint and reduced operational expenditure. The main idea of REHO is to initiate handovers earlier than the LTE standard and to turn off the freed resource blocks. The time difference (in Transmission Time Intervals, TTIs) between a REHO-based early handover and a standard LTE handover is therefore the key component of the achieved energy saving, and is estimated through Euclidean geometry. Moreover, overall system efficiency is investigated through the analysis of numerous performance-related parameters in REHO and the LTE standard, leading to key findings that guide vendors on the choice of energy saving in relation to radio link failure and other important parameters.
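
    A back-of-the-envelope sketch of the energy accounting described above: the TTIs gained by the earlier handover follow from the geometry of the two trigger points and the UE speed, and multiply into an energy figure. Every number and the simple straight-line model here are hypothetical, not the thesis's values:

        import math

        TTI_S = 0.001  # one LTE Transmission Time Interval (1 ms)

        def gained_ttis(reho_trigger, std_trigger, ue_speed_mps):
            """TTIs gained by handing over at reho_trigger instead of std_trigger."""
            return math.dist(reho_trigger, std_trigger) / ue_speed_mps / TTI_S

        def energy_saved_joules(n_rbs_freed, power_per_rb_w, ttis):
            return n_rbs_freed * power_per_rb_w * ttis * TTI_S

        # Hypothetical trigger points 30 m apart along the UE path, 15 m/s UE.
        ttis = gained_ttis((100.0, 0.0), (130.0, 0.0), ue_speed_mps=15.0)
        print(f"{ttis:.0f} TTIs gained, "
              f"~{energy_saved_joules(10, 0.8, ttis):.1f} J saved per handover")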