
    Random Linear Network Coding for 5G Mobile Video Delivery

    An exponential increase in mobile video delivery will continue with the demand for higher-resolution, multi-view, and large-scale multicast video services. The novel fifth-generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both the 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement, and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail the novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research. Comment: Invited paper for the Special Issue "Network and Rateless Coding for Video Streaming", MDPI Information.
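
    As a concrete picture of the packet-level RLNC mentioned above, the sketch below encodes a generation of source packets into random linear combinations and decodes them by Gaussian elimination. It works over GF(2) for brevity (practical RLNC typically uses a larger field such as GF(2^8), which makes rank deficiencies rarer); the function names and example packets are our own illustration, not code from the paper.

```python
import random

def rlnc_encode(packets, num_coded):
    """Generate coded packets as random GF(2) combinations of the sources."""
    coded = []
    for _ in range(num_coded):
        coeffs = [random.randint(0, 1) for _ in packets]
        payload = bytearray(len(packets[0]))
        for c, pkt in zip(coeffs, packets):
            if c:  # XOR is addition over GF(2)
                payload = bytearray(a ^ b for a, b in zip(payload, pkt))
        coded.append((coeffs, bytes(payload)))
    return coded

def rlnc_decode(coded, num_source):
    """Gaussian elimination over GF(2); None if the combinations lack full rank."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(num_source):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None  # rank deficient: need more coded packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[i][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(num_source)]

src = [b"alpha---", b"bravo---", b"charlie-"]   # equal-length source packets
coded = rlnc_encode(src, num_coded=6)           # redundancy against erasures
print(rlnc_decode(coded, len(src)) == src)      # True with high probability
```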

    Will SDN be part of 5G?

    For many, this is no longer a valid question, and the case is considered settled, with SDN/NFV (Software-Defined Networking/Network Function Virtualization) providing the inevitable innovation enablers that solve many outstanding management issues in 5G. However, given the monumental task of softwarizing the radio access network (RAN) while 5G is just around the corner and some companies have already started unveiling their 5G equipment, there is a very realistic concern that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies the important obstacles in the way and looks at the state of the art of the relevant solutions. This survey differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed both within and outside the SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, latency of general-purpose processors (GPPs), and the additional security vulnerabilities that softwarization brings to the RAN. We also provide a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology. Comment: 33 pages, 10 figures.

    UAV-Empowered Disaster-Resilient Edge Architecture for Delay-Sensitive Communication

    Fifth-generation (5G) communication systems will enable enhanced mobile broadband, ultra-reliable low-latency, and massive connectivity services. The broadband and low-latency services are indispensable to public safety (PS) communication during natural or man-made disasters. Recently, the Third Generation Partnership Project Long Term Evolution (3GPP LTE) has emerged as a promising candidate to enable broadband PS communications. In this article, we first present six major PS-LTE enabling services and the current status of PS-LTE in the 3GPP releases. Then, we discuss the spectrum bands allocated for PS-LTE in major countries by the International Telecommunication Union (ITU). Finally, we propose a disaster-resilient three-layered architecture for PS-LTE (DR-PSLTE). This architecture consists of a software-defined network (SDN) layer to provide centralized control, an unmanned aerial vehicle (UAV) cloudlet layer to facilitate edge computing or to enable an emergency communication link, and a radio access layer. The proposed architecture is flexible and combines the benefits of SDN and edge computing to efficiently meet the delay requirements of various PS-LTE services. Numerical results verify that, under the proposed DR-PSLTE architecture, delay is reduced by 20% compared with the conventional centralized computing architecture.
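
    As a rough intuition for the reported delay reduction, the toy model below compares a centralized path (radio + backhaul + core processing) with an edge path that terminates at the UAV cloudlet. All delay figures are invented placeholders, not values from the DR-PSLTE evaluation.

```python
# Illustrative one-way delay budget (all numbers are made-up placeholders,
# not values from the paper): edge processing at the UAV cloudlet removes
# the backhaul traversal from the path of delay-sensitive traffic.
RADIO_MS = 4.0        # UE <-> access node over the radio link
BACKHAUL_MS = 3.0     # access node <-> central data center (absent at the edge)
PROC_EDGE_MS = 2.0    # processing on the UAV cloudlet (modest hardware)
PROC_CORE_MS = 1.0    # processing in the central cloud (faster servers)

centralized = RADIO_MS + BACKHAUL_MS + PROC_CORE_MS   # 8.0 ms
edge = RADIO_MS + PROC_EDGE_MS                        # 6.0 ms
print(f"reduction: {100 * (1 - edge / centralized):.0f}%")  # 25% under these assumptions
```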

    Design and Experimental Validation of a Software-Defined Radio Access Network Testbed with Slicing Support

    Network slicing is a fundamental feature of 5G systems to partition a single network into a number of segregated logical networks, each optimized for a particular type of service, or dedicated to a particular customer or application. The realization of network slicing is particularly challenging in the Radio Access Network (RAN) part, where multiple slices can be multiplexed over the same radio channel and Radio Resource Management (RRM) functions shall be used to split the cell radio resources and achieve the expected behaviour per slice. In this context, this paper describes the key design and implementation aspects of a Software-Defined RAN (SD-RAN) experimental testbed with slicing support. The testbed has been designed consistently with the slicing capabilities and related management framework established by 3GPP in Release 15. The testbed is used to demonstrate the provisioning of RAN slices (e.g., preparation, commissioning, and activation phases) and the operation of the implemented RRM functionality for slice-aware admission control and scheduling.
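
    To make the slice-aware admission control idea concrete, here is a minimal sketch in which each RAN slice owns a quota of the cell's physical resource blocks (PRBs) and a session is admitted only if its demand fits the remaining quota. The class and field names are our own simplification, not the testbed's actual RRM interfaces.

```python
from dataclasses import dataclass

@dataclass
class RanSlice:
    name: str
    prb_quota: int        # PRBs of the cell reserved for this slice
    prb_used: int = 0

    def admit(self, prb_demand: int) -> bool:
        """Admit the session only if it fits the slice's remaining quota."""
        if self.prb_used + prb_demand <= self.prb_quota:
            self.prb_used += prb_demand
            return True
        return False      # reject: admitting would starve the other slices

CELL_PRBS = 100           # total PRBs of the shared radio channel
slices = {"embb": RanSlice("embb", prb_quota=70),
          "urllc": RanSlice("urllc", prb_quota=30)}
assert sum(s.prb_quota for s in slices.values()) <= CELL_PRBS

print(slices["urllc"].admit(20))   # True: 20 of 30 PRBs now in use
print(slices["urllc"].admit(20))   # False: would exceed the 30-PRB quota
```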

    An Innovative RAN Architecture for Emerging Heterogeneous Networks: The Road to the 5G Era

    The global demand for mobile-broadband data services has experienced phenomenal growth over the last few years, driven by the rapid proliferation of smart devices such as smartphones and tablets. This growth is expected to continue unabated, as mobile data traffic is predicted to grow anywhere from 20 to 50 times over the next 5 years. Exacerbating the problem, this unprecedented surge in smartphone usage, characterized by frequent short on/off connections and mobility, generates a heavy signaling traffic load in the network ("signaling storms"). This consumes a disproportionate amount of network resources, compromising network throughput and efficiency, and in extreme cases can cause Third-Generation (3G) or 4G (Long-Term Evolution (LTE) and LTE-Advanced (LTE-A)) cellular networks to crash. As the conventional approaches of improving spectral efficiency and/or allocating additional spectrum are fast approaching their theoretical limits, there is a growing consensus that current 3G and 4G (LTE/LTE-A) cellular radio access technologies (RATs) won't be able to meet the anticipated growth in mobile traffic demand. To address these challenges, the wireless industry and standardization bodies have initiated a roadmap for the transition from 4G to 5G cellular technology, with a key objective to increase capacity by 1000x by 2020. Even though the technology hasn't been invented yet, the hype around 5G networks has begun to bubble. The emerging consensus is that 5G is not a single technology, but rather a synergistic collection of interworking technical innovations and solutions that collectively address the challenge of traffic growth. The core emerging ingredients that are widely considered the key enabling technologies to realize the envisioned 5G era, listed in order of importance, are: 1) heterogeneous networks (HetNets); 2) flexible backhauling; 3) efficient traffic offload techniques; and 4) self-organizing networks (SONs). The anticipated solutions delivered by efficient interworking/integration of these enabling technologies are not simply about throwing more resources and/or spectrum at the challenge. The envisioned solution, however, requires radically different cellular RAN and mobile core architectures that efficiently and cost-effectively deploy and manage radio resources, as well as offload mobile traffic from the overloaded core network. The main objective of this thesis is to address the key techno-economic challenges facing the transition from current Fourth-Generation (4G) cellular technology to the 5G era by proposing a novel, high-risk, revolutionary direction for the design and implementation of the envisioned 5G cellular networks. The ultimate goal is to explore the potential and viability of cost-effectively implementing the 1000x capacity challenge while continuing to provide an adequate mobile broadband experience to users.
Specifically, this work proposes and devises a novel PON-based HetNet mobile backhaul RAN architecture that: 1) holistically addresses the key techno-economic hurdles facing the implementation of the envisioned 5G cellular technology, specifically the backhauling and signaling challenges; and 2) enables, for the first time to the best of our knowledge, the support of efficient, ground-breaking mobile data and signaling offload techniques, which significantly enhance the performance of both the HetNet-based RAN and LTE-A's core network (Evolved Packet Core (EPC), per the 3GPP standard), ensure that core network equipment is used more productively, and moderate 5G's evolving signaling growth while optimizing its impact. To address the backhauling challenge, we propose a cost-effective fiber-based small cell backhaul infrastructure that leverages existing fibered and powered facilities associated with a PON-based Fiber-to-the-Node/Home (FTTN/FTTH) residential access network. Due to the sharing of existing valuable fiber assets, the proposed PON-based backhaul architecture, in which the small cells are collocated with existing FTTN remote terminals (optical network units (ONUs)), is much more economical than conventional point-to-point (PTP) fiber backhaul designs. A fully distributed ring-based EPON architecture is utilized here as the fiber-based HetNet backhaul. The techno-economic merits of the proposed PON-based FTTx access HetNet RAN architecture versus those of a traditional 4G LTE-A RAN are thoroughly examined and quantified. Specifically, we quantify the techno-economic merits of the proposed PON-based HetNet backhaul by comparing its performance against that of a conventional fiber-based PTP backhaul architecture as a benchmark. It is shown that the purposely selected ring-based PON architecture, along with the supporting distributed control plane, enables the proposed PON-based FTTx RAN architecture to support several salient networking features that collectively and significantly enhance the overall performance of both the HetNet-based RAN and the 4G LTE-A core (EPC) compared to the typical fiber-based PTP backhaul architecture, in terms of handoff capability, signaling overhead, overall network throughput and latency, and QoS support. It is also shown that the proposed HetNet-based RAN architecture is not only capable of providing the typical macro-cell offloading gain (RAN gain) but can also provide a ground-breaking EPC offloading gain. The simulation results indicate that the overall capacity of the proposed HetNet scales with the number of deployed small cells, thanks to LTE-A's advanced interference management techniques. For example, if there are 10 deployed outdoor small cells for every macrocell in the network, the overall capacity gain will be approximately 10-11x over a macro-only network. To reach the 1000x capacity goal, numerous small cells, including 3G, 4G, and WiFi (femtos, picos, metros, relays, remote radio heads, distributed antenna systems), need to be deployed indoors and outdoors, at all possible venues (residences and enterprises).
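
The quoted 10-11x figure follows from a simple scaling argument: the macrocell contributes 1x and, with effective interference management, each small cell adds roughly one additional cell's worth of capacity. A short sketch of that arithmetic, with the per-cell efficiency factor as our own assumption:

```python
# Capacity gain of a HetNet over a macro-only network: the macrocell
# contributes 1x, and each small cell adds per_cell_efficiency x (an
# assumed factor; it approaches 1 with good interference management).
def hetnet_capacity_gain(small_cells_per_macro, per_cell_efficiency=1.0):
    return 1 + small_cells_per_macro * per_cell_efficiency

print(hetnet_capacity_gain(10))   # 11, consistent with the quoted 10-11x
```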

    Integration of LoRa Wide Area Network with the 5G Test Network

    The global communication network is going through a major transformation, from conventional to more versatile and diversified network approaches. With the advent of virtualization and cloud technology, information technology (IT) is merging with telecommunications to alter the conventional approaches of traditional proprietary networking. From radio to network and applications, the existing infrastructure lacks several features that we wish to be part of fifth-generation mobile networks (5G). With support for a large number of applications, the Internet of Things (IoT) will bring a major evolution by creating a comfortable, flexible, and automated environment for end users. A network capable of supporting radio protocols on top of basic networking protocols, when blended with a platform that can generate IoT use cases, can make the expectations of 5G a reality. Low Power Wide Area Network (LPWAN) technologies can be utilized together with other emerging and suitable technologies for IoT applications. To implement a network where all of these technologies can be deployed virtually to serve their applications within a single cloud, Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) are introduced. The 5G Test Network (5GTN), a testbed for implementing and testing 5G features in real time, is deployed on a virtual platform that allows other technologies to be added for IoT applications. LoRa Wide Area Network (LoRaWAN), an IoT enabler technology, is chosen out of several possibilities to be integrated with the 5GTN in order to test the feasibility and capability of IoT implementations. Using a MultiConnect Conduit as the gateway, the integration is realized by establishing a Point-to-Point Protocol (PPP) connection with the eNodeB. Once the connection is established, LoRa packets are forwarded to the ThingWorx IoT cloud, and responses can be received by the end-devices from that IoT cloud using the Message Queuing Telemetry Transport (MQTT) protocol. Wireshark, an open-source packet analyser, is then used to verify successful transmission of packets to ThingWorx over the 5GTN's default packet routes.
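
    For illustration, the uplink leg of the described integration (gateway to IoT cloud over MQTT) could look like the sketch below, which uses the real paho-mqtt client API; the broker host, topic, and payload are placeholders, not the actual ThingWorx/5GTN endpoints.

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "iot-cloud.example.org"   # placeholder for the ThingWorx broker
TOPIC = "lorawan/uplink/device-01"      # placeholder uplink topic

# Connect to the broker and publish one decoded LoRa payload as JSON.
client = mqtt.Client()
client.connect(BROKER_HOST, 1883, keepalive=60)
payload = json.dumps({"devEUI": "0000000000000001", "temperature": 21.5})
client.publish(TOPIC, payload, qos=1)   # QoS 1: at-least-once delivery
client.disconnect()
```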

    End-to-end Mobile Network Slicing

    Wireless networks have gone through several years of evolution and will continue to evolve in order to cater for the varying needs of their users. These demands are expected to grow even further in the future, both in size and in variability. Hence, 5G technology needs to consider, at the core of its architecture, this variability in service demands and the potential data explosion that could accompany it. For the 5G mobile network to handle these foreseen challenges, network slicing is seen as a promising path to tread, as its standardization is progressing. In light of the proposed 5G network architecture, and to support end-to-end mobile network slicing, we implemented radio access network (RAN) slicing over a virtualized evolved Node B (eNodeB) and ensured that multiple core network slices could communicate through it successfully. Our results, challenges, and further research directions are presented in this thesis report.
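
    A toy model of the RAN-side dispatching this requires: the virtualized eNodeB steers each UE toward the core-network slice it requests. The slice table and endpoints below are invented for illustration; real slice selection is driven by NSSAI/PLMN information per 3GPP, not a hard-coded map.

```python
# Hypothetical mapping from a requested slice name to the core endpoint
# serving it; addresses and names are placeholders for illustration only.
CORE_SLICES = {
    "embb": "10.0.1.10:36412",   # placeholder endpoint of the broadband slice
    "iot":  "10.0.2.10:36412",   # placeholder endpoint of the IoT slice
}

def select_core_slice(requested_slice: str) -> str:
    """Return the core endpoint for the UE's requested slice."""
    try:
        return CORE_SLICES[requested_slice]
    except KeyError:
        return CORE_SLICES["embb"]   # default slice when the request is unknown

print(select_core_slice("iot"))      # 10.0.2.10:36412
```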

    Improved handover signaling for 5G networks

    Mobility management is a critical component for any new wireless standard to become ubiquitous. While 4G LTE and prior wireless standards utilized vendor-specific hardware and software on which mobility management (MM) functionality was implemented, recent 5G architecture releases by 3GPP indicate a complete departure from this model. 3GPP, in Release 14 and the upcoming Release 15, has stressed the utilization of Software-Defined Networking (SDN) and Network Function Virtualization (NFV) as the drivers of 5G technology. Consequently, new challenges related to MM, and specifically to handover management, will be encountered owing to the inter-working setup between the 5G Next-Gen Core (NGC) and the Evolved Packet System (EPS) core. In this paper, we exploit SDN to enhance the signaling of the handover (HO) methods proposed by 3GPP. Although the proposed approach can be applied to any HO method, in this paper we specifically evaluate the scenario wherein a dedicated interface between the Mobility Management Entity (MME) in the EPC and the Access and Mobility Management Function (AMF) in the 5G NGC, i.e., N26 as specified by 3GPP, is non-existent. Such a scenario is reasonable during the initial deployment phases of 5G networks. We show that the proposed mechanism is more efficient than the 3GPP handover strategy in terms of latency, transmission, and processing costs.
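
    The kind of cost comparison reported here is often expressed analytically: total handover cost as transmission cost over all signaling hops plus processing cost at the traversed nodes. The sketch below shows that structure with made-up message counts and unit costs, not the paper's values.

```python
# Analytical handover (HO) signaling cost, in the common form
# cost = (messages x hops x per-hop tx cost) + (messages x processing cost).
# All message counts and unit costs below are illustrative placeholders.
def handover_cost(num_messages, hops_per_message, tx_unit, proc_unit):
    transmission = num_messages * hops_per_message * tx_unit
    processing = num_messages * proc_unit
    return transmission + processing

# Without the N26 interface, the 3GPP procedure needs more signaling messages
# than an SDN-assisted one that programs the path from a central controller.
baseline = handover_cost(num_messages=12, hops_per_message=3, tx_unit=1.0, proc_unit=0.5)
sdn_based = handover_cost(num_messages=8, hops_per_message=2, tx_unit=1.0, proc_unit=0.5)
print(baseline, sdn_based)   # 42.0 20.0 under these assumed parameters
```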

    Performance Comparison of Dual Connectivity and Hard Handover for LTE-5G Tight Integration in mmWave Cellular Networks

    Millimeter-wave (mmWave) communications are expected to play a major role in the fifth generation of mobile networks. They offer potential multi-gigabit throughput and ultra-low radio latency, but at the same time suffer from high isotropic pathloss and a coverage area much smaller than that of LTE macrocells. In order to address these issues, highly directional beamforming and very high-density deployment of mmWave base stations have been proposed. This thesis aims to improve the reliability and performance of the 5G network by studying its tight and seamless integration with the current LTE cellular network. In particular, the LTE base stations can provide a coverage layer for 5G mobile terminals, because they operate at microwave frequencies, which are less sensitive to blockage and have lower pathloss. This document is a copy of the Master's Thesis carried out by Mr. Michele Polese under the supervision of Dr. Marco Mezzavilla and Prof. Michele Zorzi. It proposes an LTE-5G tight integration architecture, based on mobile terminals' dual connectivity to the LTE and 5G radio access networks, and evaluates the new network procedures needed to support it. Moreover, this new architecture is implemented in the ns-3 simulator, and a thorough simulation campaign is conducted in order to evaluate its performance with respect to the baseline of hard handover between LTE and 5G.
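
    As a toy illustration of the dual-connectivity idea, the sketch below keeps the UE anchored to LTE and routes data over the best mmWave cell only while its SINR stays usable, falling back to the LTE coverage layer instead of performing a hard inter-RAT handover. The threshold and measurement values are illustrative assumptions.

```python
# Assumed usability threshold for a mmWave link; below it the link is
# treated as blocked and traffic falls back to the LTE coverage layer.
MMWAVE_SINR_THRESHOLD_DB = -5.0

def select_serving_links(mmwave_sinr_db):
    """Given SINR reports per mmWave cell, pick the data-plane path."""
    best_cell = max(mmwave_sinr_db, key=mmwave_sinr_db.get, default=None)
    if best_cell is not None and mmwave_sinr_db[best_cell] >= MMWAVE_SINR_THRESHOLD_DB:
        return {"anchor": "LTE", "data": best_cell}
    return {"anchor": "LTE", "data": "LTE"}   # no usable mmWave cell

print(select_serving_links({"gnb-1": 3.2, "gnb-2": -7.8}))  # data via gnb-1
```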