93 research outputs found

    Private 5G and its Suitability for Industrial Networking

    5G was and is still surrounded by many promises and buzzwords, such as the famous 1 ms, real-time, and Ultra-Reliable and Low-Latency Communications (URLLC). This was partly intended to attract vertical industries as new customers for mobile networks to be deployed in their factories. With the approval of the regulatory authorities, companies have deployed their own private 5G networks to test new use cases enabled by 5G. What has been missing, apart from all the marketing, is the knowledge of what 5G can really do. Private 5G networks are envisioned to enable new use cases with strict latency requirements, such as robot control. This work has examined in great detail the capabilities of the current 5G Release 15 as a private network, and in particular its suitability for time-critical communications. For that, a testbed was designed to measure One-Way Delays (OWDs) and Round-Trip Times (RTTs) with high accuracy. The measurements were conducted in 5G Non-Standalone (NSA) and Standalone (SA) networks and are the first published results. The evaluation revealed results that were not obvious or identified by previous work. For example, a strong impact of the packet rate on the resulting OWD and RTT was found. It was also found that typically 95% of the SA downlink end-to-end packet delays are in the range of 4 ms to 10 ms, indicating a fairly wide spread of packet delays, with the Inter-Packet Delay Variation (IPDV) between consecutive packets distributed in the millisecond range. Surprisingly, it also seems to matter for the RTT from which direction, i.e. Downlink (DL) or Uplink (UL), a round-trip communication was initiated. The Inter-Arrival Time (IAT) of packets also plays an important role in the resulting RTT distribution. These findings demonstrate the need to critically examine 5G and any successors in terms of their real-time capabilities. In addition to the end-to-end OWD and RTT, the delays caused by 4G and 5G Core processing have been investigated as well. Current state-of-the-art 4G and 5G Core implementations exhibit long-tailed delay distributions. To overcome such limitations, modern packet processing frameworks have been evaluated in terms of their respective tail latency. The hardware-based solution was able to process packets with deterministic delay, but the software-based solutions also achieved soft real-time results. These results allow the selection of the right technology for a use case depending on its tail-latency requirements. In summary, many insights into the suitability of 5G for time-critical communications were gained from the study of the current 5G Release 15. The measurement framework, analysis methods, and results will inform the further development and refinement of private 5G campus networks for industrial use cases.
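    As a rough illustration of the kind of analysis such a measurement framework enables, the following sketch computes OWD, IPDV and the central 95% delay range from synchronized send/receive timestamps; the timestamps, packet counts and delay values are synthetic and purely illustrative, not the thesis' data or code.

import numpy as np

def delay_statistics(tx_times, rx_times):
    tx = np.asarray(tx_times, dtype=float)
    rx = np.asarray(rx_times, dtype=float)
    owd = rx - tx                                 # one-way delay per packet (s)
    ipdv = np.diff(owd)                           # delay difference between consecutive packets
    lo, hi = np.percentile(owd, [2.5, 97.5])      # range holding the central 95% of delays
    return owd * 1e3, ipdv * 1e3, (lo * 1e3, hi * 1e3)

# Synthetic example: 1000 packets sent at a 1 ms inter-arrival time.
tx = np.arange(1000) * 1e-3
rx = tx + np.random.uniform(4e-3, 10e-3, size=1000)
owd_ms, ipdv_ms, (lo_ms, hi_ms) = delay_statistics(tx, rx)
print(f"95% of OWDs within {lo_ms:.1f} to {hi_ms:.1f} ms, "
      f"max |IPDV| {abs(ipdv_ms).max():.1f} ms")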

    Analysis of a contention-based approach over 5G NR for Federated Learning in an Industrial Internet of Things scenario

    The growing interest in new applications involving co-located heterogeneous requirements, such as the Industrial Internet of Things (IIoT) paradigm, poses unprecedented challenges to uplink wireless transmissions. Dedicated scheduling has been the fundamental approach used by mobile radio systems for uplink transmissions, where the network assigns contention-free resources to users based on buffer-related information. The usage of contention-based transmissions was discussed by the 3rd Generation Partnership Project (3GPP) as an alternative approach for reducing the uplink latency characterizing dedicated scheduling. Nevertheless, the contention-based approach was not considered for standardization in LTE due to limited performance gains. However, 5G NR introduced a different radio frame which could change the performance achievable with a contention-based framework, although this has not yet been evaluated. This paper aims to fill this gap. We present a contention-based design for uplink transmissions in a 5G NR IIoT scenario. We provide an up-to-date analysis via near-product 3GPP-compliant network simulations of the achievable application-level performance with simultaneous Ultra-Reliable Low Latency Communications (URLLC) and Federated Learning (FL) traffic, where the contention-based scheme is applied to the FL traffic. The investigation also involves two separate mechanisms for handling retransmissions of lost or collided transmissions. Numerical results show that, under some conditions, the proposed contention-based design provides benefits over dedicated scheduling when considering FL upload/download times, and does not significantly degrade the performance of URLLC.
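    To make the dedicated-versus-contention trade-off concrete, the following hedged sketch estimates the per-UE collision probability when UEs pick uplink resources uniformly at random; it is a toy Monte Carlo, not the paper's 3GPP-compliant simulator, and the numbers of UEs and resources are arbitrary assumptions.

import random

def collision_probability(n_ues, n_resources, trials=10_000):
    collided = 0
    for _ in range(trials):
        choices = [random.randrange(n_resources) for _ in range(n_ues)]
        counts = {}
        for c in choices:
            counts[c] = counts.get(c, 0) + 1
        # A UE collides if its chosen resource was also chosen by another UE.
        collided += sum(1 for c in choices if counts[c] > 1)
    return collided / (trials * n_ues)

for n in (2, 5, 10):
    print(n, "UEs on 12 resources ->", round(collision_probability(n, 12), 3))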

    Design, implementation and experimental evaluation of a network-slicing aware mobile protocol stack

    With the arrival of new generation mobile networks, we currently observe a paradigm shift, where monolithic network functions running on dedicated hardware are now implemented as software pieces that can be virtualized on general purpose hardware platforms. This paradigm shift stands on the softwarization of network functions and the adoption of virtualization techniques. Network Function Virtualization (NFV) comprises the softwarization of network elements and the virtualization of these components. It brings multiple advantages: (i) flexibility, allowing easy management of the virtual network functions (VNFs) (deploy, start, stop or update); (ii) efficiency, as resources can be adequately consumed thanks to the increased flexibility of the network infrastructure; and (iii) reduced costs, due to the ability to share hardware resources. To this end, multiple challenges must be addressed to effectively leverage all these benefits. Network Function Virtualization envisioned the concept of the virtual network, resulting in a key enabler of 5G network flexibility: Network Slicing. This new paradigm represents a new way to operate mobile networks where the underlying infrastructure is "sliced" into logically separated networks that can be customized to the specific needs of the tenant. This approach also enables instantiating VNFs at different locations of the infrastructure, choosing their optimal placement based on parameters such as the requirements of the service traversing the slice or the available resources. This decision process is called orchestration and involves all the VNFs within the same network slice. The orchestrator is the entity in charge of managing network slices. Hands-on experiments on network slicing are essential to understand its benefits and limits, and to validate the design and deployment choices. While some network slicing prototypes have been built for Radio Access Networks (RANs), leveraging the wide availability of radio hardware and open-source software, there is currently no open-source suite for end-to-end network slicing available to the research community. Similarly, orchestration mechanisms must be evaluated as well to properly validate theoretical solutions addressing diverse aspects such as resource assignment or service composition. This thesis contributes to the study of the evolution of mobile networks regarding their softwarization and cloudification. We identify software patterns for network function virtualization, including the definition of a novel mobile architecture that squeezes the virtualization architecture by splitting functionality into atomic functions. Then, we design, implement and evaluate an open-source network slicing implementation. Our results show per-slice customization without paying a price in terms of performance, while also providing a slicing implementation to the research community. Moreover, we propose a framework to flexibly re-orchestrate a virtualized network, allowing on-the-fly re-orchestration without disrupting ongoing services. This framework can greatly improve performance under changing conditions. We evaluate the resulting performance in a realistic network slicing setup, showing the feasibility and advantages of flexible re-orchestration.
Lastly, and following the required re-design of network functions envisioned during the study of the evolution of mobile networks, we present a novel pipeline architecture specifically engineered for 4G/5G Physical Layers virtualized over clouds. The proposed design follows two objectives: resiliency upon unpredictable computing, and parallelization to increase efficiency in multi-core clouds. To this end, we employ techniques such as tight deadline control, jitter-absorbing buffers, predictive Hybrid Automatic Repeat Request, and congestion control. Our experimental results show that our cloud-native approach attains > 95% of the theoretical spectrum efficiency in hostile environments where state-of-the-art architectures collapse. This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Thesis committee: Francisco Valera Pintor (President), Vincenzo Sciancalepore (Secretary), Xenofon Fouka (Member)
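    The jitter-absorbing buffers mentioned above can be illustrated with a minimal sketch: items that arrive with variable delay are released at fixed playout instants, so the next pipeline stage sees a deterministic cadence at the cost of a small added delay. The function name, the 1 ms period and the arrival times are assumptions for illustration, not the thesis' implementation.

def playout_schedule(arrival_times_ms, period_ms=1.0, buffer_ms=1.0):
    released, late = [], []
    for n, arrival in enumerate(arrival_times_ms):
        deadline = n * period_ms + buffer_ms      # fixed release instant for item n
        if arrival <= deadline:
            released.append((n, deadline))        # held in the buffer until its deadline
        else:
            late.append(n)                        # missed the slot; left to HARQ/concealment
    return released, late

arrivals_ms = [0.2, 1.4, 2.9, 3.1, 5.6]           # jittery arrivals, nominal 1 ms spacing
ok, late = playout_schedule(arrivals_ms)
print("released on time:", [n for n, _ in ok], "late:", late)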

    A Distributed Neural Linear Thompson Sampling Framework to Achieve URLLC in Industrial IoT

    Industrial Internet of Things (IIoT) networks will provide Ultra-Reliable Low-Latency Communication (URLLC) to support critical processes underlying the production chains. However, standard protocols for allocating wireless resources may not optimize the latency-reliability trade-off, especially for uplink communication. For example, centralized grant-based scheduling can ensure almost zero collisions, but introduces delays in the way resources are requested by the User Equipments (UEs) and granted by the gNB. In turn, distributed scheduling (e.g., based on random access), in which UEs autonomously choose the resources for transmission, may lead to many collisions, especially when the traffic load increases. In this work we propose DIStributed combinatorial NEural linear Thompson Sampling (DISNETS), a novel scheduling framework that combines the best of both worlds. By leveraging a feedback signal from the gNB and reinforcement learning, the UEs are trained to autonomously optimize their uplink transmissions by selecting the available resources to minimize the number of collisions, without additional message exchange to/from the gNB. DISNETS is a distributed, multi-agent adaptation of the Neural Linear Thompson Sampling (NLTS) algorithm, which has been further extended to admit multiple parallel actions. We demonstrate the superior performance of DISNETS in addressing URLLC in IIoT scenarios compared to other baselines.
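    For intuition, the following is a simplified, single-agent sketch of linear Thompson Sampling choosing one of several uplink resources from a context vector; DISNETS itself is a distributed, multi-agent neural-linear variant with parallel actions, which this toy version does not reproduce, and the reward model below is an assumption.

import numpy as np

class LinearTS:
    def __init__(self, n_arms, dim, noise=0.5, prior=1.0):
        self.A = [np.eye(dim) / prior for _ in range(n_arms)]   # per-arm precision matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        self.noise = noise

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            cov = np.linalg.inv(A)
            theta = np.random.multivariate_normal(cov @ b, self.noise ** 2 * cov)
            scores.append(x @ theta)              # sampled expected reward of this resource
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy loop: reward is 1 when the chosen resource did not collide (simulated at random).
ts = LinearTS(n_arms=4, dim=3)
for _ in range(200):
    x = np.random.rand(3)
    arm = ts.select(x)
    ts.update(arm, x, reward=float(np.random.rand() > 0.3))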

    Scheduling in 5G networks: Developing a 5G cell capacity simulator.

    The fifth generation of mobile communications (5G) is becoming a reality through the new 3GPP (3rd Generation Partnership Project) technology designed to meet a wide range of requirements. On the one hand, it must be able to support high bit rates and ultra-low latency services, and on the other hand, it should be able to connect a massive number of devices with loose bandwidth and delay requirements. Such diversity in service requirements demands a high degree of flexibility in the radio interface design. As LTE (Long Term Evolution) was originally designed with the evolution of Mobile Broadband (MBB) services in mind, it does not provide enough flexibility to optimally multiplex the different types of services envisioned by 5G. This is because there is no single radio interface configuration able to fit all the different service requirements. As a consequence, 5G networks are being designed to support different radio interface configurations and mechanisms to multiplex these different services, with their different configurations, in the same available spectrum. This concept is known as Network Slicing, a 5G key feature that needs to be supported end to end in the network (Radio Access, Transport and Core Network). In this way, 5G Radio Access Networks (RANs) add the problem of allocating resources to different services on top of the traditional problem of allocating resources to users. In this context, as the scheduling of both users and services is left to vendor implementation by the standard, an extensive field of research is open.
Different simulation tools have been developed for research purposes during the last years. However, as not many of them are free and easy to use, and particularly none of the available ones supports Network Slicing at the RAN level, this work presents a new simulator as its main contribution. Py5cheSim is a simple, flexible and open-source simulator based on Python and specially oriented to testing different scheduling algorithms for the different types of 5G services through a simple implementation of the RAN Slicing feature. Its architecture allows new scheduling algorithms to be developed and integrated in an easy and straightforward way. Furthermore, the use of Python provides enough versatility to even use Machine Learning tools for the development of new scheduling algorithms. The present work introduces the main 5G RAN design concepts that were taken as a baseline to develop the simulation tool. It also describes its design and implementation choices, followed by the executed validation tests and their main results. Additionally, this work presents a few use-case examples to show the developed tool's potential, providing a primary analysis of traditional scheduling algorithms for the new types of services envisioned by the technology. Finally, it concludes on the developed tool's contribution and the example results, along with possible research lines and improvements for future versions.
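    As a flavour of what such a simulator lets you compare, here is a toy slice-aware scheduler (not Py5cheSim code): PRBs are split between slices by a fixed quota and the UEs of each slice are served round-robin; the slice names, quotas and PRB counts are illustrative assumptions.

def schedule_tti(slices, total_prbs):
    """slices: {slice_name: {"quota": fraction of PRBs, "ues": [ue ids with pending data]}}."""
    allocation = {}
    for name, cfg in slices.items():
        prbs = int(total_prbs * cfg["quota"])
        ues = cfg["ues"]
        if not ues or prbs == 0:
            continue
        share, extra = divmod(prbs, len(ues))     # even split, first UEs absorb the remainder
        for i, ue in enumerate(ues):
            allocation[ue] = share + (1 if i < extra else 0)
    return allocation

slices = {
    "eMBB": {"quota": 0.7, "ues": ["ue1", "ue2", "ue3"]},
    "URLLC": {"quota": 0.3, "ues": ["ue4"]},
}
print(schedule_tti(slices, total_prbs=100))   # {'ue1': 24, 'ue2': 23, 'ue3': 23, 'ue4': 30}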

    An Analytical Latency Model and Evaluation of the Capacity of 5G NR to Support V2X Services Using V2N2V Communications

    5G has been designed to support applications such as connected and automated driving. To this aim, 5G includes a highly flexible New Radio (NR) interface that can be configured to utilize different subcarrier spacings (SCS), slot durations, scheduling, and retransmission mechanisms. This flexibility can be exploited to support advanced V2X services with strict latency and reliability requirements using V2N2V (Vehicle-to-Network-to-Vehicle) communications instead of direct or sidelink V2V (Vehicle-to-Vehicle) communications. To analyze this possibility, this paper presents a novel analytical model that estimates the latency of 5G at the radio network level. The model accounts for the use of different numerologies (SCS, slot durations and Cyclic Prefixes), modulation and coding schemes, full-slots or mini-slots, semi-static and dynamic scheduling, different retransmission mechanisms, and broadcast/multicast or unicast transmissions. The model has first been used to analyze the impact of different 5G NR radio configurations on the latency. We then identify the radio configurations and scenarios in which 5G NR can satisfy the latency and reliability requirements of V2X services using V2N2V communications. This paper considers cooperative lane changes as a case study. The results show that 5G can support advanced V2X services at the radio network level using V2N2V communications under certain conditions that depend on the radio configuration, bandwidth, service requirements and cell traffic load.
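    One ingredient of such a latency model can be sketched in a few lines: the NR slot duration follows from the numerology mu as 1 ms / 2^mu, and a crude one-shot latency estimate adds frame alignment, transmission time and a processing margin. The constants below are illustrative assumptions, not the paper's calibrated model.

def slot_duration_ms(mu):
    return 1.0 / (2 ** mu)                 # 1, 0.5, 0.25, 0.125 ms for mu = 0..3

def one_shot_latency_ms(mu, symbols=14, processing_ms=0.3):
    slot = slot_duration_ms(mu)
    alignment = slot / 2                   # average wait for the next slot boundary
    transmission = slot * symbols / 14     # full slot (14 symbols) or a shorter mini-slot
    return alignment + transmission + processing_ms

for mu in range(4):
    print(f"mu={mu}: slot={slot_duration_ms(mu):.3f} ms, "
          f"one-shot latency ~{one_shot_latency_ms(mu):.3f} ms")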

    Radio resource allocation for overlay D2D-based vehicular communications in future wireless networks

    Next-generation cellular networks are envisioned to enable widespread Device-to-Device (D2D) communication, i.e., direct communication between cellular devices. For many applications in the D2D domain, deterministic communication latency and high reliability are of exceptionally high importance. The proximity service provided by D2D communication is a promising feature that can fulfil the reliability and latency requirements of emerging vertical applications. One of the prominent vertical applications is vehicular communication, in which vehicles disseminate safety-critical messages directly via D2D communication, thereby helping to reduce road accidents and fatalities. Radio resource allocation techniques for D2D communication, through which valuable radio resources are allocated more efficiently, have recently gained much attention in industry and academia. In addition to resource allocation, energy efficiency is increasingly important and is usually considered in conjunction with the resource allocation approach. This dissertation studies different avenues of radio resource allocation and energy efficiency techniques in Long Term Evolution (LTE) and New Radio (NR) Vehicle-to-Everything (V2X) communications. In the following, we briefly describe the core ideas of this study. D2D applications are mostly characterized by relatively small traffic payloads, and in LTE the radio resources cannot be utilized efficiently due to the coarse granularity of resource allocation. In particular, with semi-persistent scheduling, where a radio resource is scheduled for a longer time in overlay D2D, the radio resources are underutilized for such applications.
To address this problem, a hierarchical radio resource management scheme, i.e., a sub-granting scheme, is proposed, by which nearby cellular users, the so-called beneficiary users, are allowed to reuse the unused radio resources indicated by sub-granting signaling. The proposed scheme is evaluated and compared with shortened Transmission Time Interval (TTI) schemes in terms of cell throughput. Then, the beneficiary user selection problem is investigated and cast as a maximization problem of uplink cell throughput subject to reliability and latency requirements. A heuristic centralized algorithm, Dedicated Sub-Granting Radio Resource (DSGRR), is proposed to address the original beneficiary user selection problem. The simulation results and analysis show the superiority of the proposed DSGRR algorithm over random beneficiary user selection in terms of cell throughput in a scenario with stationary users. Further, the beneficiary user selection problem is investigated in a dynamic scenario in which all users are moving. We evaluate the sub-granting signaling overhead due to mobility in DSGRR, and then a distributed heuristic algorithm, Open Sub-Granting Radio Resource (OSGRR), is proposed and compared with the DSGRR algorithm and with the no-sub-granting case. Simulation results show improved cell throughput for OSGRR compared with the other algorithms. Moreover, the overhead incurred by OSGRR is lower than that of DSGRR, while the achieved cell throughput remains close to the maximum achievable uplink cell throughput. Also, joint resource allocation and energy efficiency in autonomous resource selection in NR, i.e., Mode 2, is examined. The autonomous resource selection is formulated as a ratio of sum-rate to energy consumption. The objective is to minimize the power consumption of battery-powered users subject to reliability and latency requirements. A heuristic algorithm, Density of Traffic-based Resource Allocation (DeTRA), is proposed to solve the problem. The proposed algorithm splits the resource pool based on the traffic density per traffic type. Random selection is then mandated to be performed on the dedicated resource pool upon arrival of aperiodic traffic. The simulation results show that the proposed algorithm achieves the same Packet Reception Ratio (PRR) as the sensing-based algorithm. In addition, per-user power consumption is reduced, and consequently the energy efficiency is improved by applying the DeTRA algorithm. The research in this study leverages radio resource allocation techniques in LTE-based D2D communications to utilize radio resources more efficiently. In addition, it paves the way to further study how power-saving users can optimally select radio resources with minimum energy consumption in NR V2X communications.
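    The pool-splitting idea attributed to DeTRA above can be sketched as follows, under the assumption that the pool is divided in proportion to the per-traffic-type densities and that aperiodic arrivals draw a resource at random from their dedicated sub-pool; the parameter names and values are illustrative.

import random

def split_pool(resources, density_periodic, density_aperiodic):
    total = density_periodic + density_aperiodic
    n_periodic = round(len(resources) * density_periodic / total)
    return resources[:n_periodic], resources[n_periodic:]

def select_aperiodic(aperiodic_pool):
    return random.choice(aperiodic_pool)   # random selection on the dedicated sub-pool

pool = list(range(20))                     # 20 candidate resources in the selection window
periodic_pool, aperiodic_pool = split_pool(pool, density_periodic=3.0, density_aperiodic=1.0)
print(len(periodic_pool), "periodic /", len(aperiodic_pool), "aperiodic resources")
print("aperiodic packet uses resource", select_aperiodic(aperiodic_pool))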

    Load Balancing for Multiplexing Gains of BBU Pool in 5G Cloud Radio Access Networks

    Cloud Radio Access Network (C-RAN) is an architecture for 5G cellular networks that improves coverage, increases data rates and enhances signaling efficiency. In the C-RAN architecture of 5G cellular networks, the Base Band processing Units (BBUs) of multiple Base Stations (BSs) are centralized in the cloud. Remote Radio Heads (RRHs) residing at the cell sites retain only the antennas and other radio frequency functions. The centralized cloud-based system provides the higher-layer protocols of the LTE BS, which run on a pool of BBUs on top of a pool of computing resources, i.e., General Purpose Processors (GPPs). The centralized BBU pool and the RRHs are connected with high-speed optical fiber links. Each BBU maps to a GPP that has a specified processing capacity and processes In-phase/Quadrature (IQ) samples received from the RRHs deployed at the cell sites. A single BBU can serve multiple RRHs within the limits imposed by the processing capacity of its GPP. C-RAN helps telecom service providers cut down their CAPEX and OPEX by reducing the power consumption of BBUs.
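    The multiplexing gain of the BBU pool can be illustrated with a simple first-fit mapping of RRHs onto GPP-backed BBUs subject to a processing-capacity limit; this is a generic bin-packing baseline for illustration, not the load-balancing scheme evaluated in the work, and the load values are made up.

def assign_rrhs(rrh_loads, gpp_capacity):
    bbus = []                                        # each BBU is a list of (rrh, load) pairs
    for rrh, load in rrh_loads.items():
        for bbu in bbus:
            if sum(l for _, l in bbu) + load <= gpp_capacity:
                bbu.append((rrh, load))              # fits on an already-open BBU
                break
        else:
            bbus.append([(rrh, load)])               # open a new BBU on a fresh GPP
    return bbus

loads = {"rrh1": 40, "rrh2": 35, "rrh3": 50, "rrh4": 20, "rrh5": 30}
for i, bbu in enumerate(assign_rrhs(loads, gpp_capacity=100)):
    print(f"BBU{i}:", bbu)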

    Ethernet Fronthaul and Time-Sensitive Networking for 5G and Beyond Mobile Networks

    Ethernet has been proposed as the transport technology for the future fronthaul network. For this purpose, a model of a switched Ethernet architecture is developed and presented in order to characterise the performance of an Ethernet mobile fronthaul network. The effects of traditional queuing regimes, including Strict Priority (SP) and Weighted Round Robin (WRR), on the delay and delay variation of LTE streams in the presence of background Ethernet traffic are investigated using frame inter-arrival delay statistics. The results show the effect of different background traffic rates and frame sizes on the mean and Standard Deviation (STD) of the LTE traffic frame inter-arrival delay, and the importance of selecting the most suitable queuing regime based on the priority level and time sensitivity of the different traffic types. While SP can be used with traffic types that require low delay and Frame Delay Variation (FDV), this queuing regime does not guarantee that the time-sensitive traffic will not encounter an increase in delay and FDV as a result of contention, due to the lack of pre-emptive mechanisms. Thus, the need for a queuing regime that can overcome the limitations of traditional queuing regimes is shown. To this extent, Time Sensitive Networking (TSN) for an Ethernet fronthaul network is modelled. Different modelling approaches for a Time Aware Shaper (TAS) based on the IEEE 802.1Qbv standard in Opnet/Riverbed are presented. The TAS model is assumed to be the scheduling entity in an Ethernet-based fronthaul network model, located in both the Ethernet switches and the traffic sources. TAS operation with and without queuing at the end stations is also presented. The performance of the TAS is compared to that of SP and WRR and is quantified through the FDV of the high-priority traffic when this contends with lower-priority traffic. The results show that with the TAS, contention-induced FDV can be minimized or even completely removed. Furthermore, variations in the processing times of networking equipment, due to the envisaged softwarization of the next generation mobile network, which can lead to time variation in the generation instances of traffic in the Ethernet fronthaul network (both in the end-nodes and in switches/aggregators), have been considered in the TAS design. The need for a Global Scheduler (GS) and Software Defined Networking (SDN) with TAS is also discussed. An Upper Physical layer functional Split (UPS), specifically a pre-resource mapper split, for an evolved Ethernet fronthaul network is modelled. Using this model and by incorporating additional traffic sources, an investigation of the frame delay and FDV limitations in this evolved fronthaul is carried out. The results show that contention in Ethernet switch output ports causes an increase in the delay and FDV beyond the proposed specifications for the UPS and other time-sensitive traffic, such as legacy Common Public Radio Interface (CPRI)-type traffic. While TAS can significantly reduce or even remove FDV for UPS traffic and CPRI-type traffic, it is shown that TAS design aspects have to carefully consider the different transmission characteristics, especially the transmission pattern, of the contending traffic flows. For this reason, different traffic allocations within TAS window sections are proposed. Furthermore, it is demonstrated that increased link rates will be important in enabling longer fronthaul fibre spans (fibre spans of more than ten kilometres with 10 Gigabit Ethernet links).
The results also show that using multiple hops (Ethernet switches/aggregators) in the network can reduce the amount of UPS traffic that can be received within the delay and FDV specifications. As a result, careful consideration should be given to the fibre span length and the number of hops in the fronthaul network.
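    A minimal sketch of the IEEE 802.1Qbv time-aware shaping behaviour discussed above: a repeating gate control list opens each traffic class only within its window, and a frame may start transmission only if it also finishes before the gate closes. The cycle length, window split and queue numbers below are illustrative assumptions, not the modelled fronthaul configuration.

def gate_open(gcl, cycle_us, queue, now_us, tx_time_us):
    """gcl: list of (start_us, end_us, {open queues}) entries covering one cycle."""
    t = now_us % cycle_us
    for start, end, queues in gcl:
        if start <= t < end and queue in queues:
            return t + tx_time_us <= end             # frame must finish before the gate closes
    return False

# One 125 us cycle: the first 30 us are reserved for the high-priority fronthaul queue 7.
gcl = [(0, 30, {7}), (30, 125, {0, 1, 2})]
print(gate_open(gcl, 125, queue=7, now_us=130, tx_time_us=12))   # True: 5 us into the window
print(gate_open(gcl, 125, queue=7, now_us=150, tx_time_us=12))   # False: 12 us no longer fit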