
    Enabling 5G Edge Native Applications


    Resource slicing in virtual wireless networks: a survey

    New architectural and design approaches for radio access networks have appeared with the introduction of network virtualization in the wireless domain. One of these approaches splits the wireless network infrastructure into isolated virtual slices, each with its own management, requirements, and characteristics. Despite the advances in wireless virtualization, many issues remain open regarding the resource allocation and isolation of wireless slices. Because of the dynamics and shared nature of the wireless medium, guaranteeing that the traffic on one slice will not affect the traffic on the others has proven difficult. In this paper, we focus on a detailed definition of the problem and discuss its challenges. We also review existing works that deal with the problem, analyzing how new trends such as software-defined networking and network function virtualization can assist in slicing. Finally, we describe some open research challenges on this topic. Peer reviewed. Postprint (author's final draft).

    5GAuRA. D3.3: RAN Analytics Mechanisms and Performance Benchmarking of Video, Time Critical, and Social Applications

    5GAuRA deliverable D3.3. This is the final deliverable of Work Package 3 (WP3) of the 5GAuRA project, providing a report on the project's developments on the topics of Radio Access Network (RAN) analytics and application performance benchmarking. The focus of this deliverable is to extend and deepen the methods and results provided in the 5GAuRA deliverable D3.2 in the context of specific use scenarios of video, time-critical, and social applications. In this respect, four major topics of WP3 of 5GAuRA, namely edge-cloud enhanced RAN architecture, machine learning assisted Random Access Channel (RACH) approach, Multi-access Edge Computing (MEC) content caching, and active queue management, are put forward. Specifically, this document provides a detailed discussion of the service level agreement between tenant and service provider in the context of network slicing in Fifth Generation (5G) communication networks. Network slicing is considered a key enabler of the 5G communication system. Legacy telecommunication networks have been providing various services to all kinds of customers through a single network infrastructure. In contrast, by deploying network slicing, operators are now able to partition one network into individual slices, each with its own configuration and Quality of Service (QoS) requirements. Many applications across industry open new business opportunities with new business models, and every application instance requires an independent slice with its own network functions and features, whereby every single slice needs an individual Service Level Agreement (SLA). In D3.3, we propose a comprehensive end-to-end structure of the SLA between the tenant and the service provider of a sliced 5G network, which balances the interests of both sides. The proposed SLA defines the reliability, availability, and performance of delivered telecommunication services in order to ensure that the right information is delivered to the right destination at the right time, safely and securely. We also discuss the metrics of a slice-based network SLA, such as throughput, penalty, cost, revenue, profit, and QoS-related metrics, which are, in the view of 5GAuRA, critical features of the agreement. Peer reviewed. Postprint (published version).
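
    The slice-level metrics mentioned above (throughput, penalty, cost, revenue, profit) can be illustrated with a simple per-slice accounting sketch. The snippet below is only an illustrative model, not the SLA structure proposed in D3.3: the field names, the per-violation penalty rule, and the throughput guarantee are hypothetical assumptions introduced for this example.

        from dataclasses import dataclass

        @dataclass
        class SliceSLA:
            # Hypothetical per-slice SLA record; field names are illustrative only.
            guaranteed_throughput_mbps: float
            revenue_per_month: float
            operating_cost_per_month: float
            penalty_per_violation: float

        def monthly_profit(sla: SliceSLA, measured_throughput_mbps: list) -> float:
            # Assumed accounting rule: profit = revenue - cost - penalties,
            # charging one penalty for each measurement below the guarantee.
            violations = sum(1 for t in measured_throughput_mbps
                             if t < sla.guaranteed_throughput_mbps)
            return (sla.revenue_per_month
                    - sla.operating_cost_per_month
                    - violations * sla.penalty_per_violation)

        # Example: a slice guaranteeing 50 Mbps with two measured violations.
        sla = SliceSLA(50.0, 1000.0, 600.0, 25.0)
        print(monthly_profit(sla, [55.2, 48.9, 61.0, 47.3]))  # 1000 - 600 - 2*25 = 350.0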

    Integrated Access and Backhaul for 5G and Beyond (6G)

    Enabling network densification to support coverage-limited millimeter wave (mmWave) frequencies is one of the main requirements for 5G and beyond. It is challenging to connect a high number of base stations (BSs) to the core network via a transport network. Although fiber provides high-rate, reliable backhaul links, it requires a noteworthy investment for trenching and installation and can also take considerable deployment time. Wireless backhaul, on the other hand, enables fast installation and flexibility, at the cost of data rate and sensitivity to environmental effects. For these reasons, fiber and wireless backhaul have been the dominant backhaul technologies for decades. Integrated access and backhaul (IAB), where, along with cellular access services, part of the available spectrum is used for backhauling, is a promising wireless solution for backhauling in 5G and beyond. To this end, in this thesis we evaluate, analyze, and optimize IAB networks from various perspectives. Specifically, we analyze IAB networks and develop effective algorithms to improve service coverage probability. In contrast to fiber-connected setups, an IAB network may be affected by, e.g., blockage, tree foliage, and rain loss. Thus, a variety of aspects such as the effects of tree foliage, rain loss, and blockage are evaluated, and the network performance when part of the network is not IAB-backhauled is analyzed. Furthermore, we evaluate the effect of deployment optimization on the performance of IAB networks. First, in Paper A, we introduce and analyze IAB as an enabler for network densification. Then, we study the IAB network from different aspects of mmWave-based communications: we study the network performance for both urban and rural areas, considering the impacts of blockage, tree foliage, and rain. Furthermore, performance comparisons are made between IAB networks and networks in which all or part of the small BSs are fiber-connected. Following the analysis, it is observed that IAB may be a good backhauling solution with high flexibility and low time-to-market. The second part of the thesis focuses on improving the service coverage probability by carrying out topology optimization in IAB networks, focusing on mmWave communication for different parameters such as blockage, tree foliage, and antenna gain. In Paper B, we study topology optimization and routing in IAB networks from different perspectives. Thereby, we design efficient genetic algorithm (GA)-based methods for IAB node distribution and non-IAB backhaul link placement. Furthermore, we study the effect of routing in cases with temporal blockage. Finally, we briefly review the recent standardization developments, i.e., 3GPP Rel-16 as well as the Rel-17 discussions on routing. As the results show, with proper network deployment planning, IAB is an attractive solution to densify networks for 5G and beyond. Finally, we focus on improving the performance of IAB networks with constrained deployment optimization. In Paper C, we consider various IAB network models while presenting different algorithms for constrained deployment optimization. Here, the constraints come from either inter-IAB distance limitations or geographical restrictions. As we show, proper network planning can considerably improve the service coverage probability of IAB networks with deployment constraints.
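
    The GA-based topology optimization mentioned in Paper B is not reproduced here, but the general shape of such a method can be sketched. The following is a toy GA-style search that selects a few IAB node positions from a candidate grid to maximize the fraction of covered users; the scenario, fitness function, coverage radius, and GA parameters are all illustrative assumptions, not the algorithms developed in the thesis.

        import math
        import random

        random.seed(0)

        # Hypothetical scenario: users and candidate IAB sites in a 1 km x 1 km area.
        USERS = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(200)]
        CANDIDATES = [(x, y) for x in range(100, 1000, 200) for y in range(100, 1000, 200)]
        K, COVER_RADIUS = 4, 250.0  # number of IAB nodes to place, assumed coverage radius

        def coverage(placement):
            # Fitness: fraction of users within COVER_RADIUS of some placed node.
            covered = sum(1 for u in USERS
                          if any(math.dist(u, CANDIDATES[i]) <= COVER_RADIUS for i in placement))
            return covered / len(USERS)

        def mutate(placement):
            # Swap one selected site for a random candidate, keeping K distinct sites.
            child = list(placement)
            child[random.randrange(K)] = random.randrange(len(CANDIDATES))
            return tuple(child) if len(set(child)) == K else placement

        population = [tuple(random.sample(range(len(CANDIDATES)), K)) for _ in range(30)]
        for _ in range(50):  # generations: elitist selection plus mutation
            population.sort(key=coverage, reverse=True)
            parents = population[:10]
            population = parents + [mutate(random.choice(parents)) for _ in range(20)]

        best = max(population, key=coverage)
        print("best coverage fraction:", coverage(best))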

    Heterogeneous Acceleration for 5G New Radio Channel Modelling Using FPGAs and GPUs

    The abstract is provided in the attached document.

    Stochastic Geometry Based Performance Study in 5G Wireless Networks

    As the complexity of modern cellular networks continuously increases along with the evolution of technologies and the rapid explosion of mobile data traffic, conventional large-scale system-level simulations and analytical tools become either too complicated or insufficiently tractable and accurate. Therefore, novel analytical models are actively pursued. In recent years, stochastic geometry models have been recognized as powerful tools to analyze the key performance metrics of cellular networks. In this dissertation, stochastic geometry based analytical models are developed to analyze the performance of some key technologies proposed for 5G mobile networks. In particular, Device-to-Device (D2D) communication, non-orthogonal multiple access (NOMA), and ultra-dense networks (UDNs) are investigated and analyzed using stochastic geometry models, more specifically, Poisson Point Process (PPP) models. D2D communication enables direct communication between mobile users in proximity to each other, bypassing the base station (BS). Embedding D2D communication into existing cellular networks brings many benefits, such as improving spectrum efficiency, decreasing energy consumption, and enabling novel location-based services. However, these benefits may not be fully exploited if the co-channel interference among D2D users and cellular users is not properly tackled. In this dissertation, various frequency reuse and power control schemes are proposed, aiming at mitigating the interference between D2D users and conventional cellular users. The performance gain of the proposed schemes is analyzed on a system modeled by a 2-tier PPP and validated by numerical simulations. NOMA is a promising radio access technology for 5G cellular networks. Unlike widely applied orthogonal multiple access (OMA) schemes such as orthogonal frequency division multiple access (OFDMA) and single carrier frequency division multiple access (SC-FDMA), NOMA allows multiple users to use the same frequency/time resource and offers many advantages, such as improving spectral efficiency, enhancing connectivity, providing higher cell-edge throughput, and reducing transmission latency. Although some initial performance analysis of NOMA has been done for the single-cell scenario, the system-level performance of NOMA in a multi-cell scenario has not been investigated in existing work. In this dissertation, analytical frameworks are developed to evaluate the performance of a wireless network with NOMA on both the downlink and the uplink. Distinguished from existing publications on NOMA, the framework developed in this dissertation is the first one that takes inter-cell interference into consideration. UDN is another key technology for 5G wireless networks to achieve high capacity and coverage. Due to the existence of line-of-sight (LoS)/non-line-of-sight (NLoS) propagation and bounded path loss behavior in UDNs, the tractability of the original PPP model diminishes when analyzing the performance of UDNs. Therefore, a dominant-BS-based approximation model is developed in this dissertation. By applying reasonable mathematical approximations, the tractability of the PPP model is preserved and closed-form solutions can be derived. The numerical results demonstrate that the developed analytical model is accurate over a wide range of network densities. The analysis conducted in this dissertation demonstrates that stochastic geometry models can serve as powerful tools to analyze the performance of 5G technologies in a dense wireless network deployment. The frameworks developed in this dissertation provide general yet powerful analytical tools that can be readily extended to facilitate other research in wireless networks.
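
    As a concrete illustration of the kind of computation a PPP-based model supports, the sketch below estimates the downlink SINR coverage probability of a typical user served by its nearest BS in a homogeneous PPP with Rayleigh fading, using Monte Carlo simulation. The density, path-loss exponent, noise level, and threshold are arbitrary illustrative values; the dissertation's own constructions (2-tier PPP for D2D, NOMA analysis, dominant-BS approximation) are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        LAM = 1e-5           # assumed BS density [BS per square meter]
        ALPHA = 4.0          # assumed path-loss exponent
        NOISE = 1e-13        # assumed noise power (arbitrary units, unit transmit power)
        T_DB = 0.0           # SINR threshold in dB
        HALF_WIDTH = 5000.0  # half-width of the square simulation window [m]

        def coverage_probability(trials=2000):
            # Fraction of trials in which the SINR at the origin (typical user)
            # exceeds the threshold, with service from the nearest BS.
            t = 10 ** (T_DB / 10)
            covered = 0
            for _ in range(trials):
                n = rng.poisson(LAM * (2 * HALF_WIDTH) ** 2)   # BS count in the window
                if n == 0:
                    continue
                xy = rng.uniform(-HALF_WIDTH, HALF_WIDTH, size=(n, 2))  # PPP given n
                d = np.hypot(xy[:, 0], xy[:, 1])
                h = rng.exponential(1.0, size=n)    # Rayleigh fading -> exponential power
                rx = h * d ** (-ALPHA)              # received powers at the origin
                serving = np.argmin(d)              # nearest-BS association
                sinr = rx[serving] / (rx.sum() - rx[serving] + NOISE)
                covered += sinr > t
            return covered / trials

        print("estimated coverage probability:", coverage_probability())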

    FPGA Acceleration of 3GPP Channel Model Emulator for 5G New Radio

    The channel model is by far the most computationally intensive part of link-level simulations of multiple-input and multiple-output (MIMO) fifth-generation new radio (5G NR) communication systems. The simulation effort further increases when using more realistic geometry-based channel models, such as the three-dimensional spatial channel model (3D-SCM). Channel emulation is used for functional and performance verification of such models in the network planning phase. These models use multiple finite impulse response (FIR) filters and have a very high degree of parallelism, which can be exploited for accelerated execution on Field Programmable Gate Array (FPGA) and Graphics Processing Unit (GPU) platforms. This paper proposes an efficient reconfigurable implementation of the 3rd Generation Partnership Project (3GPP) 3D-SCM on FPGAs using a design flow based on high-level synthesis (HLS). It studies the effect of various HLS optimization techniques on total latency and hardware resource utilization on Xilinx Alveo U280 and Intel Arria 10 GX 1150 high-performance FPGAs, in both cases using the vendor's commercial HLS tools. The channel model accuracy is preserved using double-precision floating-point arithmetic. This work analyzes in detail the effort required to target the FPGA platforms using HLS tools, both in terms of the common parallelization effort shared by both FPGAs and in terms of the platform-specific effort, which differs between Xilinx and Intel FPGAs. Compared to the baseline general-purpose central processing unit (CPU) implementation, the achieved speedups are 65X and 95X on the Xilinx UltraScale+ and Intel Arria FPGA platforms, respectively, when using a Double Data Rate (DDR) memory interface. The FPGA-based designs also achieved ~3X better performance than an NVIDIA GeForce GTX 1070 GPU of a similar technology node, while consuming ~4X less energy. The FPGA implementation speedup improves up to 173X over the CPU baseline when using the Xilinx UltraRAM (URAM) and High-Bandwidth Memory (HBM) resources, also achieving 6X lower latency and 12X lower energy consumption than the GPU implementation.
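
    The core operation being accelerated, filtering each transmit-receive antenna pair's signal through its own bank of FIR taps, can be summarized in a few lines. The following NumPy sketch models a static-tap MIMO FIR channel only; it is an illustrative simplification that ignores the 3D-SCM's time-varying cluster/ray structure and the double-precision and memory-interface considerations discussed in the paper, and all dimensions are assumed values.

        import numpy as np

        rng = np.random.default_rng(0)

        N_TX, N_RX, N_TAPS, N_SAMPLES = 2, 2, 16, 1024   # illustrative MIMO dimensions

        # Assumed static complex FIR taps h[rx, tx, tap]; the real 3D-SCM taps are
        # time-varying and generated from the cluster/ray geometry.
        h = (rng.standard_normal((N_RX, N_TX, N_TAPS))
             + 1j * rng.standard_normal((N_RX, N_TX, N_TAPS))) / np.sqrt(2 * N_TAPS)

        # Transmit signal per antenna.
        x = (rng.standard_normal((N_TX, N_SAMPLES))
             + 1j * rng.standard_normal((N_TX, N_SAMPLES)))

        def mimo_fir_channel(h, x):
            # y[rx] = sum over tx of (h[rx, tx] convolved with x[tx]).
            # Each (rx, tx) pair is an independent FIR filter, which is the
            # parallelism exploited on the FPGA and GPU platforms.
            n_rx, n_tx, _ = h.shape
            n = x.shape[1]
            y = np.zeros((n_rx, n), dtype=complex)
            for rx in range(n_rx):
                for tx in range(n_tx):
                    y[rx] += np.convolve(x[tx], h[rx, tx])[:n]   # truncate filter tail
            return y

        y = mimo_fir_channel(h, x)
        print(y.shape)  # (2, 1024)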