
    Separation Framework: An Enabler for Cooperative and D2D Communication for Future 5G Networks

    Soaring capacity and coverage demands dictate that future cellular networks will soon need to migrate towards ultra-dense deployments. However, network densification comes with a host of challenges, including compromised energy efficiency, complex interference management, cumbersome mobility management, burdensome signaling overheads, and higher backhaul costs. Interestingly, most of the problems that beleaguer network densification stem from one feature common to legacy networks, regardless of their degree of heterogeneity and cell density: tight coupling between the control and data planes. Consequently, in the wake of 5G, the control and data planes separation architecture (SARC) has recently been conceived as a promising paradigm with the potential to address most of the aforementioned challenges. In this article, we review the various proposals presented in the literature so far to enable SARC. More specifically, we analyze how, and to what degree, various SARC proposals address the four main challenges of network densification, namely energy efficiency, system-level capacity maximization, interference management, and mobility management. We then focus on two salient features of future cellular networks that have not yet been adopted in legacy networks at wide scale and thus remain a hallmark of 5G: coordinated multipoint (CoMP) and device-to-device (D2D) communications. After providing the necessary background on CoMP and D2D, we analyze how SARC can act as a major enabler for both in the context of 5G. This article thus serves as both a tutorial and an up-to-date survey on SARC, CoMP and D2D. Most importantly, it provides an extensive outlook on the challenges and opportunities that lie at the crossroads of these three mutually entangled emerging technologies.
    Comment: 28 pages, 11 figures, IEEE Communications Surveys & Tutorials 201
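    To make the control/data separation concrete, the toy sketch below (our illustration, not any specific proposal from the survey) anchors a UE's control plane at a macro cell while the data plane follows the nearest small cell, so mobility only re-associates the data plane. All names and the distance-based association rule are assumptions for illustration.

```python
# Toy model of control/data plane separation (SARC): the UE keeps its
# control-plane anchor at a wide-coverage macro cell while the data
# plane is served by whichever small cell is closest.  All names and
# the distance rule are illustrative assumptions.
import math

class Cell:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

    def distance(self, x, y):
        return math.hypot(self.x - x, self.y - y)

def associate(ue_xy, macro, small_cells):
    """Return (control_anchor, data_server) for a UE position."""
    x, y = ue_xy
    data = min(small_cells, key=lambda c: c.distance(x, y))
    return macro, data  # control always stays on the macro layer

macro = Cell("macro-1", 0, 0)
smalls = [Cell(f"small-{i}", 100 * i, 50) for i in range(1, 4)]

# As the UE moves, only the data-plane association changes; the control
# anchor is untouched, which is what cuts handover signaling overhead.
for pos in [(90, 40), (210, 60), (320, 55)]:
    ctrl, data = associate(pos, macro, smalls)
    print(pos, "control:", ctrl.name, "data:", data.name)
```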

    Energy-Efficient NOMA Enabled Heterogeneous Cloud Radio Access Networks

    Heterogeneous cloud radio access networks (H-CRANs) are envisioned to be a promising component of fifth generation (5G) wireless networks. H-CRANs enable users to enjoy diverse services with high energy efficiency, high spectral efficiency, and low-cost operation, achieved through cloud computing and virtualization techniques. However, H-CRANs face many technical challenges due to massive user connectivity, increasingly severe spectrum scarcity, and energy-constrained devices. If not properly tackled, these challenges may significantly degrade users' quality of service. Non-orthogonal multiple access (NOMA) schemes exploit non-orthogonal resources to serve multiple users simultaneously and are receiving increasing attention for their potential to improve spectral and energy efficiency in 5G networks. In this article, a framework for energy-efficient NOMA H-CRANs is presented. The enabling technologies for NOMA H-CRANs are surveyed, and the challenges and open issues in implementing them are discussed. The article also presents a performance evaluation of the energy efficiency of H-CRANs with NOMA.
    Comment: This work has been accepted by IEEE Network. Pages 18, Figure
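    The spectral-efficiency argument behind power-domain NOMA can be made concrete with a two-user downlink example. The sketch below assumes ideal successive interference cancellation (SIC) at the strong user; the channel gains, power split, and noise level are illustrative numbers, not values from the article.

```python
# Two-user power-domain NOMA downlink, assuming perfect SIC at the
# near (strong) user.  All numeric values are illustrative.
import math

def noma_rates(g_near, g_far, p_total, alpha_far, noise):
    """Achievable rates (bit/s/Hz) for a near and a far user.

    alpha_far is the fraction of p_total allocated to the far user.
    The far user decodes its signal treating the near user's as noise;
    the near user cancels the far user's signal via SIC first.
    """
    p_far, p_near = alpha_far * p_total, (1 - alpha_far) * p_total
    sinr_far = g_far * p_far / (g_far * p_near + noise)
    sinr_near = g_near * p_near / noise   # interference removed by SIC
    return math.log2(1 + sinr_near), math.log2(1 + sinr_far)

r_near, r_far = noma_rates(g_near=1.0, g_far=0.05,
                           p_total=1.0, alpha_far=0.8, noise=0.01)
print(f"near user: {r_near:.2f}, far user: {r_far:.2f} bit/s/Hz")
```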

    Resource management with adaptive capacity in C-RAN

    This work was supported in part by the Spanish Ministry of Science through the project RTI2018-099880-B-C32, with ERDF funds, and by the FPI-UPC grant provided by the UPC. It has been done under the COST CA15104 IRACON EU project.
    Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it has to account simultaneously for throughput, latency, power efficiency, and optimization tradeoffs. This work proposes the use of a modified and improved version of the realistic Vienna scenario defined in COST Action IC1004 to test C-RAN deployments at two different scales. First, a large-scale analysis with 628 macro-cells (Mcells) and 221 small-cells (Scells) is used to test different algorithms aimed at optimizing the network deployment by minimizing delays, balancing the load among the Base Band Unit (BBU) pools, or clustering the Remote Radio Heads (RRHs) efficiently to maximize the multiplexing gain. After planning, real-time resource allocation strategies with Quality of Service (QoS) constraints should be optimized as well. To do so, a realistic small-scale scenario for the metropolitan area is defined by modeling the individual time-variant traffic patterns of 7000 users (UEs) connected to different services. The distribution of resources among UEs and BBUs is optimized by algorithms, based on a realistic calculation of the UEs' Signal to Interference and Noise Ratios (SINRs), that account for the required computational capacity per cell, the QoS constraints, and the service priorities. However, the assumption of a fixed computational capacity at the BBU pools may result in underutilized or oversubscribed resources, thus affecting the overall QoS. As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC). For this reason, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average of unused resources by 96 %, but there is still QoS degradation when the RCC is higher than the predicted computational capacity (PCC). For this reason, two new strategies are proposed and tested: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), reducing the average of unsatisfied resources by 99.9 % and 98 % compared to DRM-AC, respectively.
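    The gap between fixed and adaptive provisioning is easy to see in miniature. The sketch below is a toy stand-in, not the paper's DRM-AC implementation: it provisions a BBU pool from a moving-average forecast in place of the SVM/TDNN/LSTM predictors and tallies unused and unsatisfied resources against a fixed-capacity baseline; the traffic trace is synthetic.

```python
# Toy comparison of fixed vs. prediction-driven BBU capacity.
# The moving average stands in for the paper's ML predictors.
def moving_average_predict(history, window=3):
    recent = history[-window:]
    return sum(recent) / len(recent)

def simulate(demand_trace, fixed_capacity):
    history = list(demand_trace[:3])
    unused_fixed = unused_adaptive = unsatisfied_adaptive = 0.0
    for demand in demand_trace[3:]:
        pcc = moving_average_predict(history)           # predicted capacity
        unused_fixed += max(0.0, fixed_capacity - demand)
        unused_adaptive += max(0.0, pcc - demand)
        unsatisfied_adaptive += max(0.0, demand - pcc)  # QoS degradation risk
        history.append(demand)
    return unused_fixed, unused_adaptive, unsatisfied_adaptive

trace = [40, 42, 45, 50, 48, 46, 60, 65, 63, 55]  # synthetic load trace
print(simulate(trace, fixed_capacity=70))
```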

    Echo State Networks for Proactive Caching in Cloud-Based Radio Access Networks with Mobile Users

    In this paper, the problem of proactive caching is studied for cloud radio access networks (CRANs). In the studied model, the baseband units (BBUs) can predict the content request distribution and mobility pattern of each user, and determine which content to cache at the remote radio heads and BBUs. This problem is formulated as an optimization problem that jointly incorporates backhaul and fronthaul loads and content caching. To solve it, an algorithm that combines the machine learning framework of echo state networks (ESNs) with sublinear algorithms is proposed. Using ESNs, the BBUs can predict each user's content request distribution and mobility pattern while having only limited information on the network's and users' states. In order to predict each user's periodic mobility pattern with minimal complexity, the memory capacity of the corresponding ESN is derived for a periodic input. This memory capacity is shown to be able to record the maximum amount of user information for the proposed ESN model. Then, a sublinear algorithm is proposed to determine which content to cache while using only a limited number of content request distribution samples. Simulation results using real data from Youku and the Beijing University of Posts and Telecommunications show that the proposed approach yields significant gains in terms of sum effective capacity, reaching up to 27.8% and 30.7% compared to random caching with clustering and random caching without clustering, respectively.
    Comment: Accepted in the IEEE Transactions on Wireless Communications
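    For readers unfamiliar with echo state networks, the sketch below shows the core mechanics the paper builds on: a fixed random reservoir whose readout is trained by ridge regression to predict a periodic signal one step ahead, standing in for a user's periodic mobility pattern. Reservoir size, spectral radius, and the regularizer are illustrative choices, not the paper's parameters.

```python
# Minimal echo state network: fixed random reservoir, ridge-regression
# readout, one-step-ahead prediction of a period-24 signal.
import numpy as np

rng = np.random.default_rng(0)
n_res, rho, ridge = 50, 0.9, 1e-6

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))       # rescale spectral radius

signal = np.sin(2 * np.pi * np.arange(300) / 24)  # periodic "mobility"
states = np.zeros((len(signal), n_res))
x = np.zeros(n_res)
for t, u in enumerate(signal):
    x = np.tanh(W_in[:, 0] * u + W @ x)         # reservoir state update
    states[t] = x

# Train only the readout: map state at t to the signal at t + 1.
X, y = states[:-1], signal[1:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("one-step prediction MSE:", np.mean((X @ W_out - y) ** 2))
```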

    Machine learning based heuristic BBU-RRH switching scheme for C-RAN in 5G

    The immense increase in bandwidth demand from services such as high-definition video streaming, online gaming, and virtual reality has made it increasingly challenging for operators to provide satisfactory service to end users while making a profit. Cloud Radio Access Network (C-RAN) is a new architecture that has been proposed to help mobile networks meet this increase in bandwidth demand. C-RAN consists of three parts: the Remote Radio Heads (RRHs), the fronthaul links, and the Baseband Processing Unit (BBU) pool. Many RRHs are associated with one BBU pool, and every RRH within the pool is logically connected to every BBU in the pool. Thus, a BBU-RRH switching algorithm needs to be developed, as it can enhance the performance of such an architecture while managing resources efficiently. This work focuses on developing a traffic-profile-prediction-based BBU-RRH switching algorithm using a real-life dataset. Related works in the literature have proposed algorithms for this purpose; however, some of the existing algorithms suffer from high switching complexity while others fall short in QoS provision. Therefore, this work develops a BBU-RRH algorithm that enhances QoS while reducing switching complexity, with the aid of machine learning techniques. The developed algorithm consists of three parts. The first is an efficient RRH clustering mechanism that determines which RRHs are associated with a specific BBU pool. The second utilizes recurrent neural networks (RNNs) to predict the daily traffic profile of the RRHs, so that a relatively accurate traffic profile prediction can be obtained to facilitate the switching algorithm. Finally, the third part comprises the BBU-RRH switching scheme, which works in conjunction with the predicted traffic profile to make an informed decision about the associations between RRHs and BBUs within the BBU pool. The performance of the proposed algorithm has been evaluated through simulations. The simulation results show that the proposed algorithm reduces the number of BBUs used and therefore saves energy. In addition, the algorithm reduces the occurrence of congestion and failure states, and thus improves the quality of service of the network. Finally, the developed switching algorithm also reduces the switching complexity compared with existing algorithms.
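    A minimal flavour of the switching step: given predicted per-RRH loads, pack RRHs onto as few BBUs as possible while keeping headroom against congestion. The first-fit-decreasing rule, capacity, and loads below are assumptions for illustration; the paper's clustering mechanism and RNN predictor are not reproduced.

```python
# Toy prediction-driven BBU-RRH switching: first-fit-decreasing
# bin packing of predicted RRH loads onto BBUs with QoS headroom.
def switch(predicted_load, bbu_capacity=100.0, headroom=0.1):
    """Map each RRH to a BBU index; returns ({rrh: bbu}, bbus_used)."""
    budget = bbu_capacity * (1 - headroom)      # keep slack vs. congestion
    bbus, assignment = [], {}
    for rrh in sorted(predicted_load, key=predicted_load.get, reverse=True):
        load = predicted_load[rrh]
        for i, used in enumerate(bbus):
            if used + load <= budget:           # fits on an active BBU
                bbus[i] += load
                assignment[rrh] = i
                break
        else:                                   # otherwise power on a new BBU
            bbus.append(load)
            assignment[rrh] = len(bbus) - 1
    return assignment, len(bbus)

loads = {"rrh-a": 55, "rrh-b": 30, "rrh-c": 25, "rrh-d": 40, "rrh-e": 10}
print(switch(loads))  # fewer active BBUs -> energy savings
```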

    Millimetre wave frequency band as a candidate spectrum for 5G network architecture: a survey

    In order to meet the huge growth in global mobile data traffic in 2020 and beyond, the development of the 5th Generation (5G) system is required, as the current 4G system is expected to fall short of the provision needed for such growth. 5G is anticipated to use higher carrier frequencies in the millimetre wave (mm-wave) band, within the 20 to 90 GHz range, due to the availability of a vast amount of unexploited bandwidth. Using these bands is a revolutionary step because of their different propagation characteristics, severe atmospheric attenuation, and hardware constraints. In this paper, we carry out a survey of 5G research contributions and proposed design architectures based on mm-wave communications. We present and discuss the use of mm-wave for indoor and outdoor mobile access, as a wireless backhaul solution, and as a key enabler for higher-order sectorisation. Wireless standards such as IEEE 802.11ad, which operate in the mm-wave band, are also presented; these standards have been designed for short-range, ultra-high-throughput systems in the 60 GHz band. Furthermore, this survey provides new insights into relevant open issues in adopting mm-wave for 5G networks, including the increased handoff rate and interference in Ultra-Dense Networks (UDNs), waveform considerations for higher spectral efficiency, and support for spatial multiplexing in mm-wave line-of-sight conditions. The survey also introduces a distributed base station architecture in mm-wave as an approach to address the increased handoff rate in UDNs, and to provide an alternative way to densify the network in a time- and cost-effective manner.
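    A short link-budget calculation illustrates why mm-wave propagation is so different: free-space path loss grows with carrier frequency, and near 60 GHz oxygen absorption adds roughly 15 dB/km on top. The frequencies, distance, and absorption figure below are standard textbook values used for illustration, not numbers from this survey.

```python
# Free-space path loss plus a flat oxygen-absorption term, comparing a
# sub-6 GHz carrier with two candidate mm-wave carriers.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c) in dB."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 200.0  # metres, a short outdoor access link
for f_ghz, oxy_db_per_km in [(2.6, 0.0), (28.0, 0.0), (60.0, 15.0)]:
    loss = fspl_db(d, f_ghz * 1e9) + oxy_db_per_km * d / 1000.0
    print(f"{f_ghz:5.1f} GHz: {loss:6.1f} dB")
```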

    Machine learning adaptive computational capacity prediction for dynamic resource management in C-RAN

    Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it has to account simultaneously for throughput, latency, power efficiency, and optimization tradeoffs. The assumption of a fixed computational capacity at the baseband unit (BBU) pools may result in underutilized or oversubscribed resources, thus affecting the overall Quality of Service (QoS). As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC). In this paper, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average of unused resources by 96 %, but there is still QoS degradation when the RCC is higher than the predicted computational capacity (PCC). To further improve, two new strategies are proposed and tested in a realistic scenario: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), reducing the average of unsatisfied resources by 98 % and 99.9 % compared to DRM-AC, respectively.
    This work was supported in part by the Spanish Ministry of Science through the project CRIN-5G (RTI2018-099880-B-C32) with ERDF (European Regional Development Fund) and in part by the UPC through the COST CA15104 IRACON EU project and the FPI-UPC-2018 Grant.
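    Of the two refinements, error shifting is the easier to sketch. The rule below, which biases the predicted capacity upward by a decaying memory of recent under-predictions, is our assumption of the general idea, not the paper's actual DRM-AC-ES correction.

```python
# Hedged sketch of an error-shifting rule: provision the predicted
# capacity plus a running estimate of recent under-prediction.
def error_shifted(predictions, demands, margin_decay=0.9):
    shift, provisioned = 0.0, []
    for pcc, rcc in zip(predictions, demands):
        provisioned.append(pcc + shift)        # shifted provisioning
        under = max(0.0, rcc - pcc)            # how much we missed by
        shift = margin_decay * shift + under   # remember recent misses
    return provisioned

pred = [50, 52, 48, 55, 60]   # predicted computational capacity (PCC)
real = [53, 51, 50, 62, 61]   # required computational capacity (RCC)
print(error_shifted(pred, real))
```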