
    Control-Data Separation with Decentralized Edge Control in Fog-Assisted Uplink Communications

    Fog-aided network architectures for 5G systems encompass wireless edge nodes, referred to as remote radio systems (RRSs), as well as remote cloud center (RCC) processors, which are connected to the RRSs via a fronthaul access network. RRSs and RCC are operated via Network Functions Virtualization (NFV), enabling a flexible split of network functionalities that adapts to network parameters such as fronthaul latency and capacity. This work focuses on uplink communications and investigates the cloud-edge allocation of two important network functions, namely the control functionality of rate selection and the data-plane function of decoding. Three functional splits are considered: (i) Distributed Radio Access Network (D-RAN), in which both functions are implemented in a decentralized way at the RRSs, (ii) Cloud RAN (C-RAN), in which instead both functions are carried out centrally at the RCC, and (iii) a new functional split, referred to as Fog RAN (F-RAN), with separate decentralized edge control and centralized cloud data processing. The model under study consists of a time-varying uplink channel in which the RCC has global but delayed channel state information (CSI) due to fronthaul latency, while the RRSs have local but more timely CSI. Using the adaptive sum-rate as the performance criterion, it is concluded that the F-RAN architecture can provide significant gains in the presence of user mobility. Comment: 28 pages, 11 figures. This manuscript was presented in part at arXiv:1606.0913
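
    The edge-versus-cloud control trade-off can be illustrated with a small Monte Carlo sketch: rate selection from timely local CSI always matches the current channel, while rate selection from delayed CSI suffers outages when the channel has moved on. Everything below (the Gauss-Markov fading model, the correlation, SNR, and delay values) is an illustrative assumption, not the paper's model.

```python
# Toy Monte Carlo contrasting rate selection with timely vs. delayed CSI,
# loosely inspired by the D-RAN / C-RAN / F-RAN comparison above.
# All parameters (correlation rho, SNR, delay d) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, snr, rho = 100_000, 10.0, 0.9   # slots, average SNR, channel correlation

# Gauss-Markov (Rayleigh) fading: h[t] = rho*h[t-1] + sqrt(1-rho^2)*w[t]
h = np.empty(T, dtype=complex)
h[0] = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
for t in range(1, T):
    w = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    h[t] = rho * h[t - 1] + np.sqrt(1 - rho**2) * w

capacity = np.log2(1 + snr * np.abs(h) ** 2)   # instantaneous capacity

# Timely local CSI (edge control): the selected rate matches the channel.
thr_edge = capacity.mean()

# Delayed CSI (cloud control): rate chosen from the channel d slots ago;
# the transmission is in outage when the stale rate exceeds capacity.
d = 5
rate_cloud = np.roll(capacity, d)
thr_cloud = np.where(rate_cloud[d:] <= capacity[d:], rate_cloud[d:], 0).mean()

print(f"edge-control throughput : {thr_edge:.2f} bit/s/Hz")
print(f"cloud-control throughput: {thr_cloud:.2f} bit/s/Hz")
```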

    Cloud-aided wireless systems: communications and radar applications

    This dissertation focuses on cloud-assisted radio technologies for communication, including mobile cloud computing and the Cloud Radio Access Network (C-RAN), and for radar systems. The dissertation first concentrates on cloud-aided communications. Mobile cloud computing, which allows mobile users to run computationally heavy applications on battery-limited devices such as cell phones, is considered initially. Mobile cloud computing enables the offloading of computation-intensive applications from a mobile device to a cloud processor via a wireless interface. The interplay between offloading decisions at the application layer and physical-layer parameters, which determine the energy and latency associated with the mobile-cloud communication, motivates the inter-layer optimization of fine-grained task offloading across both layers. This problem is modeled by means of application call graphs, and the joint optimization of application-layer and physical-layer parameters is carried out via a message passing algorithm that minimizes the total energy expenditure of the mobile user.

    The concept of cloud radio is then considered for two cellular architectures known as Distributed RAN (D-RAN) and C-RAN, whereby the baseband processing of base stations is carried out in a remote Baseband Processing Unit (BBU). These architectures can reduce the capital and operating expenses of dense deployments at the cost of increased communication latency. The effect of this latency, which is due to the fronthaul transmission between the Remote Radio Head (RRH) and the BBU, is then studied for the implementation of Hybrid Automatic Repeat Request (HARQ) protocols. Specifically, two novel solutions are proposed, both based on the control-data separation architecture. The trade-offs among resources such as the number of transmit and receive antennas, the transmission power, and the blocklength of the transmitted codeword, as well as the performance of the proposed solutions, are investigated through analysis and numerical results.

    The detection of a target in radar systems requires processing of the signal received by the sensors. As in cloud radio access networks for communications, this processing can be carried out at a remote Fusion Center (FC) that is connected to all sensors via limited-capacity fronthaul links. The last part of this dissertation is dedicated to exploring the application of cloud radio to radar systems. In particular, the problem of maximizing the detection performance at the FC jointly over the code vector used by the transmitting antenna and over the statistics of the noise introduced by quantization at the sensors for fronthaul transmission is investigated by adopting the information-theoretic criterion of the Bhattacharyya distance and information-theoretic bounds on the quantization rate.
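
    The fine-grained offloading idea can be sketched as a search over binary task placements on an application call graph; the brute-force search below stands in for the dissertation's message passing algorithm, and all energy and data-size figures are hypothetical placeholders.

```python
# Toy fine-grained task offloading over an application call graph.
# Exhaustive search stands in for the message passing algorithm; all
# energy and data numbers are hypothetical placeholders.
from itertools import product

# Tasks carry a local compute energy [J]; call-graph edges carry data
# [bits] that costs transmit energy when the two endpoint tasks run on
# different sides (mobile vs. cloud). Cloud execution costs the mobile
# no energy in this toy model.
local_energy = {"a": 5.0, "b": 8.0, "c": 3.0, "d": 6.0}
edges = {("a", "b"): 2e6, ("b", "c"): 1e6, ("b", "d"): 4e6}  # bits
tx_energy_per_bit = 2e-6   # J/bit over the wireless interface (hypothetical)

tasks = list(local_energy)
best = None
for placement in product((0, 1), repeat=len(tasks)):  # 0 = mobile, 1 = cloud
    on_cloud = dict(zip(tasks, placement))
    e = sum(local_energy[t] for t in tasks if not on_cloud[t])  # local compute
    e += sum(bits * tx_energy_per_bit                           # offload traffic
             for (u, v), bits in edges.items()
             if on_cloud[u] != on_cloud[v])
    if best is None or e < best[0]:
        best = (e, on_cloud)

print(f"min mobile energy: {best[0]:.2f} J with placement {best[1]}")
```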

    5G Backhaul Challenges and Emerging Research Directions: A Survey

    5G is the next cellular generation and is expected to quench the growing thirst for taxing data rates and to enable the Internet of Things. Focused research and standardization work have been addressing the corresponding challenges from the radio perspective while employing advanced features such as network densification, massive multiple-input multiple-output antennas, coordinated multi-point processing, inter-cell interference mitigation techniques, carrier aggregation, and new spectrum exploration. Nevertheless, a new bottleneck has emerged: the backhaul. The ultra-dense, heavy-traffic cells should be connected to the core network through the backhaul, often with extreme requirements in terms of capacity, latency, availability, energy, and cost efficiency. This pioneering survey explains the 5G backhaul paradigm, presents a critical analysis of legacy solutions, cutting-edge solutions, and new trends in backhauling, and proposes a novel consolidated 5G backhaul framework. A new joint radio access and backhaul perspective is proposed for the evaluation of backhaul technologies, which reinforces the belief that no single solution can solve the holistic 5G backhaul problem. This paper also reveals hidden advantages and shortcomings of backhaul solutions, which are not evident when backhaul technologies are inspected as an independent part of the 5G network. This survey is key in identifying essential catalysts that are believed to jointly pave the way to solving the beyond-2020 backhauling challenge. Lessons learned, unsolved challenges, and a new consolidated 5G backhaul vision are thus presented.

    A flexible network architecture for 5G systems

    In this paper, we define a flexible, adaptable, and programmable architecture for 5G mobile networks that takes into consideration the requirements, KPIs, and current gaps in the literature, based on three design fundamentals: (i) a split of the user and control planes, (ii) a service-based architecture within the core network (in line with recent industry and standards consensus), and (iii) fully flexible support of E2E slicing via per-domain and cross-domain optimisation, devising inter-slice control and management functions, and refining the behavioural models via experiment-driven optimisation. The proposed architecture model further facilitates the realisation of slices providing specific functionality, such as network resilience, security functions, and network elasticity. The proposed architecture consists of four layers identified as the network layer, controller layer, management and orchestration layer, and service layer. A key contribution of this paper is the definition of the role of each layer, the relationships between layers, and the identification of the required internal modules within each layer. In particular, the proposed architecture extends the reference architectures proposed by Standards Developing Organisations such as 3GPP and ETSI, building on them while addressing several gaps identified within the corresponding baseline models. We additionally present findings, design guidelines, and evaluation studies on a selected set of key concepts identified to enable flexible cloudification of the protocol stack, adaptive network slicing, and inter-slice control and management. This work has been performed in the framework of the H2020 project 5G-MoNArch co-funded by the E
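
    As a minimal sketch, the four layers and per-slice functionality might be modelled as plain data structures; every class and field name below is our own illustrative choice, not terminology from the paper.

```python
# Minimal data model of the four-layer architecture and E2E slices.
# All names are illustrative; the paper defines the layers, not this code.
from dataclasses import dataclass, field

@dataclass
class Slice:
    name: str                  # e.g. "eMBB", "URLLC"
    functions: list[str]       # slice-specific functionality
    kpis: dict[str, float]     # target KPIs, e.g. {"latency_ms": 1.0}

@dataclass
class Architecture:
    network_layer: list[str] = field(
        default_factory=lambda: ["RAN", "core network"])
    controller_layer: list[str] = field(
        default_factory=lambda: ["intra-slice control", "inter-slice control"])
    mano_layer: list[str] = field(
        default_factory=lambda: ["orchestration", "lifecycle management"])
    service_layer: list[str] = field(default_factory=list)
    slices: list[Slice] = field(default_factory=list)

arch = Architecture()
arch.slices.append(Slice("URLLC", ["network resilience"], {"latency_ms": 1.0}))
arch.slices.append(Slice("eMBB", ["network elasticity"],
                         {"throughput_mbps": 100.0}))
print([s.name for s in arch.slices])
```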

    Nuberu: Reliable RAN Virtualization in Shared Platforms

    RAN virtualization will become a key technology for the last mile of next-generation mobile networks, driven by initiatives such as the O-RAN Alliance. However, due to the computing fluctuations inherent to wireless dynamics and resource contention in shared computing infrastructure, the price to migrate from dedicated to shared platforms may be too high. Indeed, we show in this paper that the baseline architecture of a base station's distributed unit (DU) collapses upon moments of deficit in computing capacity. Recent solutions to accelerate some signal processing tasks certainly help but do not tackle the core problem: a DU pipeline that requires predictable computing to provide carrier-grade reliability. We present Nuberu, a novel pipeline architecture for 4G/5G DUs specifically engineered for non-deterministic computing platforms. Our design has one key objective to attain reliability: to guarantee a minimum set of signals that preserve synchronization between the DU and its users during computing capacity shortages and, provided this, to maximize network throughput. To this end, we use techniques such as tight deadline control, jitter-absorbing buffers, predictive HARQ, and congestion control. Using an experimental prototype, we show that Nuberu attains 95% of the theoretical spectrum efficiency in hostile environments, where state-of-the-art approaches lose connectivity, and at least 80% resource savings. We would like to thank our shepherd and reviewers for their valuable comments and feedback. This work has been supported by the European Commission through Grant No. 101017109 (DAEMON project) and Grant No. 101015956 (Hexa-X project), and the CERCA Programme/Generalitat de Catalunya.
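
    The deadline-control idea can be sketched as a per-TTI budget check: mandatory synchronization signals are always produced first, full decoding runs only if its estimated cost fits the remaining budget, and otherwise a predicted HARQ feedback is issued. All timings and function names below are illustrative stand-ins, not Nuberu's actual pipeline.

```python
# Toy sketch of a deadline-controlled DU pipeline: per TTI, the minimum
# signal set that keeps users synchronized is produced first; full data
# decoding only runs if its worst-case time fits the remaining budget,
# otherwise a predicted (early) HARQ feedback is issued instead.
# All timings and function names are illustrative, not Nuberu's API.
import time

TTI_BUDGET_S = 0.001            # 1 ms subframe deadline (illustrative)

def generate_sync_signals():    # stand-in for the mandatory signal set
    pass

def decode_transport_block() -> str:
    return "ACK/NACK from full decoding"

def predict_harq_feedback() -> str:
    return "predicted NACK (decode deferred)"

def process_tti(decode_estimate_s: float) -> str:
    start = time.perf_counter()
    generate_sync_signals()     # always within budget: preserves sync
    remaining = TTI_BUDGET_S - (time.perf_counter() - start)
    if decode_estimate_s <= remaining:
        return decode_transport_block()   # normal path
    return predict_harq_feedback()        # shed load, keep users attached

print(process_tti(decode_estimate_s=0.0002))  # fits budget -> decode
print(process_tti(decode_estimate_s=0.0050))  # capacity shortage -> predict
```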

    Leveraging Quantum Annealing for Large MIMO Processing in Centralized Radio Access Networks

    User demand for increasing amounts of wireless capacity continues to outpace supply, and to meet this demand, significant progress has been made in new MIMO wireless physical-layer techniques. Higher-performance systems now remain impractical largely because their algorithms are extremely computationally demanding: optimal performance often requires an amount of computation that increases exponentially with both the number of users and the data rate of each user. The base station's computational capacity is thus becoming one of the key limiting factors on wireless capacity. QuAMax is the first large-MIMO centralized radio access network design to address this issue by leveraging quantum annealing on the problem. We have implemented QuAMax on the 2,031-qubit D-Wave 2000Q quantum annealer, the state of the art in the field. Our experimental results evaluate that implementation on real and synthetic MIMO channel traces, showing that 10 μs of compute time on the 2000Q can enable 48-user, 48-AP-antenna BPSK communication at 20 dB SNR with a bit error rate of 10⁻⁶ and a 1,500-byte frame error rate of 10⁻⁴. Comment: https://dl.acm.org/doi/10.1145/3341302.334207
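
    The reduction underlying this design, maximum-likelihood BPSK MIMO detection rewritten as an Ising minimisation over spins, can be sketched in a few lines; exhaustive search stands in for the quantum annealer here, and the problem sizes are toy choices.

```python
# Maximum-likelihood BPSK MIMO detection cast as an Ising minimisation,
# solved here by brute force as a stand-in for the quantum annealer.
# Toy sizes; QuAMax maps the same kind of objective onto D-Wave hardware.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n_users, n_ants, snr = 4, 4, 100.0              # tiny toy instance

H = rng.standard_normal((n_ants, n_users))      # real-valued channel
x_true = rng.choice([-1.0, 1.0], size=n_users)  # BPSK symbols
y = H @ x_true + rng.standard_normal(n_ants) / np.sqrt(snr)

# ||y - Hx||^2 = x^T J x + h^T x + const with J = H^T H, h = -2 H^T y,
# i.e. an Ising objective over spins x_i in {-1, +1}
# (the diagonal of J only adds a constant since x_i^2 = 1).
J = H.T @ H
h = -2.0 * H.T @ y

def ising_energy(x):
    return x @ J @ x + h @ x

best = min((np.array(x) for x in product((-1.0, 1.0), repeat=n_users)),
           key=ising_energy)
print("true :", x_true)
print("ML   :", best)   # coincides with x_true at this SNR w.h.p.
```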

    Enabling Technologies for Ultra-Reliable and Low Latency Communications: From PHY and MAC Layer Perspectives

    Future 5th-generation networks are expected to enable three key services: enhanced mobile broadband, massive machine-type communications, and ultra-reliable and low-latency communications (URLLC). As per the 3rd Generation Partnership Project (3GPP) URLLC requirements, it is expected that the reliability of one transmission of a 32-byte packet will be at least 99.999% and the latency will be at most 1 ms. This unprecedented level of reliability and latency will enable various new applications, such as smart grids, industrial automation, and intelligent transport systems. In this survey, we present potential future URLLC applications and summarize the corresponding reliability and latency requirements. We provide a comprehensive discussion of physical (PHY) and medium access control (MAC) layer techniques that enable URLLC, addressing both licensed and unlicensed bands, and evaluate the relevant PHY and MAC techniques for their ability to improve reliability and reduce latency. We identify that enabling Long-Term Evolution (LTE) to coexist in the unlicensed spectrum is also a potential enabler of URLLC in the unlicensed band, and provide numerical evaluations. Lastly, we discuss potential future research directions and challenges in achieving the URLLC requirements.
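
    The quoted 32-byte/99.999% target translates into a concrete finite-blocklength calculation. The sketch below uses the standard AWGN normal approximation (Polyanskiy, Poor, and Verdú), which the abstract itself does not mention; the SNR value is purely an illustrative assumption.

```python
# Worked example: how many channel uses does a 32-byte (256-bit) packet
# need at 99.999% reliability? Uses the AWGN normal approximation
# (Polyanskiy-Poor-Verdu); the SNR value is an illustrative assumption.
import math
from statistics import NormalDist

def max_bits(n: int, snr: float, eps: float) -> float:
    """Approximate max information bits in n channel uses at error prob eps."""
    C = math.log2(1 + snr)                                   # capacity, bit/use
    V = (snr * (snr + 2) / (snr + 1) ** 2) * math.log2(math.e) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)                    # Q^{-1}(eps)
    return n * C - math.sqrt(n * V) * q_inv + 0.5 * math.log2(n)

k, eps, snr = 256, 1e-5, 10.0   # 32-byte packet, URLLC target, 10 dB SNR
n = next(n for n in range(1, 10_000) if max_bits(n, snr, eps) >= k)
print(f"~{n} channel uses needed for {k} bits at eps={eps}, SNR={snr}")
```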