89 research outputs found
How to Solve the Fronthaul Traffic Congestion Problem in H-CRAN?
The design of efficient wireless fronthaul connections for future heterogeneous networks incorporating emerging paradigms such as the heterogeneous cloud radio access network (H-CRAN) has become a challenging task that requires the most effective utilization of fronthaul network resources. In this paper, we propose and analyze possible solutions to alleviate fronthaul traffic congestion in the Coordinated Multi-Point (CoMP) scenario for 5G cellular traffic, which is expected to reach the zettabyte scale by 2017. In particular, we propose to use distributed compression to reduce the fronthaul traffic of H-CRAN. Unlike the conventional approach, in which each coordinating point quantizes and forwards its own observation to the processing centre, these observations are compressed before forwarding. At the processing centre, the decompression of the observations and the decoding of the user messages are conducted jointly. Our results reveal that, in both dense and ultra-dense urban small-cell deployment scenarios, distributed compression can reduce the required fronthaul rate by more than 50% through this joint operation.
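The intuition behind the rate saving can be sketched with textbook Gaussian rate-distortion formulas. This is not the paper's exact scheme; it is a toy comparison, assuming jointly Gaussian observations with correlation coefficient `rho` across coordinating points and a Wyner-Ziv-style bound for the distributed case.

```python
import math

def independent_rate(sigma2, D):
    """Bits/sample to quantize a Gaussian observation of variance
    sigma2 down to distortion D, ignoring inter-point correlation."""
    return 0.5 * math.log2(sigma2 / D)

def distributed_rate(sigma2, D, rho):
    """Wyner-Ziv-style rate when the processing centre decompresses
    jointly, exploiting correlation rho with observations it already has."""
    return 0.5 * math.log2(sigma2 * (1 - rho ** 2) / D)

sigma2, D = 1.0, 0.05   # observation variance and target distortion (assumed)
rho = 0.9               # correlation across coordinating points (assumed)

r_ind = independent_rate(sigma2, D)
r_dist = distributed_rate(sigma2, D, rho)
saving = 1 - r_dist / r_ind
print(f"independent: {r_ind:.2f} b/sample, "
      f"distributed: {r_dist:.2f} b/sample, saving: {saving:.0%}")
```

With strongly correlated observations (rho = 0.9), the sketch already shows the fronthaul rate falling by more than half, consistent with the abstract's headline figure.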
Implementation of Deep-Learning-Based CSI Feedback Reporting on 5G NR-Compliant Link-Level Simulator
Advances in machine learning have widened the range of its applications in many fields. In particular, deep learning has attracted much interest for its ability to provide solutions where the derivation of a rigorous mathematical model of the problem is troublesome. Our interest was drawn to the application of deep learning to channel state information feedback reporting, a crucial problem in frequency division duplexing (FDD) 5G networks, where knowledge of the channel characteristics is fundamental to exploiting the full potential of multiple-input multiple-output (MIMO) systems. We designed a framework adopting a 5G New Radio convolutional neural network, called NR-CsiNet, with the aim of compressing the channel matrix experienced by the user at the receiver side and then reconstructing it at the transmitter side. In contrast to similar solutions, our framework is based on a fully 5G New Radio-compliant simulator, thus implementing a channel generator based on the latest 3GPP 3-D channel model. Moreover, realistic 5G scenarios are considered by including multiple receive antenna schemes and noisy downlink channel estimation. Simulations were carried out to analyze and compare the performance with current feedback reporting schemes, showing promising results for this approach in terms of the block error rate and throughput of the 5G data channel.
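The payoff of autoencoder-based CSI feedback is easiest to see as overhead arithmetic. The dimensions and quantization depth below are illustrative assumptions, not NR-CsiNet's actual configuration: a complex channel matrix is flattened into real values at the UE encoder, squeezed into a short latent codeword, and only that codeword is fed back.

```python
# Feedback-overhead arithmetic for autoencoder-based CSI compression.
# All dimensions below are illustrative assumptions.
n_tx, n_sub = 32, 256          # transmit antennas, subcarriers (assumed)
codeword_len = 512             # real-valued latent vector length (assumed)

raw_values = 2 * n_tx * n_sub  # complex channel matrix -> real + imag parts
ratio = raw_values / codeword_len

bits_per_value = 6             # quantization bits per latent value (assumed)
feedback_bits = codeword_len * bits_per_value

print(f"raw CSI values: {raw_values}, compression ratio: {ratio:.0f}x")
print(f"uplink feedback payload: {feedback_bits} bits per report")
```

Under these assumptions, 16,384 raw values shrink to a 512-value codeword, a 32x reduction in what the uplink control channel must carry per report.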
System Modelling and Design Aspects of Next Generation High Throughput Satellites
Future generation wireless networks are targeting the convergence of fixed,
mobile and broadcasting systems with the integration of satellite and
terrestrial systems towards utilizing their mutual benefits. Satellite
Communications (SatCom) is envisioned to play a vital role in providing
integrated services seamlessly over heterogeneous networks. Compared to
terrestrial systems, the design of SatCom systems requires a different approach
due to differences in terms of wave propagation, operating frequency, antenna
structures, interfering sources, limitations of onboard processing, power
limitations and transceiver impairments. In this regard, this letter aims to
identify and discuss important modeling and design aspects of the next
generation High Throughput Satellite (HTS) systems. First, communication models
of HTSs including the ones for multibeam and multicarrier satellites, multiple
antenna techniques, and for SatCom payloads and antennas are highlighted and
discussed. Subsequently, various design aspects of SatCom transceivers
including impairments related to the transceiver, payload and channel, and
traffic-based coverage adaptation are presented. Finally, some open topics for
the design of next generation HTSs are identified and discussed.
Comment: submitted to IEEE Journal
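The wave-propagation and power differences the letter highlights show up directly in a link budget. Below is a minimal sketch for a GEO HTS downlink; all numeric values (EIRP, G/T, carrier frequency, slant range) are illustrative assumptions rather than figures from the letter.

```python
import math

# Toy downlink budget for a GEO HTS beam; all numbers are illustrative
# assumptions, not values from the letter.
c = 299_792_458.0              # speed of light, m/s
f = 20e9                       # Ka-band downlink carrier, Hz (assumed)
d = 35_786e3                   # GEO range at the sub-satellite point, m

# Free-space path loss: 20*log10(4*pi*d*f / c)
fspl_db = 20 * math.log10(4 * math.pi * d * f / c)

eirp_dbw = 60.0                # satellite EIRP toward the beam (assumed)
gt_dbk = 15.0                  # terminal G/T, dB/K (assumed)
k_dbw = -228.6                 # Boltzmann's constant, dBW/(K*Hz)

cn0_dbhz = eirp_dbw - fspl_db + gt_dbk - k_dbw   # carrier-to-noise density
print(f"FSPL: {fspl_db:.1f} dB, C/N0: {cn0_dbhz:.1f} dB-Hz")
```

The roughly 210 dB of free-space loss at Ka-band over the GEO path is the basic reason SatCom transceiver and payload design cannot simply reuse terrestrial assumptions.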
A Very Brief Introduction to Machine Learning With Applications to Communication Systems
Given the unprecedented availability of data and computing resources, there
is widespread renewed interest in applying data-driven machine learning methods
to problems for which the development of conventional engineering solutions is
challenged by modelling or algorithmic deficiencies. This tutorial-style paper
starts by addressing the questions of why and when such techniques can be
useful. It then provides a high-level introduction to the basics of supervised
and unsupervised learning. For both supervised and unsupervised learning,
exemplifying applications to communication networks are discussed by
distinguishing tasks carried out at the edge and at the cloud segments of the
network at different layers of the protocol stack.
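The supervised-learning setting the tutorial introduces can be made concrete with its simplest instance: fitting a line to labeled examples by least squares. The data set below is synthetic and purely illustrative.

```python
# Minimal supervised-learning sketch: learn y = a*x + b from labeled
# examples by ordinary least squares. Data are synthetic, roughly
# following y = 2x + 1 with small perturbations.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Closed-form least-squares slope and intercept
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

print(f"learned model: y = {a:.2f}*x + {b:.2f}")
```

The "training" here is a closed-form solve; the deep-learning methods surveyed in the paper replace this with iterative optimization over far richer model classes, but the supervised framing (examples in, predictor out) is the same.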
Linear system design for compression and fusion
Fall 2013. Includes bibliographical references.
This is a study of measurement compression and fusion design. The idea common to both problems is that measurements can often be linearly compressed into lower-dimensional spaces without introducing too much excess mean-squared error or excess volume in a concentration ellipse. The question is how to design the compression to minimize the excesses at any given dimension. The first part of this work is motivated by sensing and wireless communication, where data compression or dimension reduction may be used to reduce the required communication bandwidth. The high-dimensional measurements are converted into low-dimensional representations through linear compression. Our aim is to compress a noisy measurement, allowing for the fact that the compressed measurement will be transmitted over a noisy channel. We review optimal compression with no transmission noise and show its connection with canonical coordinates. When the compressed measurement is transmitted with noise, we give the closed-form expression for the optimal compression matrix with respect to the trace and determinant of the error covariance matrix. We show that the solutions are canonical coordinate solutions, scaled by coefficients which account for canonical correlations and transmission noise variance, followed by a coordinate transformation into the sub-dominant invariant subspace of the channel noise. The second part of this work is a problem of integrating multiple sources of measurements. We consider two multiple-input-multiple-output channels, a primary channel and a secondary channel, with dependent input signals. The primary channel carries the signal of interest, and the secondary channel carries a signal that shares a joint distribution with the primary signal. The problem of particular interest is designing the secondary channel, with a fixed primary channel.
We formulate the problem as an optimization problem, in which the optimal secondary channel maximizes an information-based criterion. An analytic solution is provided in a special case. Two fast-to-compute algorithms, one extrinsic and the other intrinsic, are proposed to approximate the optimal solutions in general cases. In particular, the intrinsic algorithm exploits the geometry of the unit sphere, a manifold embedded in Euclidean space. The performance of the proposed algorithms is examined through a simulation study. A discussion of the choice of dimension for the secondary channel is given, leading to rules for dimension reduction.
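The core idea of linear compression for mean-squared error can be sketched in the simplest noiseless case: project the measurement onto the dominant eigenvector of its covariance, and the excess MSE of the lower-dimensional representation is exactly the discarded eigenvalue. This is a toy 2-D instance, not the thesis's general noisy-channel solution, and the covariance matrix is an illustrative assumption.

```python
import math

# Linear compression sketch: compress a 2-D measurement to 1-D by
# projecting onto the dominant eigenvector of its covariance C.
# The excess mean-squared error is the discarded (smaller) eigenvalue.
C = [[4.0, 1.0], [1.0, 2.0]]           # measurement covariance (assumed)

tr = C[0][0] + C[1][1]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam_max = (tr + disc) / 2              # variance retained by the 1-D code
lam_min = (tr - disc) / 2              # excess MSE of the compression

# Dominant eigenvector from (C[0][0] - lam_max)*x + C[0][1]*y = 0
v = [C[0][1], lam_max - C[0][0]]
norm = math.hypot(*v)
w = [vi / norm for vi in v]            # 1x2 compression matrix (unit norm)

print(f"retained variance {lam_max:.3f} of {tr:.1f}; "
      f"excess MSE {lam_min:.3f}")
```

For a transmission channel with noise, the thesis shows this picture is modified: the canonical-coordinate directions get rescaled by the canonical correlations and the noise variance, rather than being used unweighted as here.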
Enabling AI in Future Wireless Networks: A Data Life Cycle Perspective
Recent years have seen rapid deployment of mobile computing and Internet of
Things (IoT) networks, which can be mostly attributed to the increasing
communication and sensing capabilities of wireless systems. Big data analysis,
pervasive computing, and eventually artificial intelligence (AI) are envisaged
to be deployed on top of the IoT and create a new world featured by data-driven
AI. In this context, a novel paradigm of merging AI and wireless
communications, called Wireless AI that pushes AI frontiers to the network
edge, is widely regarded as a key enabler for future intelligent network
evolution. To this end, we present a comprehensive survey of the latest studies
in wireless AI from the data-driven perspective. Specifically, we first propose
a novel Wireless AI architecture that covers five key data-driven AI themes in
wireless networks, including Sensing AI, Network Device AI, Access AI, User
Device AI and Data-provenance AI. Then, for each data-driven AI theme, we
present an overview on the use of AI approaches to solve the emerging
data-related problems and show how AI can empower wireless network
functionalities. Particularly, compared to the other related survey papers, we
provide an in-depth discussion on the Wireless AI applications in various
data-driven domains wherein AI proves extremely useful for wireless network
design and optimization. Finally, research challenges and future visions are
also discussed to spur further research in this promising area.
Comment: Accepted at the IEEE Communications Surveys & Tutorials, 42 pages