FusionNet: Enhanced Beam Prediction for mmWave Communications Using Sub-6GHz Channel and A Few Pilots
In this paper, we propose a new downlink beamforming strategy for mmWave
communications that uses uplink sub-6GHz channel information and very few
mmWave pilots. Specifically, we design a novel dual-input neural network,
called FusionNet, to extract and exploit features from the sub-6GHz channel
and a few mmWave pilots in order to accurately predict the mmWave beam. To
further improve the beamforming performance and avoid over-fitting, we develop
two data pre-processing approaches that exploit channel sparsity and data
augmentation. Simulation results demonstrate the superior performance and
robustness of the proposed strategy compared to an existing one that relies
purely on sub-6GHz information, especially in the low signal-to-noise ratio
(SNR) regime.
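As a rough illustration of the dual-input idea, a minimal PyTorch-style sketch follows; the layer widths, input dimensions, codebook size, and the class name DualInputBeamPredictor are illustrative assumptions, not the paper's exact FusionNet architecture or pre-processing.

```python
import torch
import torch.nn as nn

class DualInputBeamPredictor(nn.Module):
    """Toy dual-input model: fuses sub-6GHz channel features with a few
    mmWave pilot measurements and classifies the best beam in a codebook."""
    def __init__(self, sub6_dim=64, pilot_dim=8, codebook_size=64):
        super().__init__()
        self.sub6_branch = nn.Sequential(
            nn.Linear(sub6_dim, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU())
        self.pilot_branch = nn.Sequential(nn.Linear(pilot_dim, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(64 + 32, 128), nn.ReLU(), nn.Linear(128, codebook_size))

    def forward(self, sub6_feat, pilot_feat):
        # Concatenate the two feature branches, then score every beam.
        z = torch.cat([self.sub6_branch(sub6_feat),
                       self.pilot_branch(pilot_feat)], dim=-1)
        return self.head(z)  # logits over candidate mmWave beams

# Example: a batch of 4 users with real-valued channel/pilot features.
model = DualInputBeamPredictor()
logits = model(torch.randn(4, 64), torch.randn(4, 8))
best_beam = logits.argmax(dim=-1)
```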
ViWi Vision-Aided mmWave Beam Tracking: Dataset, Task, and Baseline Solutions
Vision-aided wireless communication is motivated by the recent advances in
deep learning and computer vision as well as the increasing dependence on
line-of-sight links in millimeter wave (mmWave) and terahertz systems. By
leveraging vision, this new research direction enables an interesting set of
new capabilities such as vision-aided mmWave beam and blockage prediction,
proactive hand-off, and resource allocation among others. These capabilities
have the potential of reliably supporting highly-mobile applications such as
vehicular/drone communications and wireless virtual/augmented reality in mmWave
and terahertz systems. Investigating these interesting applications, however,
requires the development of specialized datasets and machine learning tasks.
Based on the Vision-Wireless (ViWi) dataset generation framework [1], this
paper develops an advanced and realistic scenario/dataset that features
multiple base stations, mobile users, and rich dynamics. Enabled by this
dataset, the paper defines the vision-wireless mmWave beam tracking task
(ViWi-BT) and proposes a baseline solution that can provide an initial
benchmark for future ViWi-BT algorithms.
Comment: The ViWi-BT Challenge at ICC 2020 -
https://www.viwi-dataset.net/viwi-bt.htm
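For intuition about what a beam-tracking baseline of this kind can look like, here is a generic recurrent sketch that predicts the next beam index from a short history of observed beams; the GRU architecture, codebook size, and history length are assumptions for illustration and are not the ViWi-BT baseline itself.

```python
import torch
import torch.nn as nn

class BeamTracker(nn.Module):
    """Toy recurrent baseline: predicts the next beam index from a short
    history of previously observed beam indices."""
    def __init__(self, codebook_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(codebook_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, codebook_size)

    def forward(self, beam_history):            # (batch, seq_len) of beam ids
        h, _ = self.gru(self.embed(beam_history))
        return self.out(h[:, -1])                # logits for the next beam

model = BeamTracker()
history = torch.randint(0, 128, (4, 8))          # 8 observed beams per sample
next_beam = model(history).argmax(dim=-1)
```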
ViWi: A Deep Learning Dataset Framework for Vision-Aided Wireless Communications
The growing role that artificial intelligence and specifically machine
learning is playing in shaping the future of wireless communications has opened
up many new and intriguing research directions. This paper motivates the
research in the novel direction of vision-aided wireless communications, which
aims at leveraging visual sensory information in tackling wireless
communication problems. Like any new research direction
driven by machine learning, obtaining a development dataset poses the first and
most important challenge to vision-aided wireless communications. This paper
addresses this issue by introducing the Vision-Wireless (ViWi) dataset
framework. It is developed to be a parametric, systematic, and scalable data
generation framework. It utilizes advanced 3D-modeling and ray-tracing
software to generate high-fidelity synthetic wireless and vision data samples
for the same scenes. The result is a framework that not only offers a way to
generate training and testing datasets but also helps provide a common ground
on which the quality of different machine learning-powered solutions can be
assessed.
Comment: IEEE VTC 2020. The ViWi datasets and applications are available at
https://www.viwi-dataset.ne
Vision-Aided Dynamic Blockage Prediction for 6G Wireless Communication Networks
Unlocking the full potential of millimeter-wave and sub-terahertz wireless
communication networks hinges on realizing unprecedented low-latency and
high-reliability requirements. The challenge in meeting those requirements lies
partly in the sensitivity of signals in the millimeter-wave and sub-terahertz
frequency ranges to blockages. One promising way to tackle that challenge is to
help a wireless network develop a sense of its surroundings using machine
learning. This paper attempts to do that by utilizing deep learning and
computer vision. It proposes a novel solution that proactively predicts
dynamic link blockages. More specifically, it develops a deep neural
network architecture that learns from observed sequences of RGB images and
beamforming vectors how to predict possible future link blockages. The proposed
architecture is evaluated on a publicly available dataset that represents a
synthetic dynamic communication scenario with multiple moving users and
blockages. It achieves a link-blockage prediction accuracy in the neighborhood
of 86%, a performance that is unlikely to be matched without utilizing visual
data.
Comment: The dataset and code files will be available soon on the ViWi
website: https://www.viwi-dataset.net
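A minimal sketch of a vision-plus-beam sequence model of this kind is shown below; the CNN and GRU layer sizes, the frame resolution, and the single-probability output are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BlockagePredictor(nn.Module):
    """Toy sketch: encode a sequence of RGB frames with a small CNN, fuse
    the per-frame features with the observed beam indices, and predict
    whether the link will be blocked in a future window."""
    def __init__(self, codebook_size=64, hidden_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())    # 32 features per frame
        self.beam_embed = nn.Embedding(codebook_size, 16)
        self.gru = nn.GRU(32 + 16, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)           # blockage probability

    def forward(self, frames, beams):
        # frames: (batch, seq, 3, H, W); beams: (batch, seq) of beam indices
        b, t = frames.shape[:2]
        img_feat = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        x = torch.cat([img_feat, self.beam_embed(beams)], dim=-1)
        h, _ = self.gru(x)
        return torch.sigmoid(self.head(h[:, -1]))

model = BlockagePredictor()
prob = model(torch.randn(2, 8, 3, 64, 64), torch.randint(0, 64, (2, 8)))
```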
Deep Learning Assisted mmWave Beam Prediction with Prior Low-frequency Information
The huge overhead of beam training poses a significant challenge to mmWave
communications. To address this issue, beam tracking has been widely
investigated, but existing methods struggle to handle severe multipath
interference and non-stationary scenarios. Inspired by the spatial similarity
between low-frequency and mmWave channels in non-standalone architectures, this
paper proposes to utilize prior low-frequency information to predict the
optimal mmWave beam, where deep learning is adopted to enhance the prediction
accuracy. Specifically, periodically estimated low-frequency channel state
information (CSI) is applied to track the movement of the user equipment, and a
timing offset indicator is proposed to indicate the instant of mmWave beam
training relative to the low-frequency CSI estimation. Meanwhile, dedicated
models based on long short-term memory (LSTM) networks are designed to
implement the prediction. Simulation results show that the proposed scheme
achieves higher beamforming gain than conventional methods while requiring
little mmWave beam training overhead.
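A minimal PyTorch-style sketch of an LSTM model that consumes a window of low-frequency CSI features together with a timing-offset indicator follows; the feature dimensions, the normalized scalar offset, and the single-model structure are illustrative assumptions rather than the paper's dedicated models.

```python
import torch
import torch.nn as nn

class LowFreqAidedBeamPredictor(nn.Module):
    """Toy sketch: an LSTM tracks a window of periodically estimated
    low-frequency CSI features; a timing-offset indicator (how far the
    mmWave beam-training instant lies from the last CSI estimate) is fused
    before scoring the mmWave beam codebook."""
    def __init__(self, csi_dim=32, hidden_dim=64, codebook_size=64):
        super().__init__()
        self.lstm = nn.LSTM(csi_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim + 1, codebook_size)

    def forward(self, csi_seq, timing_offset):
        # csi_seq: (batch, seq, csi_dim); timing_offset: (batch, 1), normalized
        h, _ = self.lstm(csi_seq)
        return self.head(torch.cat([h[:, -1], timing_offset], dim=-1))

model = LowFreqAidedBeamPredictor()
logits = model(torch.randn(4, 10, 32), torch.rand(4, 1))
predicted_beam = logits.argmax(dim=-1)
```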
Federated mmWave Beam Selection Utilizing LIDAR Data
Efficient link configuration in millimeter wave (mmWave) communication systems is a crucial yet challenging task due to the overhead imposed by beam selection. For vehicle-to-infrastructure (V2I) networks, side information from LIDAR sensors mounted on the vehicles has been leveraged to reduce the beam search overhead. In this letter, we propose a federated LIDAR-aided beam selection method for V2I mmWave communication systems. In the proposed scheme, connected vehicles collaborate to train a shared neural network (NN) on their locally available LIDAR data during normal operation of the system. We also propose a reduced-complexity convolutional NN (CNN) classifier architecture and LIDAR preprocessing scheme, which significantly outperform previous works in terms of both performance and complexity.
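To illustrate the federated training loop in a scheme of this kind, here is a bare-bones FedAvg sketch in PyTorch; the uniform (unweighted) averaging, the SGD local optimizer, the flattened LIDAR features, and the absence of batch-norm statistics handling are simplifying assumptions, and the paper's reduced-complexity CNN and preprocessing are not reproduced.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def local_update(model, loader, epochs=1, lr=1e-2):
    """One round of local training on a single vehicle's own LIDAR data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for lidar, beam in loader:
            opt.zero_grad()
            loss_fn(local(lidar), beam).backward()
            opt.step()
    return local.state_dict()

def federated_round(global_model, vehicle_loaders):
    """FedAvg: average the locally trained weights into the shared model.
    Assumes every entry in the state dict is a floating-point parameter."""
    states = [local_update(global_model, dl) for dl in vehicle_loaders]
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Toy usage: 3 vehicles, flattened LIDAR features, a 16-beam codebook.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))
loaders = [DataLoader(TensorDataset(torch.randn(32, 128),
                                    torch.randint(0, 16, (32,))),
                      batch_size=8) for _ in range(3)]
model = federated_round(model, loaders)
```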
Deep Learning at the Physical Layer: System Challenges and Applications to 5G and Beyond
The unprecedented requirements of the Internet of Things (IoT) have made
fine-grained optimization of spectrum resources an urgent necessity. Thus,
designing techniques able to extract knowledge from the spectrum in real time
and select the optimal spectrum access strategy accordingly has become more
important than ever. Moreover, 5G and beyond (5GB) networks will require
complex management schemes to deal with problems such as adaptive beam
management and rate selection. Although deep learning (DL) has been successful
in modeling complex phenomena, commercially-available wireless devices are
still very far from actually adopting learning-based techniques to optimize
their spectrum usage. In this paper, we first discuss the need for real-time DL
at the physical layer, and then summarize the current state of the art and
existing limitations. We conclude the paper by discussing an agenda of research
challenges and how DL can be applied to address crucial problems in 5GB
networks.
Comment: Accepted for publication in IEEE Communications Magazine
Terahertz Communications and Sensing for 6G and Beyond: A Comprehensive View
The next-generation wireless technologies, commonly referred to as the sixth
generation (6G), are envisioned to support extreme communication capacity and,
in particular, disruptive network sensing capabilities. The terahertz (THz)
band is one potential enabler for both, owing to the enormous unused frequency
bands and the high spatial resolution afforded by short wavelengths and large
bandwidths. Different from earlier surveys, this paper presents
a comprehensive treatment and technology survey on THz communications and
sensing in terms of the advantages, applications, propagation characterization,
channel modeling, measurement campaigns, antennas, transceiver devices,
beamforming, networking, the integration of communications and sensing, and
experimental testbeds. Starting from the motivation and use cases, we survey
the development and historical perspective of THz communications and sensing
with the anticipated 6G requirements. We explore the radio propagation, channel
modeling, and measurements for the THz band. We then discuss the transceiver
requirements, architectures, technological challenges, and approaches,
together with means to compensate for the high propagation losses through
appropriate antenna and beamforming solutions. We also survey several system
technologies required by or beneficial for THz systems. The synergistic design
of sensing and communications is explored in depth. Practical trials,
demonstrations, and
experiments are also summarized. The paper gives a holistic view of the current
state of the art and highlights the issues and challenges that are open for
further research towards 6G.
Comment: 55 pages, 10 figures, 8 tables, submitted to IEEE Communications
Surveys & Tutorials
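The scale of the propagation-loss problem that the surveyed antenna and beamforming solutions must address can be seen from free-space path loss alone; the short Python sketch below ignores molecular absorption, which further penalizes THz links, and uses illustrative distance and frequency values.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Moving from 3 GHz to 300 GHz at 100 m adds roughly 40 dB of free-space loss,
# which high-gain antenna arrays and beamforming must recover.
print(round(fspl_db(100, 3e9), 1))    # ~82.0 dB
print(round(fspl_db(100, 300e9), 1))  # ~122.0 dB
```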
Deep Learning and Gaussian Process based Band Assignment in Dual Band Systems
We consider the band assignment (BA) problem in dual-band systems, where the
base station (BS) chooses one of the two available frequency bands
(centimeter-wave and millimeter-wave bands) to communicate with the user
equipment (UE). While the millimeter-wave band might offer higher data rate,
there is a significant probability of outage during which the communication
should be carried on the (more reliable) centimeter-wave band. We consider two
variations of the BA problem: one-shot and sequential BA. For the former, the
BS uses only the currently observed information to decide whether to switch to
the other frequency band; for the latter, the BS uses a window of previously
observed information to predict the best band at a future time step. We
provide two approaches to solve the BA problem, (i) a deep learning approach
that is based on long short-term memory (LSTM) and/or multi-layer neural networks, and
(ii) a Gaussian Process based approach, which relies on the assumption that the
channel states are jointly Gaussian. We compare the achieved performances to
several benchmarks in two environments: (i) a stochastic environment, and (ii)
microcellular outdoor channels obtained by ray-tracing. In general, the deep
learning solution shows superior performance in both environments.
Comment: 30 pages, 6 figures, 8 tables
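A minimal sketch of the sequential-BA idea as a learning problem follows; the per-step observation features, window length, and LSTM sizes are illustrative assumptions, not the paper's exact deep learning or Gaussian process models.

```python
import torch
import torch.nn as nn

class SequentialBandAssigner(nn.Module):
    """Toy sketch of sequential band assignment: an LSTM looks at a window
    of past channel observations from both bands and predicts which band
    (0 = cmWave, 1 = mmWave) will be better at a future time step."""
    def __init__(self, obs_dim=4, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)

    def forward(self, obs_window):               # (batch, seq, obs_dim)
        h, _ = self.lstm(obs_window)
        return self.head(h[:, -1])                # logits over the two bands

model = SequentialBandAssigner()
band = model(torch.randn(4, 16, 4)).argmax(dim=-1)
```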