User Association and Load Balancing for Massive MIMO through Deep Learning
This work investigates the use of deep learning to perform user-cell
association for sum-rate maximization in Massive MIMO networks. It is shown how
a deep neural network can be trained to approach the optimal association rule
at much lower computational complexity, making it possible to update the
association rule in real time based on the mobility patterns of users.
In particular, the proposed neural network requires only the
users' geographical positions as input. Numerical results show that it matches
the performance of traditional optimization-based methods.
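The idea above can be illustrated with a minimal sketch: a classifier that maps user positions directly to a serving cell, trained to imitate an "optimal" association rule. The cell locations, the nearest-cell labeling rule (a stand-in for the paper's sum-rate-optimal rule), and the simple softmax model are all illustrative assumptions, not the authors' actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
cells = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # hypothetical cell sites

def reference_association(pos):
    # Stand-in "optimal" rule: associate each user with the nearest cell.
    # In the paper this label would come from a sum-rate maximization solver.
    d = np.linalg.norm(pos[:, None, :] - cells[None, :, :], axis=2)
    return d.argmin(axis=1)

X = rng.uniform(-2.0, 12.0, size=(2000, 2))  # user positions: the only input
y = reference_association(X)

# One-layer softmax classifier trained by full-batch gradient descent;
# the learned rule is then evaluated in a single cheap forward pass.
W, b = np.zeros((2, 3)), np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = (p - onehot) / len(X)           # cross-entropy gradient
    W -= 0.5 * (X.T @ g)
    b -= 0.5 * g.sum(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()  # agreement with the rule
```

Once trained, inference is a single matrix multiply per user, which is what makes real-time updates of the association feasible as users move.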
Seeing the Unobservable: Channel Learning for Wireless Communication Networks
Wireless communication networks rely heavily on channel state information
(CSI) to make informed decisions for signal processing and network operations.
However, traditional CSI acquisition methods face many difficulties:
pilot-aided channel training consumes a great deal of channel resources and
reduces the opportunities for energy saving, while location-aided channel
estimation suffers from inaccurate and insufficient location information. In
this paper, we propose a novel channel learning framework that tackles
these difficulties by inferring unobservable CSI from observable CSI. We
formulate this framework theoretically and illustrate a special case in which
the learnability of the unobservable CSI can be guaranteed. Possible
applications of channel learning are then described, including cell selection
in multi-tier networks, device discovery for device-to-device (D2D)
communications, and end-to-end user association for load balancing. We
also propose a neural-network-based algorithm for the cell selection problem in
multi-tier networks and evaluate its performance using a
geometry-based stochastic channel model (GSCM). In settings with 5 small cells,
the average cell-selection accuracy is 73%, only a 3.9% loss compared with a
location-aided algorithm that requires genuine location information.
Comment: 6 pages, 4 figures, accepted by GlobeCom'1
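The core intuition, that CSI toward different cells is statistically coupled through the user's shared propagation environment, can be sketched with a toy regressor. The jointly Gaussian log-gain model and the ridge fit below are illustrative assumptions; the paper's actual evaluation uses a GSCM and a neural network rather than this linear model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5000, 4   # n samples; k observable links, 1 unobservable link

# Assumed correlation structure: log-scale channel gains toward all k+1
# cells are jointly Gaussian and positively correlated (shared shadowing).
cov = 0.7 * np.ones((k + 1, k + 1)) + 0.3 * np.eye(k + 1)
gains = rng.multivariate_normal(np.zeros(k + 1), cov, size=n)
X, y = gains[:, :k], gains[:, k]     # observable CSI -> unobservable CSI

Xtr, ytr = X[:4000], y[:4000]
Xte, yte = X[4000:], y[4000:]

# Ridge regression: learn the mapping from observable to unobservable CSI.
lam = 1e-2
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(k), Xtr.T @ ytr)

mse_model = np.mean((Xte @ w - yte) ** 2)        # inferred from observables
mse_blind = np.mean((yte - ytr.mean()) ** 2)     # ignore observables entirely
```

Under this correlated model the learned predictor recovers a large fraction of the hidden channel's variance without any pilots or location information on that link, which is the "seeing the unobservable" effect the abstract describes.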
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support
a broad range of complex, compelling applications in both military and civilian
fields, where users can enjoy high-rate, low-latency, low-cost, and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making, because of the
complex, heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting
big-data analytics, efficient parameter estimation, and interactive decision
making. Hence, in this article, we review the thirty-year history of ML,
elaborating on supervised learning, unsupervised learning, reinforcement
learning, and deep learning. Furthermore, we investigate their employment in the
compelling applications of wireless networks, including heterogeneous networks
(HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine
(M2M) networks, and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.
Comment: 46 pages, 22 fig