
    A Deep Learning Framework for Optimization of MISO Downlink Beamforming

    Beamforming is an effective means to improve the quality of the received signals in multiuser multiple-input single-output (MISO) systems. Traditionally, finding the optimal beamforming solution relies on iterative algorithms, which introduce high computational delay and are thus not suitable for real-time implementation. In this paper, we propose a deep learning framework for the optimization of downlink beamforming. In particular, the solution is obtained based on convolutional neural networks and the exploitation of expert knowledge, such as the uplink-downlink duality and the known structure of optimal solutions. Using this framework, we construct three beamforming neural networks (BNNs) for three typical optimization problems: the signal-to-interference-plus-noise ratio (SINR) balancing problem, the power minimization problem, and the sum rate maximization problem. For the first two problems the BNNs adopt a supervised learning approach, while for the sum rate maximization problem a hybrid of supervised and unsupervised learning is employed. Simulation results show that the BNNs achieve near-optimal solutions to the SINR balancing and power minimization problems, and performance close to that of the weighted minimum mean squared error algorithm for the sum rate maximization problem, while in all cases enjoying significantly reduced computational complexity. In summary, this work paves the way for fast realization of optimal beamforming in multiuser MISO systems.
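    As a concrete illustration of this design, the sketch below (ours, not the authors' released code; all layer sizes and the PyTorch framing are assumptions) follows the described recipe: a small CNN maps the channel matrix to nonnegative power variables, and the beamformer directions are then recovered through the known optimal structure w_k ∝ (I + Σ_i q_i h_i h_i^H)^{-1} h_k implied by uplink-downlink duality.

```python
# A minimal sketch of a supervised BNN, assuming illustrative layer sizes;
# the CNN predicts dual power variables q and the beamformers are rebuilt
# from the known optimal solution structure.
import torch
import torch.nn as nn

K, N = 4, 8  # users and BS antennas (illustrative sizes)

class BeamformingNN(nn.Module):
    def __init__(self, n_users=K, n_antennas=N):
        super().__init__()
        self.features = nn.Sequential(  # real/imag parts as two input channels
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(16 * n_users * n_antennas, 64), nn.ReLU(),
            nn.Linear(64, n_users), nn.Softplus(),  # nonnegative powers q
        )

    def forward(self, h):
        # h: (batch, K, N) complex channel matrix
        x = torch.stack([h.real, h.imag], dim=1)  # (batch, 2, K, N)
        return self.head(self.features(x))        # (batch, K) powers q

def beamformer_directions(h, q):
    # w_k ∝ (I + sum_i q_i h_i h_i^H)^{-1} h_k, from uplink-downlink duality.
    eye = torch.eye(h.shape[-1], dtype=h.dtype)
    cov = eye + torch.einsum('bk,bki,bkj->bij', q.to(h.dtype), h, h.conj())
    w = torch.linalg.solve(cov, h.mT).mT      # solve for all K users at once
    return w / w.norm(dim=-1, keepdim=True)   # unit-norm beam directions

model = BeamformingNN()
h = torch.randn(2, K, N, dtype=torch.complex64) / N ** 0.5
w = beamformer_directions(h, model(h))        # (2, K, N) beamformers
```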

    Multi-Agent Double Deep Q-Learning for Beamforming in mmWave MIMO Networks

    Beamforming is one of the key techniques in millimeter wave (mmWave) multi-input multi-output (MIMO) communications. An appropriate beamforming design not only improves the quality and strength of the received signal but also helps reduce interference, thereby enhancing the data rate. In this paper, we propose a distributed multi-agent double deep Q-learning algorithm for beamforming in mmWave MIMO networks, where multiple base stations (BSs) can automatically and dynamically adjust their beams to serve multiple highly mobile user equipments (UEs). In the analysis, a largest-received-power association criterion is adopted for the UEs, and a realistic channel model is taken into account. Simulation results demonstrate that the proposed learning-based algorithm achieves performance comparable to exhaustive search while operating at much lower complexity.
    Comment: To be published in IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC) 202
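    For reference, the core of such an agent is the double-DQN target computation. Below is a minimal sketch (the state encoding, codebook size, and network widths are our assumptions, not the paper's settings) in which each BS agent scores a discrete beam codebook, the online network selecting the next beam and the target network evaluating it.

```python
# A minimal sketch of the double deep Q-learning update each BS agent could
# run for discrete beam selection. State encoding, codebook size, and
# network widths are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

STATE_DIM, N_BEAMS = 16, 32  # assumed state size and beam-codebook size

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                         nn.Linear(128, N_BEAMS))

q_online, q_target = make_qnet(), make_qnet()
q_target.load_state_dict(q_online.state_dict())
optimizer = torch.optim.Adam(q_online.parameters(), lr=1e-3)

def double_dqn_loss(s, a, r, s_next, done, gamma=0.99):
    # Double DQN: the online net picks the next beam, the target net
    # evaluates it; this decoupling curbs Q-value overestimation.
    q_sa = q_online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_star = q_online(s_next).argmax(dim=1, keepdim=True)
        q_next = q_target(s_next).gather(1, a_star).squeeze(1)
        target = r + gamma * (1.0 - done) * q_next
    return nn.functional.mse_loss(q_sa, target)

# One gradient step on a sampled batch (s, a, r, s_next, done):
# optimizer.zero_grad(); double_dqn_loss(...).backward(); optimizer.step()
```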

    Neural-Network Optimized 1-bit Precoding for Massive MU-MIMO

    Base station (BS) architectures for massive multi-user (MU) multiple-input multiple-output (MIMO) wireless systems are equipped with hundreds of antennas to serve tens of users on the same time-frequency channel. The immense number of BS antennas incurs high system costs, power, and interconnect bandwidth. To circumvent these obstacles, sophisticated MU precoding algorithms that enable the use of 1-bit DACs have been proposed. Many of these precoders feature parameters that are, traditionally, tuned manually to optimize their performance. We propose to use deep-learning tools to automatically tune such 1-bit precoders. Specifically, we optimize the biConvex 1-bit PrecOding (C2PO) algorithm using neural networks. Compared to the original C2PO algorithm, our neural-network optimized (NNO-)C2PO achieves the same error-rate performance at 2× lower complexity. Moreover, by training NNO-C2PO for different channel models, we show that 1-bit precoding can be made robust to vastly changing propagation conditions.
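    The tuning idea generalizes beyond C2PO: unroll a fixed number of iterations, expose each iteration's step size as a trainable parameter, and train end to end. The sketch below shows only that generic deep-unfolding pattern with a plain projected-gradient update; it is not the actual C2PO recursion, which is defined in the paper.

```python
# Generic deep-unfolding sketch: per-iteration step sizes become trainable
# parameters. The update below is a plain gradient step with a soft 1-bit
# projection, standing in for (NOT reproducing) the C2PO recursion.
import torch
import torch.nn as nn

class UnfoldedOneBitPrecoder(nn.Module):
    def __init__(self, n_iters=8):
        super().__init__()
        # One step size per unrolled iteration, tuned by training rather
        # than by hand -- the point of neural-network optimization.
        self.tau = nn.Parameter(0.1 * torch.ones(n_iters))

    def forward(self, H, s):
        # H: (B, U, A) real channel, s: (B, U) target symbols (real-valued
        # placeholders; a full version would carry complex quantities).
        x = torch.zeros(H.shape[0], H.shape[2])
        for tau in self.tau:
            r = torch.einsum('bua,ba->bu', H, x) - s  # residual Hx - s
            g = torch.einsum('bua,bu->ba', H, r)      # gradient direction H^T r
            x = torch.tanh(2.0 * (x - tau * g))       # soft projection to [-1, 1]
        return x  # train on the soft output; apply torch.sign at inference
```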

    RSSI-Based Hybrid Beamforming Design with Deep Learning

    Hybrid beamforming is a promising technology for 5G millimetre-wave communications. However, its implementation is challenging in practical multiple-input multiple-output (MIMO) systems because non-convex optimization problems have to be solved, introducing additional latency and energy consumption. In addition, the channel state information (CSI) must be either estimated from pilot signals or fed back through dedicated channels, introducing a large signaling overhead. In this paper, a hybrid precoder is designed based only on received signal strength indicator (RSSI) feedback from each user. A deep learning method is proposed to perform the associated optimization with reasonable complexity. Results demonstrate that the obtained sum rates are very close to those obtained with optimal but complex full-CSI solutions. Finally, the proposed solution greatly increases the spectral efficiency of the system compared with existing techniques, as only minimal CSI feedback is required.
    Comment: Published in IEEE-ICC202
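    The learned mapping can be pictured as follows; this is a minimal sketch with assumed dimensions and layer widths, where a plain DNN turns the per-user RSSI feedback vector into an analog phase-shifter matrix (unit modulus by construction) and a small digital precoder.

```python
# A minimal sketch of the mapping the paper learns: from RSSI feedback
# alone to a hybrid precoder. All dimensions and widths are assumptions.
import torch
import torch.nn as nn

N_USERS, N_RF, N_ANT = 4, 4, 64  # assumed users, RF chains, antennas

class RSSIHybridPrecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(N_USERS, 256), nn.ReLU(),
                                   nn.Linear(256, 256), nn.ReLU())
        self.phase_head = nn.Linear(256, N_ANT * N_RF)           # analog part
        self.digital_head = nn.Linear(256, 2 * N_RF * N_USERS)   # re + im

    def forward(self, rssi):
        # rssi: (batch, N_USERS) received signal strength per user
        z = self.trunk(rssi)
        # Analog precoder: predict phases, so the unit-modulus constraint
        # of the phase-shifter network holds by construction.
        theta = self.phase_head(z).view(-1, N_ANT, N_RF)
        f_rf = torch.exp(1j * theta)
        # Digital precoder: unconstrained complex matrix.
        d = self.digital_head(z).view(-1, 2, N_RF, N_USERS)
        f_bb = torch.complex(d[:, 0], d[:, 1])
        return f_rf, f_bb
```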

    Model-Driven Beamforming Neural Networks

    Beamforming is evidently a core technology in recent generations of mobile communication networks. Nevertheless, an iterative process is typically required to optimize its parameters, making it ill-suited for real-time implementation due to high complexity and computational delay. Heuristic solutions such as zero-forcing (ZF) are simpler, but at the expense of performance loss. Alternatively, deep learning (DL) is well understood to be a generalizing technique that can deliver promising results for a wide range of applications at much lower complexity, provided it is sufficiently trained. As a consequence, DL presents itself as an attractive solution to beamforming. To exploit DL, this article introduces general data- and model-driven beamforming neural networks (BNNs), presents various possible learning strategies, and discusses complexity reduction for the DL-based BNNs. We also offer enhancement methods such as training-set augmentation and transfer learning to improve the generality of BNNs, accompanied by computer simulation and testbed results showing the performance of such BNN solutions.
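    For reference, the zero-forcing heuristic mentioned above admits a closed form, W = H^H (H H^H)^{-1} with normalized columns; here is a minimal NumPy version, with an equal per-user power split as a simplifying assumption.

```python
# Zero-forcing (ZF) downlink beamforming: invert the channel so each
# user sees no inter-user interference, then normalize and split power.
import numpy as np

def zf_beamformer(H, total_power=1.0):
    """H: (K, N) complex downlink channel, K users, N antennas (N >= K)."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # pseudo-inverse directions
    W = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm columns
    return np.sqrt(total_power / H.shape[0]) * W      # equal power split

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
W = zf_beamformer(H)
print(np.round(np.abs(H @ W), 3))  # approximately diagonal: no interference
```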

    6G White Paper on Machine Learning in Wireless Communication Networks

    The focus of this white paper is on machine learning (ML) in wireless communications. 6G wireless communication networks will be the backbone of the digital transformation of societies by providing ubiquitous, reliable, and near-instant wireless connectivity for humans and machines. Recent advances in ML research have enabled a wide range of novel technologies such as self-driving vehicles and voice assistants. Such innovation is possible as a result of the availability of advanced ML models, large datasets, and high computational power. On the other hand, the ever-increasing demand for connectivity will require substantial innovation in 6G wireless networks, and ML tools will play a major role in solving problems in the wireless domain. In this paper, we provide an overview of how ML is envisioned to impact wireless communication systems. We first give an overview of the ML methods that have the highest potential to be used in wireless networks. Then, we discuss the problems that can be solved by using ML in various layers of the network, such as the physical layer, medium access layer, and application layer. Zero-touch optimization of wireless networks using ML is another interesting aspect that is discussed in this paper. Finally, at the end of each section, important research questions that the section aims to answer are presented.

    Machine Learning-Enabled Joint Antenna Selection and Precoding Design: From Offline Complexity to Online Performance

    We investigate the performance of multi-user multiple-antenna downlink systems in which a base station (BS) serves multiple users via a shared wireless medium. In order to fully exploit the spatial diversity while minimizing the passive energy consumed by radio frequency (RF) components, the BS is equipped with M RF chains and N antennas, where M < N. After estimating the channel state information from received pilot sequences, the BS determines the best subset of M antennas for serving the users. We propose a joint antenna selection and precoding design (JASPD) algorithm to maximize the system sum rate subject to a transmit power constraint and quality-of-service (QoS) requirements. The JASPD overcomes the non-convexity of the formulated problem via a doubly iterative algorithm, in which an inner loop successively optimizes the precoding vectors, followed by an outer loop that tries all valid antenna subsets. Although it approaches (near-)global optimality, the JASPD suffers from a combinatorial complexity, which may limit its application in real-time network operations. To overcome this limitation, we propose a learning-based antenna selection and precoding design algorithm (L-ASPD), which employs a deep neural network (DNN) to establish underlying relations between the key system parameters and the selected antennas. The proposed L-ASPD is robust against changes in the number of users and their locations, the BS's transmit power, and the small-scale channel fading. With a well-trained learning model, it is shown that the L-ASPD significantly outperforms baseline schemes based on block diagonalization and a learning-assisted solution for broadcasting systems, and achieves a higher effective sum rate than the JASPD under limited processing time. In addition, we observe that the proposed L-ASPD can reduce the computation complexity by 95% while retaining more than 95% of the optimal performance.
    Comment: accepted to the IEEE Transactions on Wireless Communication
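    The combinatorial cost of the exact outer loop is easy to see in code. The sketch below enumerates all C(N, M) antenna subsets; a simple ZF sum-rate proxy stands in for the paper's iterative inner optimizer, an assumption made here for brevity.

```python
# Exhaustive antenna-subset search: every one of the C(N, M) subsets
# triggers an inner precoding solve, which is what makes the exact
# approach combinatorial and motivates the learned L-ASPD shortcut.
import itertools
import numpy as np

def zf_sum_rate(H, snr=10.0):
    # Simple stand-in for the inner optimization: ZF rates at equal power.
    K = H.shape[0]
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    W /= np.linalg.norm(W, axis=0, keepdims=True)
    gains = np.abs(np.diag(H @ W)) ** 2
    return np.sum(np.log2(1 + (snr / K) * gains))

def exhaustive_antenna_selection(H_full, M):
    # H_full: (K, N) channel over all N antennas; pick the best M columns.
    best_rate, best_subset = -np.inf, None
    for subset in itertools.combinations(range(H_full.shape[1]), M):
        rate = zf_sum_rate(H_full[:, subset])
        if rate > best_rate:
            best_rate, best_subset = rate, subset
    return best_subset, best_rate  # cost: C(N, M) inner solves

rng = np.random.default_rng(1)
H_full = (rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))) / np.sqrt(2)
print(exhaustive_antenna_selection(H_full, M=4))
```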