
    Near-Field Channel Estimation for Extremely Large-Scale Array Communications: A model-based deep learning approach

    Extremely large-scale massive MIMO (XL-MIMO) is regarded as a promising technology for future wireless communications. The deployment of XL-MIMO, especially at high-frequency bands, places users in the near-field region rather than the conventional far-field. This letter proposes efficient model-based deep learning algorithms for estimating the near-field wireless channel of XL-MIMO communications. In particular, we first formulate the XL-MIMO near-field channel estimation task as a compressed sensing problem using a spatial-gridding-based sparsifying dictionary, and then solve the resulting problem by applying the Learning Iterative Shrinkage and Thresholding Algorithm (LISTA). Due to the near-field characteristics, the spatial-gridding-based sparsifying dictionary may result in low channel estimation accuracy and a heavy computational burden. To address this issue, we further propose a new sparsifying dictionary learning-LISTA (SDL-LISTA) algorithm that formulates the sparsifying dictionary as a neural network layer and embeds it into the LISTA network. The numerical results show that the proposed algorithms outperform non-learning benchmark schemes, and that SDL-LISTA achieves better performance than LISTA with a tenfold reduction in the number of dictionary atoms. Comment: 4 pages, 5 figures
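    A minimal LISTA sketch is given below to illustrate the unfolded estimator this abstract describes. It is not the authors' code: the PyTorch framework, real-valued signals, layer count, and dimensions are all assumptions, and the sparsifying dictionary is folded into the learned linear layers rather than trained as a separate SDL layer.

```python
# Hypothetical LISTA sketch for sparse channel estimation (illustrative only).
# Assumed measurement model: y = A h + n, with h sparse on a gridded dictionary.
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    # Elementwise shrinkage: sign(x) * max(|x| - theta, 0).
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

class LISTA(nn.Module):
    def __init__(self, m, n, num_layers=10):
        super().__init__()
        self.num_layers = num_layers
        # Learned analogues of the classical ISTA matrices A^T/L and (I - A^T A / L).
        self.W = nn.ModuleList([nn.Linear(m, n, bias=False) for _ in range(num_layers)])
        self.S = nn.ModuleList([nn.Linear(n, n, bias=False) for _ in range(num_layers)])
        self.theta = nn.Parameter(0.1 * torch.ones(num_layers))  # per-layer thresholds

    def forward(self, y):
        x = soft_threshold(self.W[0](y), self.theta[0])
        for k in range(1, self.num_layers):
            x = soft_threshold(self.W[k](y) + self.S[k](x), self.theta[k])
        return x

# Illustrative usage: m pilot measurements, n dictionary atoms (sizes made up).
model = LISTA(m=64, n=256)
y = torch.randn(32, 64)       # batch of received pilot measurements
h_hat = model(y)              # estimated sparse channel coefficients
```

    Training such a network would typically minimize the mean-squared error between the estimated and ground-truth sparse coefficients over simulated channel realizations.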

    Deep Unfolded Simulated Bifurcation for Massive MIMO Signal Detection

    Multiple-input multiple-output (MIMO) is a key ingredient of next-generation wireless communications. Recently, various MIMO signal detectors based on deep learning techniques and quantum(-inspired) algorithms have been proposed to improve detection performance compared with conventional detectors. This paper focuses on the simulated bifurcation (SB) algorithm, a quantum-inspired algorithm, and proposes two techniques to improve its detection performance. The first modifies the algorithm, inspired by the Levenberg-Marquardt algorithm, to eliminate local minima of maximum-likelihood detection. The second uses deep unfolding, a deep learning technique for training the internal parameters of an iterative algorithm; we propose a deep-unfolded SB by making the SB update rule differentiable. The numerical results show that the proposed detectors significantly improve signal detection performance in massive MIMO systems. Comment: 5 pages, 4 figures
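    The sketch below illustrates the deep-unfolding idea applied to SB dynamics: per-iteration step sizes and coupling strengths become trainable parameters. It is an assumption-laden illustration rather than the paper's exact update rule; the pumping schedule, the choice of trainable parameters, and the Ising recasting of the detection problem are generic choices.

```python
# Hypothetical deep-unfolded SB sketch (illustrative; not the paper's exact rule).
# The MIMO detection problem is assumed to be recast as an Ising problem with
# coupling matrix J and field h.
import torch
import torch.nn as nn

class DeepUnfoldedSB(nn.Module):
    def __init__(self, num_iters=30):
        super().__init__()
        self.num_iters = num_iters
        # Deep unfolding: per-iteration step sizes and coupling strengths are trainable.
        self.dt = nn.Parameter(0.5 * torch.ones(num_iters))
        self.c0 = nn.Parameter(0.5 * torch.ones(num_iters))

    def forward(self, J, h):
        N = h.shape[0]
        x = 0.01 * torch.randn(N)   # oscillator positions
        y = torch.zeros(N)          # oscillator momenta
        for k in range(self.num_iters):
            a = k / self.num_iters  # pumping amplitude ramped from 0 toward 1
            y = y + self.dt[k] * (-(1.0 - a) * x + self.c0[k] * (J @ x + h))
            x = x + self.dt[k] * y
            # Inelastic walls keep positions in [-1, 1] and reset momenta at the wall.
            wall = torch.abs(x) > 1.0
            x = torch.where(wall, torch.sign(x), x)
            y = torch.where(wall, torch.zeros_like(y), y)
        return x                    # take torch.sign(x) for hard symbol decisions

# Illustrative usage with a random symmetric Ising instance (sizes made up).
N = 8
J = torch.randn(N, N); J = 0.5 * (J + J.T); J.fill_diagonal_(0.0)
h = torch.randn(N)
x_soft = DeepUnfoldedSB()(J, h)
```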

    Deep Learning for Physical-Layer 5G Wireless Techniques: Opportunities, Challenges and Solutions

    The new demands for high-reliability and ultra-high-capacity wireless communication have led to extensive research into 5G communications. However, current communication systems, which were designed on the basis of conventional communication theories, significantly restrict further performance improvements and impose severe limitations. Recently, the emerging deep learning techniques have been recognized as a promising tool for handling complicated communication systems, and their potential for optimizing wireless communications has been demonstrated. In this article, we first review the development of deep learning solutions for 5G communication and then propose efficient schemes for deep learning-based 5G scenarios. Specifically, the key ideas behind several important deep learning-based communication methods are presented along with the research opportunities and challenges. In particular, novel communication frameworks for non-orthogonal multiple access (NOMA), massive multiple-input multiple-output (MIMO), and millimeter wave (mmWave) are investigated, and their superior performance is demonstrated. We envision that these appealing deep learning-based wireless physical-layer frameworks will bring a new direction to communication theories and that this work will move us forward along this road. Comment: Submitted as a possible publication to IEEE Wireless Communications Magazine

    Multi-Agent Double Deep Q-Learning for Beamforming in mmWave MIMO Networks

    Beamforming is one of the key techniques in millimeter wave (mmWave) multi-input multi-output (MIMO) communications. An appropriate beamforming design not only improves the quality and strength of the received signal but also helps reduce interference, thereby enhancing the data rate. In this paper, we propose a distributed multi-agent double deep Q-learning algorithm for beamforming in mmWave MIMO networks, where multiple base stations (BSs) automatically and dynamically adjust their beams to serve multiple highly mobile user equipments (UEs). In the analysis, a largest-received-power association criterion is considered for the UEs, and a realistic channel model is taken into account. Simulation results demonstrate that the proposed learning-based algorithm achieves performance comparable to exhaustive search while operating at much lower complexity. Comment: To be published in IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC) 202
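    The sketch below shows the core double deep Q-learning target that such a beam-selection agent could use: the online network picks the next beam and the target network scores it. It is not the paper's implementation; the network architecture, state and reward definitions, and all hyperparameters are illustrative assumptions, and only a single BS agent is shown.

```python
# Hypothetical double-DQN target computation for beam selection (illustrative only).
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim, num_beams):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_beams),   # one Q-value per beam in the codebook
        )

    def forward(self, s):
        return self.net(s)

def double_dqn_target(online, target, r, s_next, gamma=0.99):
    # Double DQN: the online net selects the best next beam, the target net evaluates it.
    with torch.no_grad():
        best_beam = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, best_beam).squeeze(1)
    return r + gamma * q_next

# Illustrative usage for one BS agent (state/reward definitions are made up,
# e.g. state = reported UE measurements, reward = achieved sum rate).
state_dim, num_beams = 16, 32
online, target = QNet(state_dim, num_beams), QNet(state_dim, num_beams)
r = torch.ones(8)
s_next = torch.randn(8, state_dim)
td_target = double_dqn_target(online, target, r, s_next)
```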