
    Jointly Sparse Support Recovery via Deep Auto-encoder with Applications in MIMO-based Grant-Free Random Access for mMTC

    In this paper, a data-driven approach is proposed to jointly design the common sensing (measurement) matrix and the jointly sparse support recovery method for complex signals, using a standard deep auto-encoder for real numbers. The auto-encoder in the proposed approach includes an encoder that mimics the noisy linear measurement process for jointly sparse signals with a common sensing matrix, and a decoder that approximately performs jointly sparse support recovery based on the empirical covariance matrix of the noisy linear measurements. The proposed approach can effectively exploit the common support and the properties of the sparsity patterns to achieve high recovery accuracy, and has significantly shorter computation time than existing methods. We also study an application example, i.e., device activity detection in Multiple-Input Multiple-Output (MIMO)-based grant-free random access for massive machine-type communications (mMTC). The numerical results show that the proposed approach can provide pilot sequences and device activity detection with better detection accuracy and substantially shorter computation time than well-known recovery methods.
    Comment: 5 pages, 8 figures, to be published in IEEE SPAWC 2020. arXiv admin note: text overlap with arXiv:2002.0262
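
    The architecture described above can be summarised in a short sketch. The following is a minimal, illustrative PyTorch model, not the authors' code: the encoder applies a trainable common sensing matrix to jointly sparse signals and adds noise, and the decoder maps the empirical covariance of the measurements to per-entry support probabilities. The dimensions, layer sizes, real-valued signals, and the class name JointSupportAutoEncoder are all assumptions for illustration.

```python
# Minimal sketch (assumptions, not the paper's code) of a common-sensing-matrix
# encoder plus a covariance-based support-recovery decoder.
import torch
import torch.nn as nn

class JointSupportAutoEncoder(nn.Module):
    def __init__(self, n=100, l=30, hidden=256):
        super().__init__()
        # Trainable common sensing matrix A (L x N), shared by all measurement vectors.
        self.A = nn.Parameter(torch.randn(l, n) / l**0.5)
        # Decoder: maps the vectorized empirical covariance (L*L) to N support scores.
        self.decoder = nn.Sequential(
            nn.Linear(l * l, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n),
        )

    def forward(self, x, noise_std=0.1):
        # x: (batch, N, M) jointly sparse signals sharing a common support.
        y = torch.einsum("ln,bnm->blm", self.A, x)    # noiseless linear measurements
        y = y + noise_std * torch.randn_like(y)       # noisy measurement process
        m = y.shape[-1]
        cov = torch.einsum("blm,bkm->blk", y, y) / m  # empirical covariance (L x L)
        logits = self.decoder(cov.flatten(1))         # per-entry support scores
        return torch.sigmoid(logits)                  # estimated support probabilities
```

    Training such a model end to end with a binary cross-entropy loss against the true support indicators is one natural choice; the paper's exact loss and its handling of complex-valued signals may differ.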

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, in which users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks.
    Comment: 46 pages, 22 fig

    Signal Processing and Learning for Next Generation Multiple Access in 6G

    Wireless communication systems to date primarily rely on the orthogonality of resources to facilitate design and implementation, from user access to data transmission. Emerging applications and scenarios in sixth-generation (6G) wireless systems will require massive connectivity and the transmission of a deluge of data, which calls for more flexibility in design concepts that go beyond orthogonality. Furthermore, recent advances in signal processing and learning have attracted considerable attention, as they provide promising approaches to various complex and previously intractable signal processing problems in many fields. This article provides an overview of research efforts to date in the field of signal processing and learning for next-generation multiple access (NGMA), with an emphasis on massive random access and non-orthogonal multiple access. The promising interplay with new technologies and the challenges in learning-based NGMA are also discussed.
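
    As a toy illustration of the non-orthogonal principle surveyed here, the following numpy sketch superposes two users in the power domain on a single resource and recovers them with successive interference cancellation (SIC) at the stronger receiver. The BPSK symbols, power split, and noise level are illustrative assumptions, not values from the article.

```python
# Minimal power-domain NOMA sketch: superposition coding plus SIC at the near user.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
s1 = rng.choice([-1.0, 1.0], n)          # far (weak-channel) user, allocated more power
s2 = rng.choice([-1.0, 1.0], n)          # near (strong-channel) user, allocated less power
p1, p2 = 0.8, 0.2                        # power split (p1 + p2 = 1)
x = np.sqrt(p1) * s1 + np.sqrt(p2) * s2  # superposed downlink signal

y = x + 0.1 * rng.standard_normal(n)     # near user's received signal (good channel)

s1_hat = np.sign(y)                      # decode the high-power user's symbols first,
residual = y - np.sqrt(p1) * s1_hat      # cancel them (SIC), then decode own data
s2_hat = np.sign(residual)

print("user-1 BER at near user:", np.mean(s1_hat != s1))
print("user-2 BER after SIC:   ", np.mean(s2_hat != s2))
```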

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating, connecting things to one another as well as to humans. In the era of the Internet of things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from these data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey of existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, and non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, as well as communication networks over the unlicensed spectrum, are presented. Aligned with the main key performance indicators of 5G and beyond-5G networks, we investigate solutions and standards that enable the energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies, such as artificial intelligence, non-terrestrial networks, and new spectra, is elaborated. Finally, future research directions toward beyond-5G IoT networks are pointed out.
    Comment: Submitted for review to IEEE CS&

    Channel Detection and Decoding With Deep Learning

    In this thesis, we investigate the design of pragmatic data detectors and channel decoders with the assistance of deep learning. We focus on three emerging and fundamental research problems: the design of message passing algorithms for data detection in faster-than-Nyquist (FTN) signalling, soft-decision decoding algorithms for high-density parity-check codes, and user identification for massive machine-type communications (mMTC). These wireless communication research problems are addressed by means of deep learning, and an outline of the main contributions is given below.

    In the first part, we study a deep learning-assisted sum-product detection algorithm for FTN signalling. The proposed data detection algorithm works on a modified factor graph which concatenates a neural network function node to the variable nodes of the conventional FTN factor graph, to compensate for any detrimental effects that degrade the detection performance. By investigating the maximum-likelihood bit-error rate performance of a finite-length coded FTN system, we show that the error performance of the proposed algorithm approaches the maximum a posteriori performance, which may not be approachable by employing the sum-product algorithm on the conventional FTN factor graph.

    After investigating the deep learning-assisted message passing algorithm for data detection, we move to the design of an efficient channel decoder. Specifically, we propose a node-classified redundant decoding algorithm for Bose-Chaudhuri-Hocquenghem (BCH) codes based on the channel reliability of the received sequence. Two preprocessing steps are proposed prior to decoding to mitigate unreliable information propagation and to improve the decoding performance. On top of the preprocessing, we propose a list decoding algorithm to augment the decoder’s performance. Moreover, we show that the node-classified redundant decoding algorithm can be transformed into a neural network framework, where multiplicative tuneable weights are attached to the decoding messages to optimise the decoding performance. We show that the node-classified redundant decoding algorithm provides a performance gain over the random redundant decoding algorithm, and that additional gains are obtained by both the list decoding method and the neural network “learned” node-classified redundant decoding algorithm.

    Finally, we consider one of the practical services provided by fifth-generation (5G) wireless communication networks, mMTC. Two separate system models for mMTC are studied: the first assumes that the devices are equipped with low-resolution digital-to-analog converters, and the second assumes that the devices' activities are correlated. In the first system model, two rounds of signal recovery are performed. A neural network is employed to identify a suspicious device that is most likely to be falsely alarmed during the first round of signal recovery, and this device is forced to be inactive in the second round. The proposed scheme can effectively combat the interference caused by the suspicious device and thus improve the user identification performance. In the second system model, two deep learning-assisted algorithms are proposed to exploit the user activity correlation to facilitate channel estimation and user identification. We propose a deep learning-modified orthogonal approximate message passing algorithm to exploit the correlation structure among devices. In addition, we propose a neural network framework dedicated to user identification; more specifically, the neural network aims to minimise the missed detection probability under a pre-determined false alarm probability. The proposed algorithms substantially reduce the mean squared error between the estimate and the unknown sequence, and largely improve the trade-off between the missed detection probability and the false alarm probability compared to the conventional orthogonal approximate message passing algorithm. All three parts of the research demonstrate that deep learning is a powerful technology for the physical layer design of wireless communications.
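
    One recurring ingredient above is the attachment of multiplicative tuneable weights to decoding messages. The sketch below illustrates that idea on a generic weighted min-sum check-node update; this is an assumption chosen for brevity, since the thesis applies the weights within its node-classified redundant decoding algorithm rather than plain min-sum, and the class name and per-edge weight sharing are likewise illustrative.

```python
# Minimal sketch of trainable multiplicative weights on decoder messages,
# shown for a single min-sum check-to-variable update.
import torch
import torch.nn as nn

class WeightedCheckNodeUpdate(nn.Module):
    def __init__(self, num_edges):
        super().__init__()
        # One trainable multiplicative weight per outgoing message.
        self.w = nn.Parameter(torch.ones(num_edges))

    def forward(self, v2c):
        # v2c: (num_edges,) incoming variable-to-check LLR messages of one check node.
        sign = torch.sign(torch.prod(v2c)) / torch.sign(v2c)   # extrinsic signs
        mags = v2c.abs()
        total_min, idx = mags.min(dim=0)                       # smallest magnitude and its edge
        second_min = mags.clone()
        second_min[idx] = float("inf")
        second_min = second_min.min()                          # second-smallest magnitude
        # each edge uses the minimum over the *other* edges
        extrinsic_min = torch.where(torch.arange(len(v2c)) == idx, second_min, total_min)
        return self.w * sign * extrinsic_min                   # weighted check-to-variable messages
```

    In a full decoder, one such update would run per check node and per iteration, with the weights trained end to end against known transmitted codewords.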

    MLE-based Device Activity Detection under Rician Fading for Massive Grant-free Access with Perfect and Imperfect Synchronization

    Most existing studies on massive grant-free access, which has been proposed to support massive machine-type communications (mMTC) for the Internet of things (IoT), assume Rayleigh fading and perfect synchronization for simplicity. In practice, however, line-of-sight (LoS) components generally exist, and time and frequency synchronization are usually imperfect. This paper systematically investigates maximum likelihood estimation (MLE)-based device activity detection under Rician fading for massive grant-free access with perfect and imperfect synchronization. We assume that the large-scale fading powers, Rician factors, and normalized LoS components can be estimated offline. We formulate device activity detection in the synchronous case, and joint device activity and offset detection in three asynchronous cases (i.e., the time, frequency, and time-and-frequency asynchronous cases), as MLE problems. In the synchronous case, we propose an iterative algorithm to obtain a stationary point of the MLE problem. In each asynchronous case, we propose two iterative algorithms with identical detection performance but different computational complexities: one is computationally efficient for small ranges of offsets, whereas the other, relying on the fast Fourier transform (FFT) and inverse FFT, is computationally efficient for large ranges of offsets. The proposed algorithms generalize the existing MLE-based methods for Rayleigh fading and perfect synchronization. Numerical results show that the proposed algorithm for the synchronous case can reduce the detection error probability by up to 50.4%, at a 78.6% increase in computation time, compared to the MLE-based state of the art, and that the proposed algorithms for the three asynchronous cases can reduce the detection error probabilities and computation times by up to 65.8% and 92.0%, respectively, compared to the MLE-based state of the art.
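
    For context, the sketch below shows the standard covariance-based MLE coordinate-descent activity detector for the Rayleigh-fading, perfectly synchronized baseline that this paper generalizes; it is not the paper's Rician or asynchronous algorithms. The function name, iteration count, and dimensions are illustrative assumptions.

```python
# Minimal numpy sketch of coordinate-descent MLE activity detection
# (Rayleigh fading, perfect synchronization baseline).
import numpy as np

def mle_activity_detection(Y, A, sigma2, n_iters=20):
    # Y: (L x M) received pilot signal, A: (L x N) pilot matrix, sigma2: noise power.
    L, M = Y.shape
    N = A.shape[1]
    S_hat = Y @ Y.conj().T / M                 # empirical covariance of received pilots
    gamma = np.zeros(N)                        # relaxed activity/power estimates
    Sigma_inv = np.eye(L) / sigma2             # inverse of current model covariance
    for _ in range(n_iters):
        for n in range(N):
            a = A[:, n:n + 1]
            q = Sigma_inv @ a                  # (L x 1), equals Sigma^{-1} a_n
            aq = (a.conj().T @ q).real.item()  # a_n^H Sigma^{-1} a_n
            num = (q.conj().T @ S_hat @ q).real.item() - aq
            d = max(num / aq**2, -gamma[n])    # closed-form step, keeping gamma >= 0
            if d != 0.0:
                # rank-one (Sherman-Morrison) update of the inverse covariance
                Sigma_inv -= d * (q @ q.conj().T) / (1.0 + d * aq)
                gamma[n] += d
    return gamma
```

    Active devices are then declared by thresholding the returned gamma values; the threshold trades missed detections against false alarms.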