    LEARN Codes: Inventing Low-latency Codes via Recurrent Neural Networks

    Designing channel codes under low-latency constraints is one of the most demanding requirements in 5G standards. However, a sharp characterization of the performance of traditional codes is available only in the large block-length limit. Guided by such asymptotic analysis, code designs require large block lengths, and hence high latency, to achieve the desired error rate. Tail-biting convolutional codes and other recent state-of-the-art short block codes, while promising reduced latency, are neither robust to channel mismatch nor adaptive to varying channel conditions. When codes designed for one channel (e.g., the Additive White Gaussian Noise (AWGN) channel) are used on another (e.g., a non-AWGN channel), heuristics are necessary to achieve non-trivial performance. In this paper, we first propose an end-to-end learned neural code, obtained by jointly designing a Recurrent Neural Network (RNN) based encoder and decoder. This code outperforms canonical convolutional codes in the block setting. We then leverage this experience to propose a new class of codes under low-latency constraints, which we call Low-latency Efficient Adaptive Robust Neural (LEARN) codes. These codes outperform state-of-the-art low-latency codes and exhibit robustness and adaptivity properties. LEARN codes show the potential of designing new versatile and universal codes for future communications via tools of modern deep learning coupled with communication engineering insights.
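
    To make the idea concrete, here is a minimal sketch (not the authors' LEARN architecture) of jointly training an RNN encoder and decoder through a differentiable AWGN channel in PyTorch; the GRU sizes, the rate-1/2 mapping, the noise level, and the power normalization are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class RNNEncoder(nn.Module):
        # Maps a bit sequence (B, L, 1) to two real symbols per bit: rate 1/2.
        def __init__(self, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(1, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 2)
        def forward(self, bits):
            h, _ = self.rnn(bits)
            x = self.out(h)
            return x / x.pow(2).mean().sqrt()  # unit average transmit power

    class RNNDecoder(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(2, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, 1)
        def forward(self, y):
            h, _ = self.rnn(y)
            return self.out(h)  # one logit per information bit

    enc, dec = RNNEncoder(), RNNDecoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    sigma = 0.5  # assumed training noise level

    for step in range(1000):
        bits = torch.randint(0, 2, (256, 100, 1)).float()
        y = enc(bits) + sigma * torch.randn(256, 100, 2)  # differentiable AWGN channel
        loss = bce(dec(y), bits)
        opt.zero_grad(); loss.backward(); opt.step()

    Because the channel sits inside the computation graph, gradients flow from the bit-wise loss back through the decoder, the noise, and the encoder in one pass.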

    Geometric Constellation Shaping for Fiber Optic Communication Systems via End-to-end Learning

    In this paper, an unsupervised machine learning method for geometric constellation shaping is investigated. By embedding a differentiable fiber channel model between two neural networks, the learning algorithm optimizes the geometric constellation shape. The learned constellations yield improved performance compared to state-of-the-art geometrically shaped constellations, and embody an implicit trade-off between amplification noise and nonlinear effects. Further, the method allows system parameters, such as the optimal launch power, to be optimized jointly with the constellation shape. An experimental demonstration validates the findings. Gains of up to 0.13 bit/4D in simulation and up to 0.12 bit/4D experimentally are reported.
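
    The following is a minimal sketch of the general technique in PyTorch, with a plain AWGN stand-in where the paper embeds its differentiable fiber model; the constellation order M, layer sizes, and noise level are assumptions, and the amplification-versus-nonlinearity trade-off only emerges with a fiber-accurate channel.

    import torch
    import torch.nn as nn

    M = 64  # constellation order (assumption)
    points = nn.Parameter(torch.randn(M, 2))  # learnable I/Q coordinates
    demapper = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, M))
    opt = torch.optim.Adam([points] + list(demapper.parameters()), lr=1e-3)

    for step in range(2000):
        sym = torch.randint(0, M, (1024,))
        c = points / points.pow(2).sum(dim=1).mean().sqrt()  # unit average power
        # AWGN stand-in for the paper's differentiable fiber channel model:
        y = c[sym] + 0.1 * torch.randn(1024, 2)
        loss = nn.functional.cross_entropy(demapper(y), sym)  # mutual-information surrogate
        opt.zero_grad(); loss.backward(); opt.step()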

    Lightweight Convolutional Neural Networks for CSI Feedback in Massive MIMO

    In the frequency division duplex mode of massive multiple-input multiple-output systems, the downlink channel state information (CSI) must be sent to the base station (BS) through a feedback link. However, transmitting CSI to the BS is costly due to the bandwidth limitation of the feedback link. Deep learning (DL) has recently achieved remarkable success in CSI feedback, but realizing high-performance, low-complexity CSI feedback remains a challenge in DL-based communication. In this study, we develop a DL-based CSI feedback network to perform the feedback of CSI effectively. However, this network cannot be readily deployed on mobile terminals because of its excessive number of parameters. Therefore, we further propose a new lightweight CSI feedback network based on the developed network. Simulation results show that the proposed CSI network exhibits better reconstruction performance than other CsiNet-related works. Moreover, the lightweight network keeps the parameter count and computational complexity low while ensuring satisfactory reconstruction performance. These findings suggest the feasibility and potential of the proposed techniques.
    Comment: 5 pages, 2 figures, 2 tables
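
    A minimal CsiNet-style sketch of the encode-compress-reconstruct idea is given below (PyTorch); the 2x32x32 angular-delay CSI size, codeword length, and layer shapes are assumptions, and the paper's lightweight variant would further replace these layers with cheaper ones.

    import torch
    import torch.nn as nn

    class CsiFeedback(nn.Module):
        # Encoder compresses 2x32x32 CSI to a short codeword; decoder reconstructs it.
        def __init__(self, dim=512):  # dim/2048 = assumed compression ratio 1/4
            super().__init__()
            self.enc_conv = nn.Conv2d(2, 2, 3, padding=1)
            self.enc_fc = nn.Linear(2 * 32 * 32, dim)
            self.dec_fc = nn.Linear(dim, 2 * 32 * 32)
            self.refine = nn.Sequential(
                nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 2, 3, padding=1))
        def forward(self, H):  # H: (B, 2, 32, 32), real/imag CSI planes
            code = self.enc_fc(self.enc_conv(H).flatten(1))  # fed back to the BS
            out = self.dec_fc(code).view(-1, 2, 32, 32)
            return out + self.refine(out)  # residual refinement at the BS

    model = CsiFeedback()
    H = torch.randn(8, 2, 32, 32)  # placeholder CSI batch
    loss = nn.functional.mse_loss(model(H), H)  # train for reconstruction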

    Joint Transceiver Optimization for Wireless Communication PHY with Convolutional Neural Network

    Deep learning has wide application in natural language processing and image processing owing to its strong generalization ability. In this paper, we propose a novel neural network structure for jointly optimizing the transmitter and receiver of a communication physical layer under fading channels. We build a convolutional autoencoder that simultaneously performs modulation, equalization, and demodulation. The proposed system can design different mapping schemes from input bit sequences of arbitrary length to constellation symbols according to different channel environments. Simulation results show that the neural network based system is superior to traditional modulation and equalization methods in terms of time complexity and bit error rate (BER) under fading channels. The proposed system can also be combined with other coding techniques to further improve performance. Furthermore, the proposed network is more robust to channel variation than traditional communication methods.
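
    As a rough illustration (not the paper's exact structure), the sketch below trains a convolutional transmitter and receiver end to end through an assumed flat Rayleigh fading channel with receiver-side CSI; all layer sizes, the block length, and the noise level are assumptions.

    import torch
    import torch.nn as nn

    tx = nn.Sequential(nn.Conv1d(1, 32, 5, padding=2), nn.ELU(),
                       nn.Conv1d(32, 2, 5, padding=2))  # 2 output channels = I/Q
    rx = nn.Sequential(nn.Conv1d(4, 32, 5, padding=2), nn.ELU(),
                       nn.Conv1d(32, 1, 5, padding=2))  # sees I/Q plus CSI channels
    opt = torch.optim.Adam(list(tx.parameters()) + list(rx.parameters()), lr=1e-3)

    for step in range(1000):
        bits = torch.randint(0, 2, (128, 1, 64)).float()
        x = tx(bits)
        x = x / x.pow(2).mean().sqrt()  # average power constraint
        h = torch.randn(128, 2, 1) / 2 ** 0.5  # one complex gain per block (flat fading)
        hr, hi, xr, xi = h[:, :1], h[:, 1:], x[:, :1], x[:, 1:]
        y = torch.cat([hr * xr - hi * xi, hr * xi + hi * xr], dim=1)  # complex multiply
        y = y + 0.1 * torch.randn_like(y)
        y_in = torch.cat([y, h.expand(-1, -1, 64)], dim=1)  # assume receiver-side CSI
        loss = nn.functional.binary_cross_entropy_with_logits(rx(y_in), bits)
        opt.zero_grad(); loss.backward(); opt.step()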

    Deep Learning for Wireless Communications

    Existing communication systems exhibit inherent limitations in translating theory to practice when handling the complexity of optimization for emerging wireless applications with high degrees of freedom. Deep learning has strong potential to overcome this challenge via data-driven solutions and to improve how wireless systems utilize limited spectrum resources. In this chapter, we first describe how deep learning is used to design an end-to-end communication system using autoencoders. This flexible design effectively captures channel impairments and jointly optimizes transmitter and receiver operations in single-antenna, multiple-antenna, and multiuser communications. Next, we present the benefits of deep learning in spectrum situation awareness, ranging from channel modeling and estimation to signal detection and classification tasks. Deep learning improves performance where model-based methods fail. Finally, we discuss how deep learning applies to wireless communication security. In this context, adversarial machine learning provides novel means to launch and defend against wireless attacks. These applications demonstrate the power of deep learning in providing novel means to design, optimize, adapt, and secure wireless communications.
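
    The autoencoder formulation mentioned above can be sketched in a few lines of PyTorch; this is the generic single-antenna AWGN setup, with the message set size, layer widths, and energy constraint chosen as assumptions for the example.

    import torch
    import torch.nn as nn

    k, n = 4, 7  # 4 information bits sent over 7 real channel uses (assumed)
    M = 2 ** k
    tx = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, n))
    rx = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, M))
    opt = torch.optim.Adam(list(tx.parameters()) + list(rx.parameters()), lr=1e-3)

    for step in range(2000):
        msg = torch.randint(0, M, (512,))
        x = tx(nn.functional.one_hot(msg, M).float())
        x = n ** 0.5 * x / x.norm(dim=1, keepdim=True)  # per-codeword energy constraint
        y = x + 0.3 * torch.randn_like(x)  # AWGN as a layer inside the network
        loss = nn.functional.cross_entropy(rx(y), msg)  # surrogate for block error rate
        opt.zero_grad(); loss.backward(); opt.step()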

    Machine Learning for Wireless Communications in the Internet of Things: A Comprehensive Survey

    The Internet of Things (IoT) is expected to require more effective and efficient wireless communications than ever before. For this reason, techniques such as spectrum sharing, dynamic spectrum access, extraction of signal intelligence, and optimized routing will soon become essential components of the IoT wireless communication paradigm. Given that the majority of the IoT will be composed of tiny, mobile, and energy-constrained devices, traditional techniques based on a priori network optimization may not be suitable, since (i) an accurate model of the environment may not be readily available in practical scenarios, and (ii) the computational requirements of traditional optimization techniques may prove unbearable for IoT devices. To address these challenges, much research has been devoted to exploring the use of machine learning for problems in the IoT wireless communications domain. This work provides a comprehensive survey of the state of the art in the application of machine learning techniques to key problems in IoT wireless communications, with an emphasis on its ad hoc networking aspect. First, we present extensive background notions of machine learning techniques. Then, adopting a bottom-up approach, we examine existing work on machine learning for the IoT at the physical, data-link, and network layers of the protocol stack. Thereafter, we discuss directions taken by the community towards hardware implementation to ensure the feasibility of these techniques. Additionally, before concluding, we provide a brief discussion of the application of machine learning in the IoT beyond wireless communication. Finally, each of these discussions is accompanied by a detailed analysis of the related open problems and challenges.
    Comment: Ad Hoc Networks Journal

    Blind Channel Equalization using Variational Autoencoders

    A new maximum likelihood estimation approach for blind channel equalization, using variational autoencoders (VAEs), is introduced. Significant and consistent improvements in the error rate of the reconstructed symbols, compared to constant modulus equalizers, are demonstrated. In fact, for the channels that were examined, the performance of the new VAE blind channel equalizer was close to that of a non-blind adaptive linear minimum mean square error equalizer. The new equalization method enables channel acquisition at significantly lower latency than the constant modulus algorithm (CMA). The VAE uses a convolutional neural network with two layers and a very small number of free parameters. Although the computational complexity of the new equalizer is higher than that of CMA, it is still reasonable, and the number of free parameters to estimate is small.
    Comment: Accepted to the ICC workshop on Promises and Challenges of Machine Learning in Communication Networks (ML4COM), 2018
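
    A heavily simplified sketch of the flavor of this approach is shown below (PyTorch, BPSK, real-valued channel): a small convolutional network outputs symbol posteriors, and the expected symbols are convolved with jointly estimated channel taps to reconstruct the unlabeled observations. The variance term of the full VAE objective is dropped here, and the channel length, layer sizes, and noise level are assumptions.

    import torch
    import torch.nn as nn

    # Synthetic unlabeled observations: BPSK through an unknown 3-tap channel.
    true_h = torch.tensor([0.2, 0.9, 0.3]).view(1, 1, 3)
    x_true = 2 * torch.randint(0, 2, (64, 1, 256)).float() - 1
    y = nn.functional.conv1d(x_true, true_h, padding=1) + 0.05 * torch.randn(64, 1, 256)

    L = 5  # assumed channel memory of the estimator
    eq = nn.Sequential(nn.Conv1d(1, 32, 9, padding=4), nn.ReLU(),
                       nn.Conv1d(32, 1, 9, padding=4))  # outputs per-symbol logits
    h = nn.Parameter(torch.zeros(1, 1, L))
    h.data[0, 0, L // 2] = 1.0  # initialize channel estimate to an identity tap
    opt = torch.optim.Adam(list(eq.parameters()) + [h], lr=1e-3)

    for step in range(2000):
        p = torch.sigmoid(eq(y))  # posterior P(x_t = +1) from the encoder network
        x_mean = 2 * p - 1  # expected symbol under the posterior
        y_hat = nn.functional.conv1d(x_mean, h, padding=L // 2)
        loss = (y - y_hat).pow(2).mean()  # reconstruction term of the objective
        opt.zero_grad(); loss.backward(); opt.step()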

    Deep Learning Based MIMO Communications

    We introduce a novel physical layer scheme for single-user Multiple-Input Multiple-Output (MIMO) communications based on unsupervised deep learning using an autoencoder. This method extends prior work on jointly optimizing the physical layer representation and the encoding and decoding processes as a single end-to-end task by expanding the transmitter and receiver to the multi-antenna case. We introduce a widely used, domain-appropriate wireless channel impairment model (the Rayleigh fading channel) into the autoencoder optimization problem in order to directly learn a system optimized for it. We consider both spatial diversity and spatial multiplexing techniques in our implementation. Our deep learning-based approach demonstrates significant potential for learning schemes that approach and exceed the performance of methods widely used in existing wireless MIMO systems. We discuss how the proposed scheme can easily be adapted for open-loop and closed-loop operation in spatial diversity and multiplexing modes, and extended to use only compact binary channel state information (CSI) as feedback.
    Comment: under journal submission
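
    A minimal sketch of an autoencoder over a MIMO channel follows (PyTorch); for brevity it uses a real-valued 2x2 i.i.d. Gaussian channel as a stand-in for complex Rayleigh fading, feeds perfect CSI to the receiver (the open-loop case), and treats the message set size and layer widths as assumptions.

    import torch
    import torch.nn as nn

    Nt = Nr = 2
    M = 16  # messages per channel use (assumption)
    tx = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, Nt))
    rx = nn.Sequential(nn.Linear(Nr + Nr * Nt, 128), nn.ReLU(), nn.Linear(128, M))
    opt = torch.optim.Adam(list(tx.parameters()) + list(rx.parameters()), lr=1e-3)

    for step in range(2000):
        msg = torch.randint(0, M, (512,))
        x = tx(nn.functional.one_hot(msg, M).float())
        x = x / x.pow(2).mean().sqrt()  # transmit power constraint
        H = torch.randn(512, Nr, Nt)  # fresh fading realization per example
        y = torch.bmm(H, x.unsqueeze(2)).squeeze(2) + 0.1 * torch.randn(512, Nr)
        logits = rx(torch.cat([y, H.flatten(1)], dim=1))  # perfect CSI at the receiver
        loss = nn.functional.cross_entropy(logits, msg)
        opt.zero_grad(); loss.backward(); opt.step()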

    Deep Learning-Based Communication Over the Air

    End-to-end learning of communications systems is a fascinating novel concept that had so far only been validated by simulations for block-based transmissions. It allows learning transmitter and receiver implementations as deep neural networks (NNs) that are optimized for an arbitrary differentiable end-to-end performance metric, e.g., block error rate (BLER). In this paper, we demonstrate that over-the-air transmissions are possible: we build, train, and run a complete communications system composed solely of NNs, using unsynchronized off-the-shelf software-defined radios (SDRs) and open-source deep learning (DL) software libraries. We extend the existing ideas towards continuous data transmission, which eases the current restriction to short block lengths but also introduces the issue of receiver synchronization. We overcome this problem with a frame synchronization module based on another NN. A comparison of the BLER of the "learned" system with that of a practical baseline shows competitive performance within roughly 1 dB, even without extensive hyperparameter tuning. We identify several practical challenges of training such a system over actual channels, in particular the missing channel gradient, and propose a two-step learning procedure based on the idea of transfer learning that circumvents this issue.
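
    The two-step procedure can be sketched as follows (PyTorch): first train transmitter and receiver end to end over a differentiable simulated channel (AWGN assumed here), then freeze the transmitter and fine-tune only the receiver on recorded over-the-air data, since the real channel provides no gradient. The variable recorded_sdr_batches is a hypothetical placeholder for captured (message, waveform) pairs, and all sizes are assumptions.

    import torch
    import torch.nn as nn

    M, n = 256, 8
    tx = nn.Sequential(nn.Linear(M, 128), nn.ReLU(), nn.Linear(128, 2 * n))
    rx = nn.Sequential(nn.Linear(2 * n, 128), nn.ReLU(), nn.Linear(128, M))

    def normalize(x):
        return x / x.pow(2).mean(dim=1, keepdim=True).sqrt()

    # Step 1: end-to-end training through a differentiable stand-in channel (AWGN).
    opt = torch.optim.Adam(list(tx.parameters()) + list(rx.parameters()), lr=1e-3)
    for step in range(2000):
        msg = torch.randint(0, M, (512,))
        y = normalize(tx(nn.functional.one_hot(msg, M).float()))
        y = y + 0.1 * torch.randn_like(y)
        loss = nn.functional.cross_entropy(rx(y), msg)
        opt.zero_grad(); loss.backward(); opt.step()

    # Step 2: the real channel gives no gradient, so freeze the transmitter and
    # fine-tune only the receiver on over-the-air captures (hypothetical loader).
    recorded_sdr_batches = []  # would hold (msg, y_measured) pairs from the SDRs
    opt_rx = torch.optim.Adam(rx.parameters(), lr=1e-4)
    for msg, y_measured in recorded_sdr_batches:
        loss = nn.functional.cross_entropy(rx(y_measured), msg)
        opt_rx.zero_grad(); loss.backward(); opt_rx.step()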

    One-Bit OFDM Receivers via Deep Learning

    This paper develops novel deep learning-based architectures and design methodologies for an orthogonal frequency division multiplexing (OFDM) receiver under the constraint of one-bit complex quantization. Single-bit quantization greatly reduces complexity and power consumption, but makes accurate channel estimation and data detection difficult. This is particularly true for multicarrier waveforms, which have a high peak-to-average ratio in the time domain and fragile subcarrier orthogonality in the frequency domain. The severe distortion of one-bit quantization typically results in an error floor even at a moderately low signal-to-noise ratio (SNR) such as 5 dB. For channel estimation (using pilots), we design a novel generative supervised deep neural network (DNN) that can be trained with a reasonable number of pilots. After channel estimation, a neural network-based receiver -- specifically, an autoencoder -- jointly learns a precoder and decoder for data symbol detection. Since quantization prevents end-to-end training, we propose a two-step sequential training policy for this model. With synthetic data, our deep learning-based channel estimation outperforms least squares (LS) channel estimation for unquantized (full-resolution) OFDM at average SNRs up to 14 dB. For data detection, our proposed design achieves a lower bit error rate (BER) in fading than unquantized OFDM at average SNRs up to 10 dB.
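
    A minimal sketch of the one-bit quantizer plus a supervised DNN channel estimator is given below (PyTorch); the subcarrier count, pilot design, i.i.d. channel model, and layer sizes are assumptions, and the paper's generative estimator and two-step autoencoder training are more elaborate than this.

    import torch
    import torch.nn as nn

    K = 64  # subcarriers (assumption); tensors use interleaved (re, im) layout

    def one_bit(y):
        return torch.sign(y)  # one-bit complex quantizer: sign of I and Q separately

    est = nn.Sequential(nn.Linear(2 * K, 512), nn.ReLU(),
                        nn.Linear(512, 512), nn.ReLU(),
                        nn.Linear(512, 2 * K))  # quantized pilots -> channel estimate
    opt = torch.optim.Adam(est.parameters(), lr=1e-3)
    pilot = torch.sign(torch.randn(2 * K))  # known +/-1 pilot sequence (assumed)

    for step in range(3000):
        h = torch.randn(256, 2 * K) / 2 ** 0.5  # synthetic i.i.d. channel (stand-in)
        hr, hi, pr, pi = h[:, 0::2], h[:, 1::2], pilot[0::2], pilot[1::2]
        # Per-subcarrier complex product h * pilot, then noise and the one-bit ADC:
        y = torch.stack([hr * pr - hi * pi, hr * pi + hi * pr], dim=2).flatten(1)
        r = one_bit(y + 0.3 * torch.randn_like(y))
        loss = nn.functional.mse_loss(est(r), h)  # supervised target: the true channel
        opt.zero_grad(); loss.backward(); opt.step()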