
    MIMO Channel Information Feedback Using Deep Recurrent Network

    In a multiple-input multiple-output (MIMO) system, the availability of channel state information (CSI) at the transmitter is essential for performance improvement. Recent convolutional neural network (NN) based techniques have shown competitive performance in CSI compression and feedback. By introducing a new NN architecture, we improve the accuracy of quantized CSI feedback in MIMO communications. The proposed architecture incorporates a long short-term memory (LSTM) module, which enables the NN to exploit the temporal and frequency correlations of wireless channels. To trade off performance against complexity, we further modify the NN architecture to significantly reduce the number of trainable parameters. Finally, experiments show that the proposed NN architectures achieve better performance in terms of both CSI compression and recovery accuracy.
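An LSTM module maintains a hidden state that is carried across the input sequence, which is how temporal and frequency correlations of the channel can be exploited. Below is a minimal NumPy sketch of a single LSTM cell stepping over a sequence of compressed CSI vectors; the dimensions, weights, and input data are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: four gates computed from input x and previous state h."""
    z = W @ x + U @ h + b                # stacked pre-activations for the 4 gates
    n = h.size
    i = 1.0 / (1.0 + np.exp(-z[:n]))     # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2*n]))  # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*n:3*n]))# output gate
    g = np.tanh(z[3*n:])                 # candidate cell update
    c_new = f * c + i * g                # cell state mixes old memory and new input
    h_new = o * np.tanh(c_new)           # hidden state carried to the next step
    return h_new, c_new

# Hypothetical sizes: 8-dim compressed CSI codeword per step, 16 hidden units.
in_dim, hid = 8, 16
W = rng.standard_normal((4 * hid, in_dim)) * 0.1
U = rng.standard_normal((4 * hid, hid)) * 0.1
b = np.zeros(4 * hid)

# A sequence of compressed CSI vectors (e.g. across subcarriers or time slots).
seq = rng.standard_normal((10, in_dim))
h, c = np.zeros(hid), np.zeros(hid)
for x in seq:
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape)  # (16,)
```

Because the same `(h, c)` pair flows through every step, the final hidden state summarizes the whole sequence rather than a single CSI snapshot.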

    Deep Learning-based Limited Feedback Designs for MIMO Systems

    We study deep learning (DL) based limited feedback methods for multi-antenna systems. Deep neural networks (DNNs) are introduced to replace the end-to-end limited feedback procedure, including the pilot-aided channel training process, channel codebook design, and beamforming vector selection. The DNNs are trained to yield binary feedback information as well as an efficient beamforming vector that maximizes the effective channel gain. Compared to conventional limited feedback schemes, the proposed DL method shows a 1 dB symbol error rate (SER) gain with reduced computational complexity. Comment: to appear in IEEE Wireless Commun. Lett.
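For context, a conventional codebook-based limited feedback scheme — the baseline such DNNs replace — has the receiver pick the codeword that maximizes the effective channel gain and feed back only its index as B bits. A minimal NumPy sketch; the random unit-norm codebook and channel here are illustrative assumptions (a real system would use e.g. a DFT or Grassmannian codebook):

```python
import numpy as np

rng = np.random.default_rng(1)

Nt, B = 4, 3                       # 4 transmit antennas, 3 feedback bits -> 8 codewords
# Hypothetical random unit-norm beamforming codebook, one codeword per row.
F = rng.standard_normal((2**B, Nt)) + 1j * rng.standard_normal((2**B, Nt))
F /= np.linalg.norm(F, axis=1, keepdims=True)

# One channel realization (perfectly known at the receiver in this sketch).
h = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)

gains = np.abs(F.conj() @ h) ** 2  # effective channel gain |f^H h|^2 per codeword
idx = int(np.argmax(gains))        # best codeword index
bits = format(idx, f"0{B}b")       # the B-bit feedback word actually sent back
print(bits)
```

The DL approach in the abstract learns this whole pipeline (training, codebook, and selection) end-to-end instead of fixing the codebook and searching it exhaustively.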

    Secure Massive MIMO Communication with Low-resolution DACs

    In this paper, we investigate secure transmission in a massive multiple-input multiple-output (MIMO) system adopting low-resolution digital-to-analog converters (DACs). Artificial noise (AN) is deliberately transmitted simultaneously with the confidential signals to degrade the eavesdropper's channel quality. By applying the Bussgang theorem, a DAC quantization model is developed which facilitates the analysis of the asymptotic achievable secrecy rate. Interestingly, for a fixed power allocation factor φ, low-resolution DACs typically result in a secrecy rate loss, but in certain cases they provide superior performance, e.g., at low signal-to-noise ratio (SNR). Specifically, we derive a closed-form SNR threshold which determines whether low-resolution or high-resolution DACs are preferable for improving the secrecy rate. Furthermore, a closed-form expression for the optimal φ is derived. With AN generated in the null-space of the user channel and the optimal φ, low-resolution DACs inevitably cause secrecy rate loss. On the other hand, for random AN with the optimal φ, the secrecy rate is hardly affected by the DAC resolution because the negative impact of the quantization noise can be compensated for by reducing the AN power. All the derived analytical results are verified by numerical simulations. Comment: 14 pages, 10 figures.
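The Bussgang theorem decomposes a nonlinear quantizer's output into a scaled replica of the input plus a distortion term uncorrelated with it, y = G·x + q, which is what makes the secrecy-rate analysis tractable. A quick numerical illustration for a 1-bit quantizer (the coarsest possible DAC); the sample size and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 200_000
x = rng.standard_normal(N)            # zero-mean unit-variance Gaussian DAC input
y = np.sign(x)                        # 1-bit quantization

# Bussgang gain: the LMMSE linear factor G = E[x*y] / E[x^2].
# For a 1-bit quantizer and unit-variance Gaussian input, G = sqrt(2/pi) ≈ 0.798.
G = np.mean(x * y) / np.mean(x * x)
q = y - G * x                         # quantization distortion term

print(round(G, 2))                    # ≈ 0.8
print(abs(np.mean(x * q)) < 1e-6)     # True: q is uncorrelated with the input
```

With the distortion modeled as an uncorrelated additive term, its power can be folded into the noise budget — which is why, as the abstract notes, reducing the AN power can compensate for quantization noise under random AN.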