
    Channel Detection and Decoding With Deep Learning

In this thesis, we investigate the design of pragmatic data detectors and channel decoders with the assistance of deep learning. We focus on three emerging and fundamental research problems: the design of message passing algorithms for data detection in faster-than-Nyquist (FTN) signalling, soft-decision decoding algorithms for high-density parity-check codes, and user identification for massive machine-type communications (mMTC). These wireless communication problems are addressed through deep learning, and the main contributions are outlined below.

In the first part, we study a deep learning-assisted sum-product detection algorithm for FTN signalling. The proposed detector operates on a modified factor graph that concatenates a neural network function node to the variable nodes of the conventional FTN factor graph, compensating for detrimental effects that degrade detection performance. By investigating the maximum-likelihood bit-error-rate performance of a finite-length coded FTN system, we show that the error performance of the proposed algorithm approaches the maximum a posteriori performance, which may not be attainable by the sum-product algorithm on the conventional FTN factor graph.

In the second part, having investigated deep learning-assisted message passing for data detection, we turn to the design of an efficient channel decoder. Specifically, we propose a node-classified redundant decoding algorithm for Bose-Chaudhuri-Hocquenghem (BCH) codes based on the channel reliability of the received sequence. Two preprocessing steps applied before decoding mitigate the propagation of unreliable information and improve decoding performance. On top of the preprocessing, we propose a list decoding algorithm to further augment the decoder's performance. Moreover, we show that the node-classified redundant decoding algorithm can be transformed into a neural network framework in which multiplicative tuneable weights are attached to the decoding messages to optimise decoding performance. The node-classified redundant decoding algorithm provides a performance gain over the random redundant decoding algorithm, and additional gains are obtained from both the list decoding method and the neural network "learned" node-classified redundant decoding algorithm.

Finally, we consider one of the practical services provided by fifth-generation (5G) wireless communication networks, mMTC, and study two separate system models. The first assumes that the mMTC devices are equipped with low-resolution digital-to-analog converters; the second assumes that the devices' activities are correlated. In the first model, two rounds of signal recovery are performed. A neural network identifies the suspicious device most likely to have been falsely alarmed during the first round, and that device is forced to be inactive in the second round. The proposed scheme effectively combats the interference caused by the suspicious device and thus improves user identification performance. In the second model, two deep learning-assisted algorithms exploit the user activity correlation to facilitate channel estimation and user identification. We propose a deep learning-modified orthogonal approximate message passing algorithm that exploits the correlation structure among devices, together with a neural network framework dedicated to user identification, which minimises the missed detection probability under a pre-determined false alarm probability. The proposed algorithms substantially reduce the mean squared error between the estimated and unknown sequences, and markedly improve the trade-off between the missed detection and false alarm probabilities compared with the conventional orthogonal approximate message passing algorithm. All three parts of this research demonstrate that deep learning is a powerful tool for physical-layer design in wireless communications.
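The structural idea behind the modified FTN factor graph can be illustrated with a minimal sketch, assuming the neural network function node simply contributes one extra message summed at each variable node of a sum-product detector. The function nn_compensation and all numerical values below are hypothetical placeholders, not the thesis's trained network.

```python
import numpy as np

def nn_compensation(local_obs):
    """Hypothetical stand-in for the neural network function node: maps
    local received samples to a corrective LLR. In the thesis this mapping
    is learned; a fixed nonlinearity keeps the sketch self-contained."""
    return 0.1 * np.tanh(local_obs).sum()

def variable_node_update(llr_channel, msgs_from_factors, local_obs):
    """Sum-product variable-node update on the modified factor graph: the
    NN function node is concatenated to the variable node, so its output
    enters as one additional incoming message."""
    return llr_channel + np.sum(msgs_from_factors) + nn_compensation(local_obs)

# Toy usage: a channel LLR, two ISI-factor messages, three local observations.
print(variable_node_update(1.2, np.array([0.4, -0.3]), np.array([0.9, -0.2, 0.5])))
```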
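The "multiplicative tuneable weights attached to the decoding messages" can likewise be sketched. The following toy example uses min-sum belief propagation on a (7,4) Hamming code rather than a high-density BCH code, and the weights are fixed scalars rather than trained parameters; it only shows where the weights attach in the message schedule.

```python
import numpy as np

# Parity-check matrix of a toy (7,4) Hamming code (P^T | I form).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def weighted_min_sum(llr_ch, H, weights, n_iters=10):
    """Min-sum decoding where each check-to-variable message is scaled by
    a multiplicative weight; in the 'learned' decoder these weights would
    be optimised by gradient descent."""
    m, n = H.shape
    msg_cv = np.zeros((m, n))                     # check -> variable messages
    for _ in range(n_iters):
        # Variable -> check: channel LLR plus all other incoming check messages.
        msg_vc = (llr_ch + msg_cv.sum(axis=0) - msg_cv) * H
        for c in range(m):
            idx = np.flatnonzero(H[c])
            for v in idx:
                others = idx[idx != v]
                sign = np.prod(np.sign(msg_vc[c, others]))
                mag = np.min(np.abs(msg_vc[c, others]))
                msg_cv[c, v] = weights[c, v] * sign * mag
    posterior = llr_ch + msg_cv.sum(axis=0)
    return (posterior < 0).astype(int)            # hard decision per bit

llr = np.array([2.5, -0.8, 1.1, 3.0, 0.3, -1.7, 1.9])  # toy channel LLRs
w = np.full(H.shape, 0.8)                         # one uniform edge weight
print(weighted_min_sum(llr, H, w))
```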
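Finally, the user-identification objective, minimising missed detection under a pre-determined false alarm probability, is a Neyman-Pearson-style criterion. A minimal sketch, assuming hypothetical Gaussian activity scores rather than the thesis's network outputs, shows how the false-alarm constraint fixes a decision threshold against which missed detection is then measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activity scores from a user-identification network:
# higher score means "device looks active". Distributions are illustrative.
scores_inactive = rng.normal(0.0, 1.0, 100_000)   # truly inactive devices
scores_active = rng.normal(2.0, 1.0, 100_000)     # truly active devices

target_pfa = 0.01   # pre-determined false alarm probability
# Threshold at the (1 - target_pfa) quantile of the inactive scores, so
# only a target_pfa fraction of inactive devices is declared active.
tau = np.quantile(scores_inactive, 1.0 - target_pfa)

p_md = np.mean(scores_active <= tau)              # missed detection probability
print(f"threshold={tau:.3f}, P_md={p_md:.4f} at P_fa<={target_pfa}")
```

Under this view, training the network amounts to pushing active-device scores above the threshold implied by the false-alarm budget, which is what improves the missed-detection/false-alarm trade-off.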

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, modelling and simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction between High Performance Computing and Modelling and Simulation is therefore arguably required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.