
    Statistical Mechanics Analysis of LDPC Coding in MIMO Gaussian Channels

    Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under LDPC network coding and joint decoding. The saddle point equations for the replica symmetric solution are found for particular realizations of this channel, including small and large numbers of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver, and the symmetric and asymmetric interference channels. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement or deterioration achieved with respect to the single transmitter/receiver result for the various cases. (Comment: 25 pages, 7 figures)
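
    The channel analysed above is the standard binary-input MIMO Gaussian model y = Hx + n. The following Python sketch simulates a single use of such a channel and detects the inputs by brute-force maximum likelihood; it is purely illustrative (the dimensions, the SNR and the detector are assumptions, not the paper's replica-based analysis).

    import numpy as np

    rng = np.random.default_rng(0)
    n_tx, n_rx = 4, 4          # numbers of transmit / receive antennas (assumed)
    snr_db = 5.0               # signal-to-noise ratio in dB (assumed)

    x = rng.choice([-1.0, 1.0], size=n_tx)                          # binary (BPSK) inputs
    H = rng.normal(scale=1.0 / np.sqrt(n_tx), size=(n_rx, n_tx))    # Gaussian channel matrix
    sigma = np.sqrt(10 ** (-snr_db / 10.0))                         # noise standard deviation
    y = H @ x + rng.normal(scale=sigma, size=n_rx)                  # received vector y = Hx + n

    # Brute-force maximum-likelihood detection over all 2^n_tx binary inputs;
    # in the coded system of the paper this step is replaced by joint LDPC decoding.
    candidates = np.array(np.meshgrid(*([[-1.0, 1.0]] * n_tx))).T.reshape(-1, n_tx)
    ml = candidates[np.argmin(np.linalg.norm(y - candidates @ H.T, axis=1))]
    print("sent:", x)
    print("ML estimate:", ml)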

    Self-concatenated code design and its application in power-efficient cooperative communications

    In this tutorial, we focus on the design of binary self-concatenated coding schemes with the aid of EXtrinsic Information Transfer (EXIT) charts and Union bound analysis. The design methodology of future iteratively decoded self-concatenated-coding-aided cooperative communication schemes is presented. In doing so, we identify the most important milestones to date in the areas of channel coding, concatenated coding schemes and cooperative communication systems, and suggest future research directions.
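
    EXIT charts, mentioned above, track the mutual information exchanged between constituent decoders across iterations. The sketch below shows one common way to measure a single point of such a chart: the time-average estimate I(X;L) ≈ 1 − E[log2(1 + e^(−xL))] for consistent LLRs. The synthetic consistent-Gaussian LLR source and all parameters are assumptions standing in for an actual constituent decoder.

    import numpy as np

    def mutual_information(bits, llrs):
        # Time-average estimate I(X;L) ~ 1 - E[log2(1 + exp(-x*L))],
        # valid when the LLR distribution satisfies the consistency condition.
        return 1.0 - np.mean(np.log2(1.0 + np.exp(-bits * llrs)))

    rng = np.random.default_rng(1)
    n_bits = 100_000
    bits = rng.choice([-1.0, 1.0], size=n_bits)

    # Synthetic consistent-Gaussian LLRs: L ~ N(0.5*sigma^2 * x, sigma^2), sigma assumed.
    sigma_a = 1.5
    llrs = 0.5 * sigma_a**2 * bits + sigma_a * rng.normal(size=n_bits)

    print(f"measured extrinsic information: {mutual_information(bits, llrs):.3f}")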

    ENHANCEMENT OF ITERATIVE TURBO DECODING FOR HARQ SYSTEMS

    This paper presents a new method for stopping iterative turbo decoding. First, a bit-level convergence test based on a cross-entropy analysis is used to select the non-converged bits and to establish a simple and effective stopping rule. Next, an adaptive approach computes a scaling factor for normalizing the extrinsic information of the previously selected bits. The extra coding gain obtained from this normalization can compensate for the performance degradation caused by the stopping rule. Simulation results show that the proposed stopping criterion is particularly useful in hybrid automatic repeat request (HARQ) systems with a turbo coding scheme, where the decoding complexity can be substantially reduced. Results comparing the proposed criterion with previously published stopping rules illustrate its adaptive termination in a changing SNR environment.
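
    For context, a widely used cross-entropy-type stopping test for turbo decoding approximates the cross-entropy between successive iterations by T(i) = Σ_k |ΔL_e,k|² / exp(|L_k|) and stops once T(i) falls well below its first-iteration value. The sketch below implements that generic test; it is not necessarily the exact bit-level rule or the adaptive scaling proposed in this paper.

    import numpy as np

    def cross_entropy_metric(delta_le, llr):
        # T(i) = sum_k |dLe_k|^2 / exp(|L_k|): change in extrinsic LLRs between
        # iterations, weighted by how confident the a-posteriori LLRs already are.
        return np.sum(np.abs(delta_le) ** 2 / np.exp(np.abs(llr)))

    def should_stop(t_current, t_first, threshold=1e-3):
        # Stop once the metric has dropped well below its first-iteration value.
        return t_current < threshold * t_first

    # Toy usage with synthetic, strongly converged LLRs (all values assumed):
    rng = np.random.default_rng(2)
    llr = 8.0 * rng.choice([-1.0, 1.0], size=1000)
    t_first = 5.0
    t_now = cross_entropy_metric(0.01 * rng.normal(size=1000), llr)
    print("stop decoding:", should_stop(t_now, t_first))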

    Statistical models for noise-robust speech recognition

    A standard way of improving the robustness of speech recognition systems to noise is model compensation. This replaces a speech recogniser's distributions over clean speech by ones over noise-corrupted speech. For each clean speech component, model compensation techniques usually approximate the corrupted speech distribution with a diagonal-covariance Gaussian distribution. This thesis looks into improving on this approximation in two ways: firstly, by estimating full-covariance Gaussian distributions; secondly, by approximating corrupted-speech likelihoods without any parameterised distribution.

    The first part of this work is about compensating for within-component feature correlations under noise. For this, the covariance matrices of the computed Gaussians should be full instead of diagonal. The estimation of off-diagonal covariance elements turns out to be sensitive to approximations. A popular approximation is the one that state-of-the-art compensation schemes, like VTS compensation, use for dynamic coefficients: the continuous-time approximation. Standard speech recognisers contain both per-time-slice, static, coefficients and dynamic coefficients, which represent signal changes over time and are normally computed from a window of static coefficients. To remove the need for the continuous-time approximation, this thesis introduces a new technique. It first compensates a distribution over the window of statics, and then applies the same linear projection that extracts dynamic coefficients. It introduces a number of methods that address the correlation changes that occur in noise within this framework. The next problem is decoding speed with full covariances. This thesis re-analyses the previously introduced predictive linear transformations, and shows how they can model feature correlations at low and tunable computational cost.

    The second part of this work removes the Gaussian assumption completely. It introduces a sampling method that, given speech and noise distributions and a mismatch function, in the limit calculates the corrupted speech likelihood exactly. For this, it transforms the integral in the likelihood expression, and then applies sequential importance resampling. Though it is too slow to use for recognition, it enables a more fine-grained assessment of compensation techniques, based on the KL divergence to the ideal compensation for one component. The KL divergence proves to predict the word error rate well. This technique also makes it possible to evaluate the impact of approximations that standard compensation schemes make.

    This work was supported by Toshiba Research Europe Ltd., Cambridge Research Laboratory.
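
    The KL-divergence-based assessment mentioned above compares a compensated distribution against an ideal one. As a hedged illustration only, the sketch below evaluates the closed-form KL divergence between two Gaussians, for instance a full-covariance corrupted-speech Gaussian and its diagonal approximation; the thesis itself estimates the KL to a non-Gaussian ideal compensation by sampling, which is not reproduced here.

    import numpy as np

    def gaussian_kl(mu0, cov0, mu1, cov1):
        # Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ).
        d = mu0.shape[0]
        cov1_inv = np.linalg.inv(cov1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(cov1_inv @ cov0)
                      + diff @ cov1_inv @ diff
                      - d
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

    # Toy example (all numbers assumed): a correlated corrupted-speech Gaussian
    # versus its diagonal-covariance approximation.
    mu = np.zeros(2)
    full = np.array([[1.0, 0.6],
                     [0.6, 1.0]])
    diag = np.diag(np.diag(full))
    print(f"KL(full || diagonal) = {gaussian_kl(mu, full, mu, diag):.3f}")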

    GeneFormer: Learned Gene Compression using Transformer-based Context Modeling

    With the development of gene sequencing technology, gene data has grown explosively, and its storage has become an important issue. Traditional gene data compression methods rely on general-purpose software such as gzip, which fails to exploit the dependencies within nucleotide sequences. Recently, many researchers have begun to investigate deep-learning-based gene data compression methods. In this paper, we propose a transformer-based gene compression method named GeneFormer. Specifically, we first introduce a modified transformer structure to fully exploit nucleotide sequence dependencies. Then, we propose fixed-length parallel grouping to accelerate the decoding speed of our autoregressive model. Experimental results on real-world datasets show that our method saves 29.7% bit rate compared with the state-of-the-art method, and its decoding speed is significantly faster than that of all existing learning-based gene compression methods.
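
    The core idea behind learned compression of this kind is to let a context model assign each nucleotide a probability and charge −log2 p bits for it. As a minimal, hedged sketch, the code below uses a Laplace-smoothed order-2 Markov model as a stand-in for GeneFormer's transformer on a made-up near-periodic sequence; the fixed-length parallel grouping used for fast decoding is a scheduling detail not reproduced here.

    from collections import Counter, defaultdict
    import math
    import random

    random.seed(0)
    # Made-up near-periodic "genome": mostly ACGTACGT... with 5% random mutations.
    seq = "".join("ACGT"[i % 4] if random.random() > 0.05 else random.choice("ACGT")
                  for i in range(5000))

    ORDER = 2
    counts = defaultdict(Counter)
    for i in range(ORDER, len(seq)):
        counts[seq[i - ORDER:i]][seq[i]] += 1

    def bits_for(symbol, context):
        # Bits charged for `symbol` under a Laplace-smoothed context model
        # (fitted and evaluated on the same toy sequence, purely for illustration).
        c = counts[context]
        p = (c[symbol] + 1) / (sum(c.values()) + 4)
        return -math.log2(p)

    total = sum(bits_for(seq[i], seq[i - ORDER:i]) for i in range(ORDER, len(seq)))
    print(f"{total / (len(seq) - ORDER):.3f} bits/base (2.0 bits/base for raw 2-bit coding)")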