27 research outputs found

    Precoded PRML, serial concatenation, and iterative (turbo) decoding for digital magnetic recording

    No full text
    We show that a high rate serially concatenated code used in conjunction with a precoded partial response equalized magnetic recording channel and iterative decoding can provide similar performance to a turbo code on the same channel. The precoded partial response maximum-likelihood (PRML) read channel is incorporated as the inner code of the concatenated coding scheme and the outer code is a high rate convolutional code. Gains of 4.8 dB above conventional PRML at a bit error rate of 10^-5 for a rate 13/14 code can be achieved. Index Terms---Iterative decoding, precoded PRML, serial concatenated codes, turbo codes. I. Introduction Recently, [1], [2], [3] have shown that high rate turbo codes [4] can provide 4-6 dB of coding gain when applied to PR4 and EPR4 equalized magnetic recording. In this paper we show that this outstanding performance is achievable without the use of turbo codes per se. That is, we show that a high rate serially concatenated code used in conjunction wi..
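
    The abstract describes a write/read path built around a precoded, partial-response-equalized channel used as the inner code. As a rough illustration only (not code from the paper), the sketch below simulates that inner stage under two common assumptions: a PR4 target h(D) = 1 - D^2 and its matched 1/(1 ⊕ D^2) precoder. The outer high-rate convolutional code and the iterative decoding loop between inner and outer decoders are omitted.

        # Minimal sketch, assuming a PR4 target (1 - D^2) and a 1/(1 ⊕ D^2) precoder.
        import numpy as np

        def precode(bits):
            # 1/(1 ⊕ D^2) precoder: out[k] = bits[k] XOR out[k-2]
            out = np.zeros_like(bits)
            for k in range(len(bits)):
                out[k] = bits[k] ^ (out[k - 2] if k >= 2 else 0)
            return out

        def pr4_readback(bits, snr_db, rng):
            # Map to +/-1 and apply the PR4 target 1 - D^2, i.e. y[k] = x[k] - x[k-2]
            x = 2.0 * bits - 1.0
            y = np.concatenate([x, [0.0, 0.0]]) - np.concatenate([[0.0, 0.0], x])
            sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10.0))  # illustrative noise level
            return y + sigma * rng.standard_normal(len(y))

        rng = np.random.default_rng(0)
        data = rng.integers(0, 2, 20)
        readback = pr4_readback(precode(data), snr_db=6.0, rng=rng)

    In the scheme the abstract outlines, a soft-output detector for this precoded channel would act as the inner decoder, exchanging soft information with the outer high-rate convolutional decoder over several iterations.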

    Integrating Language Models with Speech Recognition

    No full text
    The question of how to integrate language models with speech recognition systems is becoming more important as speech recognition technology matures. For the purposes of this paper, we have classified the level of integration of current and past approaches into three categories: tightly-coupled, loosely-coupled, or semi-coupled systems. We then argue that loose coupling is more appropriate given the current state of the art and given that it allows one to measure more precisely which components of the language model are most important. We will detail how the speech component in our approach interacts with the language model and discuss why we chose our language model. 1 Introduction State-of-the-art speech recognition systems achieve high recognition accuracies only on tasks that have low perplexities. The perplexity of a task is, roughly speaking, the average number of choices at any decision point. The perplexity of a task is at a minimum when the true language model is known and co..
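
    The abstract's informal definition of perplexity ("the average number of choices at any decision point") corresponds to the standard formula perplexity = 2^(-(1/N) * sum_i log2 P(w_i | history)). A minimal sketch, using made-up probabilities in place of a real language model's output:

        import math

        def perplexity(word_probs):
            # Geometric-mean inverse probability per word: the "average number of choices"
            n = len(word_probs)
            log_prob = sum(math.log2(p) for p in word_probs)
            return 2 ** (-log_prob / n)

        # e.g. conditional probabilities a model might assign to a 4-word utterance
        print(perplexity([0.25, 0.1, 0.5, 0.05]))  # ≈ 6.32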