2 research outputs found

    About Combining Forward and Backward-Based Decoders for Selecting Data for Unsupervised Training of Acoustic Models

    This paper introduces the combination of speech decoders for selecting automatically transcribed speech data for unsupervised training or adaptation of acoustic models. Here, the combination relies on the use of a forward-based and a backward-based decoder. The best performance is achieved when selecting automatically transcribed data (speech segments) that have the same word hypotheses when processed by the Sphinx forward-based and the Julius backward-based transcription systems, and this selection process outperforms confidence-measure-based selection. Results are reported and discussed for adaptation and for full training from scratch, using data resulting from various selection processes, whether alone or in addition to the baseline manually transcribed data. Overall, selecting automatically transcribed speech segments that have the same word hypotheses when processed by the Sphinx forward-based and Julius backward-based recognizers, and adding this automatically transcribed and selected data to the manually transcribed data, leads to significant word error rate reductions on the ESTER2 data when compared to the baseline system trained only on manually transcribed speech data.
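
    A minimal sketch of the agreement-based selection step described above. It assumes the Sphinx and Julius decoders have already been run offline and their one-best hypotheses dumped into simple dictionaries keyed by segment id; the dictionary layout, the `normalize` helper, and the function names are illustrative assumptions, not part of either toolkit's API.

    ```python
    def normalize(hypothesis: str) -> list[str]:
        """Lower-case and tokenize a hypothesis so trivial formatting
        differences do not prevent a match (illustrative choice)."""
        return hypothesis.lower().split()

    def select_agreeing_segments(forward_hyps: dict[str, str],
                                 backward_hyps: dict[str, str]) -> list[str]:
        """Return ids of segments whose word hypotheses are identical
        under the forward-based and backward-based decoders."""
        selected = []
        for seg_id, fwd in forward_hyps.items():
            bwd = backward_hyps.get(seg_id)
            if bwd is not None and normalize(fwd) == normalize(bwd):
                selected.append(seg_id)
        return selected

    if __name__ == "__main__":
        # Toy hypotheses standing in for real decoder output.
        forward = {"seg001": "the cat sat", "seg002": "on the mat"}
        backward = {"seg001": "the cat sat", "seg002": "on a mat"}
        print(select_agreeing_segments(forward, backward))  # ['seg001']
    ```

    Only the agreeing segments would then be added to the manually transcribed pool for acoustic model training or adaptation.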

    Analysis and Combination of Forward and Backward based Decoders for Improved Speech Transcription

    This paper analyzes the behavior of forward- and backward-based decoders used for speech transcription. Experiments have shown that backward-based decoding leads to recognition performance similar to forward-based decoding, which is consistent with the fact that both systems handle similar information through the acoustic, lexical, and language models. However, because of heuristics, the search algorithms used in decoding explore only a limited portion of the search space. As forward-based and backward-based approaches do not process the speech signal in the same temporal order, they explore different portions of the search space, leading to complementary systems that can be efficiently combined using the ROVER approach. The speech transcription results achieved by combining forward-based and backward-based systems are significantly better than the results obtained by combining the same number of forward-only or backward-only systems. This confirms the complementarity of the forward and backward approaches and thus the usefulness of their combination.
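
    A minimal sketch of the voting stage of a ROVER-style combination, under the assumption that the hypotheses from the individual forward- and backward-based systems have already been aligned word by word (the actual ROVER tool builds this alignment itself via iterative dynamic programming, and can also use confidence scores; both steps are omitted here). The `"@"` null-slot convention and the function name are illustrative assumptions.

    ```python
    from collections import Counter

    def rover_vote(aligned_hypotheses: list[list[str]]) -> list[str]:
        """At each aligned position, keep the word proposed by the most systems.

        '@' stands for a null (deletion) slot in the word transition network;
        positions where the null wins are dropped from the combined output."""
        combined = []
        for position in zip(*aligned_hypotheses):
            word, _count = Counter(position).most_common(1)[0]
            if word != "@":
                combined.append(word)
        return combined

    if __name__ == "__main__":
        # Toy, pre-aligned outputs standing in for real system hypotheses.
        systems = [
            ["the", "cat", "sat", "@"],      # forward-based system
            ["the", "cat", "sat", "down"],   # backward-based system
            ["a",   "cat", "sat", "down"],   # another forward-based system
        ]
        print(rover_vote(systems))  # ['the', 'cat', 'sat', 'down']
    ```

    Because the forward and backward decoders explore different portions of the search space, their errors tend to differ, which is what makes this kind of vote more effective than combining several systems of the same direction.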