
    Varieties of deep artificial neural networks for speech recognition systems

    This paper presents a survey of basic methods for developing acoustic and language models based on artificial neural networks for automatic speech recognition systems. The hybrid and tandem approaches to combining Hidden Markov Models and artificial neural networks for acoustic modeling are described, as is the construction of language models using feedforward and recurrent neural networks. The survey of research conducted in this field shows that applying artificial neural networks at both the acoustic and the language modeling stages reduces the word error rate.
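    As a rough illustration of the hybrid approach mentioned above, the sketch below (in PyTorch) shows a feedforward network that maps a spliced window of acoustic frames to posteriors over tied HMM states, which are then divided by state priors to obtain scaled likelihoods for the HMM decoder. The layer sizes, context window, and uniform priors are illustrative assumptions, not values from the surveyed papers.

```python
# Minimal sketch of the hybrid HMM/DNN idea: a feedforward network predicts
# posteriors over tied HMM states; dividing by the state priors gives scaled
# likelihoods that a standard HMM decoder can consume.
import torch
import torch.nn as nn

NUM_STATES = 2000   # assumed number of tied HMM states (senones)
FRAME_DIM = 40      # assumed filterbank coefficients per frame
CONTEXT = 11        # assumed context window (5 frames each side + center)

class HybridAcousticModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FRAME_DIM * CONTEXT, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, NUM_STATES),
        )

    def forward(self, x):
        # x: (batch, FRAME_DIM * CONTEXT) spliced feature window
        return torch.log_softmax(self.net(x), dim=-1)   # log P(state | frames)

model = HybridAcousticModel()
frames = torch.randn(8, FRAME_DIM * CONTEXT)            # dummy batch of windows
log_post = model(frames)
log_prior = torch.log(torch.full((NUM_STATES,), 1.0 / NUM_STATES))  # assumed priors
scaled_loglik = log_post - log_prior                    # passed to the HMM decoder
```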

    Improving bottleneck features for Vietnamese large vocabulary continuous speech recognition system using deep neural networks

    In this paper, pre-training based on denoising auto-encoders is investigated and shown to provide good initialization for the bottleneck networks of a Vietnamese speech recognition system, yielding better recognition performance than the baseline bottleneck features reported previously. The experiments are carried out on a dataset of speech from the Voice of Vietnam (VOV) channel. The results show that the DBNF extraction for Vietnamese recognition reduces the word error rate by 14% and 39% relative compared to the baseline bottleneck features and the MFCC baseline, respectively
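    To make the pre-training idea concrete, the following hedged sketch shows a denoising auto-encoder whose encoder ends in a narrow bottleneck layer: input frames are corrupted with Gaussian noise, the network is trained to reconstruct the clean frames, and afterwards the bottleneck activations serve as features. The dimensions, noise level, and optimizer settings are assumptions for illustration, not values from the paper.

```python
# Denoising auto-encoder (DAE) pre-training with a bottleneck layer.
import torch
import torch.nn as nn

FEAT_DIM, BOTTLENECK_DIM = 440, 40   # assumed input / bottleneck sizes

encoder = nn.Sequential(
    nn.Linear(FEAT_DIM, 1024), nn.Sigmoid(),
    nn.Linear(1024, BOTTLENECK_DIM),          # bottleneck layer
)
decoder = nn.Sequential(
    nn.Linear(BOTTLENECK_DIM, 1024), nn.Sigmoid(),
    nn.Linear(1024, FEAT_DIM),
)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

def dae_pretrain_step(clean_batch, noise_std=0.3):
    # Corrupt the input, then train encoder + decoder to reconstruct clean frames.
    noisy = clean_batch + noise_std * torch.randn_like(clean_batch)
    recon = decoder(encoder(noisy))
    loss = loss_fn(recon, clean_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

batch = torch.randn(64, FEAT_DIM)    # dummy spliced feature frames
dae_pretrain_step(batch)
# After pre-training (and, typically, supervised fine-tuning on phone-state
# targets), the encoder output is used as the bottleneck feature.
bnf = encoder(batch)                 # (64, BOTTLENECK_DIM) bottleneck features
```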

    Real-Time ASR from Meetings

    The AMI(DA) system is a meeting room speech recognition system that has been developed and evaluated in the context of the NIST Rich Transcription (RT) evaluations. Recently, the "Distant Access" requirements of the AMIDA project have necessitated that the system operate in real time. Another, more difficult requirement is that the system fit into a live meeting transcription scenario. We describe an infrastructure that has allowed the AMI(DA) system to evolve into one that fulfils these extra requirements. We emphasise the components that address the live and real-time aspects
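    The live, real-time aspect can be pictured as block-wise incremental processing: audio is consumed in small blocks and handed to the recognizer as it arrives, so transcription latency stays bounded. The sketch below uses a stub recognizer with a hypothetical accept_block interface; it is not the AMI(DA) implementation, only an illustration of the processing loop.

```python
# Block-wise, real-time transcription loop with a stub recognizer.
import queue

BLOCK_SEC = 0.25                       # assumed block size for low latency

class StubRecognizer:
    """Placeholder standing in for a real incremental decoder."""
    def accept_block(self, block):
        return f"<partial hypothesis for {len(block)} samples>"

def transcribe_stream(blocks, recognizer):
    # Consume audio blocks as they arrive and emit partial hypotheses.
    while True:
        block = blocks.get()
        if block is None:              # sentinel: meeting ended
            break
        print(recognizer.accept_block(block))

blocks = queue.Queue()
blocks.put([0.0] * int(16000 * BLOCK_SEC))   # one dummy 16 kHz block
blocks.put(None)
transcribe_stream(blocks, StubRecognizer())
```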

    FEATURE AND SCORE LEVEL COMBINATION OF SUBSPACE GAUSSIANS IN LVCSR TASK

    In this paper, we investigate the use of discriminatively trained acoustic features modeled by Subspace Gaussian Mixture Models (SGMMs) for Rich Transcription meeting recognition. More specifically, we first focus on exploiting various types of complex features estimated using neural networks, combined with conventional cepstral features, and modeled by standard HMM/GMMs and SGMMs. Then, the outputs (word sequences) from individual recognizers trained on different features are combined at the score level using ROVER for both acoustic modeling techniques. Experimental results indicate three important findings: (1) SGMMs consistently outperform HMM/GMMs (a relative improvement of about 6% WER on average) when both techniques are applied to single feature sets; (2) SGMMs benefit much less from feature-level combination (1% relative improvement) than HMM/GMMs (4% relative improvement), which can eventually match the performance of SGMMs; (3) SGMMs can be significantly improved when individual systems are combined at the score level, which suggests that the SGMM systems provide complementary recognition outputs. The overall relative improvements of the combined SGMM and HMM/GMM systems are 21% and 17% respectively, compared to a standard ASR baseline
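    For intuition about score-level (hypothesis) combination, the sketch below performs ROVER-style voting over word sequences from several recognizers. Real ROVER first aligns the hypotheses into a word transition network via dynamic programming; here the hypotheses are assumed to be pre-aligned with a gap symbol, which is a deliberate simplification rather than the toolkit's actual algorithm.

```python
# Simplified ROVER-style combination: per-slot majority vote over
# pre-aligned word sequences from several recognizers.
from collections import Counter

NULL = "@"   # gap symbol in the aligned hypotheses

def rover_vote(aligned_hyps):
    """Majority vote over pre-aligned, equal-length word sequences."""
    combined = []
    for slot in zip(*aligned_hyps):
        word, _count = Counter(slot).most_common(1)[0]
        if word != NULL:
            combined.append(word)
    return combined

hyps = [
    ["the", "meeting", "starts", "@",  "now"],
    ["the", "meeting", "@",      "at", "noon"],
    ["the", "meeting", "starts", "at", "noon"],
]
print(rover_vote(hyps))   # ['the', 'meeting', 'starts', 'at', 'noon']
```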