
    Multilingual Deep Bottle Neck Features: A Study on Language Selection and Training Techniques

    Previous work has shown that training the neural networks for bottleneck feature extraction in a multilingual way can lead to improvements in word error rate and average term weighted value in a telephone keyword search task. In this work we conduct a systematic study on a) which multilingual training strategy to employ, b) the effect of language selection and the amount of multilingual training data used, and c) how to find a suitable combination of languages. We conducted our experiments on the keyword search task and the languages of the IARPA BABEL program. As a first step, we assessed the performance of each single language out of all available languages in combination with the target language. Based on these results, we then combined a multitude of languages. We also examined the influence of the amount of training data per language, as well as different techniques for combining the languages during network training. Our experiments show that data from arbitrary additional languages does not necessarily increase the performance of a system. However, when a suitable set of languages is combined, a significant gain in performance can be achieved.
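As background for the metric this abstract reports, word error rate (WER) is the word-level Levenshtein distance between a reference transcript and a hypothesis, normalized by the reference length. A minimal sketch, not tied to any system in the study:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            substitution = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

For example, `wer("a b c", "a x c")` is one substitution over three reference words, i.e. 1/3.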

    Semi-Supervised Speech Emotion Recognition with Ladder Networks

    Speech emotion recognition (SER) systems find applications in various fields such as healthcare, education, and security and defense. A major drawback of these systems is their lack of generalization across different conditions. This problem can be addressed by training models on large amounts of labeled data from the target domain, which is expensive and time-consuming. Another approach is to increase the generalization of the models. An effective way to achieve this goal is to regularize the models through multitask learning (MTL), where auxiliary tasks are learned along with the primary task. These methods often require labeled auxiliary data (gender, speaker identity, age, or other emotional descriptors), which is expensive to collect for emotion recognition. This study proposes the use of ladder networks for emotion recognition, which rely on an unsupervised auxiliary task. The primary task is a regression problem to predict emotional attributes. The auxiliary task is the reconstruction of intermediate feature representations using a denoising autoencoder. Because this auxiliary task does not require labels, the framework can be trained in a semi-supervised fashion with abundant unlabeled data from the target domain. This study shows that the proposed approach creates a powerful framework for SER, achieving superior performance to fully supervised single-task learning (STL) and MTL baselines. The approach is implemented with several acoustic features, showing that ladder networks generalize significantly better in cross-corpus settings. Compared to the STL baselines, the proposed approach achieves relative gains in concordance correlation coefficient (CCC) between 3.0% and 3.5% for within-corpus evaluations, and between 16.1% and 74.1% for cross-corpus evaluations, highlighting the power of the architecture.
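The concordance correlation coefficient (CCC) used for evaluation here combines correlation with agreement in mean and variance: CCC = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). A minimal stdlib-only sketch of the metric (not the authors' evaluation code):

```python
import statistics

def ccc(pred, true):
    """Concordance correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(pred), statistics.mean(true)
    vx, vy = statistics.pvariance(pred), statistics.pvariance(true)
    # Population covariance between predictions and ground truth.
    cov = sum((a - mx) * (b - my) for a, b in zip(pred, true)) / len(pred)
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Perfect agreement gives 1.0, perfect anti-agreement −1.0; unlike Pearson correlation, a constant offset between predictions and labels lowers the score.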

    Evaluating Novel Speech Transcription Architectures on the Spanish RTVE2020 Database

    This work presents three novel speech recognition architectures evaluated on the Spanish RTVE2020 dataset, employed as the main evaluation set in the Albayzín S2T Transcription Challenge 2020. The main objective was to improve the performance of the systems previously submitted by the authors to the challenge, in which the primary system placed second. The novel systems are based on both DNN-HMM and E2E acoustic models, for which fully- and self-supervised learning methods were included. As a result, the new speech recognition engines clearly outperformed the initial systems, improving the previous best WER of 19.27 to a new best of 17.60 achieved by the DNN-HMM based system. This work therefore describes an interesting benchmark of the latest acoustic models on a highly challenging dataset, and identifies the best-performing ones depending on the expected quality, the available resources, and the required latency.
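The reported improvement from a WER of 19.27 to 17.60 can be expressed as a relative error reduction, the usual way such gains are compared across systems:

```python
# Figures taken from the abstract above.
prev_wer, new_wer = 19.27, 17.60
relative_reduction = (prev_wer - new_wer) / prev_wer * 100
print(f"{relative_reduction:.1f}% relative WER reduction")  # roughly 8.7%
```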

    Influence of Morphological Features on Language Modeling With Neural Networks in Speech Recognition Systems

    Automatic speech recognition is a technology that allows computers to convert spoken words into text. It can be applied in various areas which involve communication between humans and machines. This thesis primarily deals with one of the two main components of speech recognition systems: the language model, which specifies the vocabulary of the system as well as the rules by which individual words can be linked into sentences. The Serbian language belongs to a group of highly inflective and morphologically rich languages, which means that it uses a number of different word endings to express the desired grammatical, syntactic, or semantic function of the given word. Such behavior often leads to a significant number of errors in speech recognition systems where, due to good acoustic matching, the recognizer correctly guesses the basic form of the word, but an error occurs in the word ending. This word ending may indicate a different morphological category, for example, word case, grammatical gender, or grammatical number. The thesis presents a new language modeling tool which, along with the word identity, can also model additional lexical and morphological features of the word, thus testing the hypothesis that this additional information can help overcome a significant number of recognition errors that result from the high inflectivity of the Serbian language.
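One common way to let a language model see morphological features alongside word identity is to factor each token into a word form plus a morphological tag and estimate n-gram statistics over the factored tokens. The sketch below is a hypothetical illustration of that idea, not the thesis tool; the toy Serbian corpus and tag names are invented for the example:

```python
from collections import Counter

# Hypothetical toy corpus: each token is a (word form, morphological tag) pair,
# so the model can distinguish endings (case/gender/number) sharing a stem.
corpus = [
    [("kuća", "Nom.Sg"), ("je", "Pres.3Sg"), ("velika", "Nom.Sg.Fem")],
    [("kuće", "Gen.Sg"), ("nema", "Pres.3Sg")],
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = [word + "|" + tag for word, tag in sentence]  # factored tokens
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

def bigram_prob(prev: str, cur: str, vocab_size: int) -> float:
    """Add-one smoothed P(cur | prev) over factored word|tag tokens."""
    return (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
```

Because "kuća|Nom.Sg" and "kuće|Gen.Sg" are distinct factored tokens, a hypothesis with the wrong ending no longer shares counts with the correct one, which is the kind of distinction the thesis exploits.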