27 research outputs found

    Lessons from Building Acoustic Models with a Million Hours of Speech

    This is a report of lessons learned while building acoustic models from 1 million hours of unlabeled speech, with labeled speech restricted to 7,000 hours. We employ student/teacher training on the unlabeled data, which scales out target generation compared with confidence-model-based methods, which require both a decoder and a confidence model. To optimize storage and to parallelize target generation, we store only the high-valued logits from the teacher model. Introducing the notion of scheduled learning, we interleave learning on unlabeled and labeled data. To scale distributed training across a large number of GPUs, we use BMUF with 64 GPUs, while performing sequence training only on labeled data with gradient-threshold-compression SGD on 16 GPUs. Our experiments show that extremely large amounts of data are indeed useful: with little hyper-parameter tuning, we obtain relative WER improvements in the 10 to 20% range, with higher gains in noisier conditions.
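The storage optimization described above, keeping only the highest-valued teacher logits per frame and rebuilding soft targets for the student from them, can be sketched as follows. This is an illustrative numpy sketch under assumed shapes and an assumed value of k, not the authors' implementation:

```python
import numpy as np

def store_topk_logits(teacher_logits, k=20):
    """Keep only the k highest-valued logits per frame to save storage.

    teacher_logits: (num_frames, num_senones) array from the teacher model.
    Returns per-frame (indices, values); all other logits are discarded.
    """
    # argpartition finds the top-k columns of each row without a full sort
    idx = np.argpartition(teacher_logits, -k, axis=1)[:, -k:]
    vals = np.take_along_axis(teacher_logits, idx, axis=1)
    return idx, vals

def student_targets(idx, vals, num_senones, temperature=1.0):
    """Rebuild sparse soft targets for the student from the stored logits."""
    probs = np.exp(vals / temperature)
    probs /= probs.sum(axis=1, keepdims=True)  # renormalize over kept logits
    targets = np.zeros((idx.shape[0], num_senones))
    np.put_along_axis(targets, idx, probs, axis=1)
    return targets
```

Because only k values and indices are written per frame instead of the full senone posterior, target generation for a million hours parallelizes cheaply across machines.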

    Personalized Acoustic Modeling by Weakly Supervised Multi-Task Deep Learning using Acoustic Tokens Discovered from Unlabeled Data

    It is well known that recognizers personalized to each user are much more effective than user-independent recognizers. With the popularity of smartphones today, it is not difficult to collect a large set of audio data for each user, but it is difficult to transcribe it. It is now possible, however, to discover acoustic tokens from unlabeled personal data in an unsupervised way. We therefore propose a multi-task deep learning framework called a phoneme-token deep neural network (PTDNN), jointly trained on unsupervised acoustic tokens discovered from unlabeled data and on very limited transcribed data, for personalized acoustic modeling. We term this scenario "weakly supervised". The underlying intuition is that the high degree of similarity between the HMM states of acoustic token models and phoneme models may help the two tasks learn from each other in this multi-task framework. Initial experiments on a personalized audio data set recorded from Facebook posts show clear improvements in both frame accuracy and word accuracy over widely used baselines such as fDLR, speaker codes, and lightly supervised adaptation. This approach complements existing speaker adaptation approaches and can be used jointly with them to yield improved results. Comment: 5 pages, 5 figures, published in IEEE ICASSP 201
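The multi-task setup described above amounts to a shared representation feeding two softmax heads, one over phoneme states and one over acoustic-token states, whose cross-entropies are combined into a single training loss. A minimal numpy sketch of that combined loss (the interpolation weight lam and all layer sizes are assumptions, not values from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multitask_loss(shared, W_ph, W_tok, y_ph, y_tok, lam=0.5):
    """Weighted sum of phoneme-state and acoustic-token cross-entropies.

    shared: (batch, hidden) activations from the shared hidden layers.
    W_ph, W_tok: weights of the two task-specific softmax output heads.
    y_ph, y_tok: integer frame labels for the phoneme and token tasks.
    """
    p_ph = softmax(shared @ W_ph)
    p_tok = softmax(shared @ W_tok)
    n = shared.shape[0]
    ce_ph = -np.log(p_ph[np.arange(n), y_ph]).mean()
    ce_tok = -np.log(p_tok[np.arange(n), y_tok]).mean()
    return lam * ce_ph + (1.0 - lam) * ce_tok
```

Gradients of this loss flow through both heads into the shared layers, which is how the token task regularizes the phoneme task when transcribed data is scarce.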

    About Combining Forward and Backward-Based Decoders for Selecting Data for Unsupervised Training of Acoustic Models

    This paper introduces the combination of speech decoders for selecting automatically transcribed speech data for unsupervised training or adaptation of acoustic models. The combination relies on a forward-based and a backward-based decoder: best performance is achieved when selecting automatically transcribed speech segments that receive the same word hypotheses from the Sphinx forward-based and the Julius backward-based transcription systems, and this selection process outperforms confidence-measure-based selection. Results are reported and discussed both for adaptation and for full training from scratch, using data resulting from various selection processes, either alone or in addition to the baseline manually transcribed data. Overall, adding the automatically transcribed segments on which the two recognizers agree to the manually transcribed data leads to significant word error rate reductions on the ESTER2 data compared with the baseline system trained only on manually transcribed speech.
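The selection rule above reduces to keeping only the segments on which the two decoders agree word-for-word. A small illustrative sketch (the segment-id/word-list structures are hypothetical, not the actual decoder output formats):

```python
def select_agreeing_segments(forward_hyps, backward_hyps):
    """Keep segments whose forward- and backward-decoder hypotheses match.

    forward_hyps, backward_hyps: dicts mapping segment id -> list of words,
    e.g. the outputs of a forward-based and a backward-based recognizer
    run on the same audio segments.
    Returns the ids of segments selected for unsupervised training.
    """
    selected = []
    for seg_id, fwd_words in forward_hyps.items():
        bwd_words = backward_hyps.get(seg_id)
        if bwd_words is not None and fwd_words == bwd_words:
            selected.append(seg_id)
    return selected
```

Exact agreement between two decoders with different search directions is a stricter filter than a single decoder's confidence score, which is why it can outperform confidence-based selection.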

    A Technology for Labeling Audio Files Using Inexact Text Transcripts

    This paper describes a technology for labeling audio files using an inexact accompanying text. A speech recognition system is first built from recordings labeled by experts. This system is used to recognize new recordings and determine the temporal boundaries of words. A comparison procedure between the recognition output and the inexact text description identifies the audio chunks for which there is an exact match. From the automatically obtained labeling, a new, more accurate speaker-independent system for large-vocabulary recognition of spontaneous Ukrainian speech is built, with a vocabulary of 125,000 word forms. This approach can be useful for automatically labeling large amounts of partially annotated audio, significantly reducing the cost of developing speech recognition systems. Experimental results show the effectiveness of the approach, reducing recognition errors by 24.8% and reaching a word recognition accuracy of 80% on broadcast data.
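The comparison step, matching the recognizer output against the inexact transcript to find exactly agreeing chunks, can be sketched with the standard library's difflib. This is a simplification of the procedure described above, and the minimum chunk length is an assumption:

```python
from difflib import SequenceMatcher

def matching_chunks(recognized, transcript, min_words=3):
    """Find word spans where the recognizer output and the inexact
    transcript agree exactly.

    recognized, transcript: lists of words. Spans returned here (together
    with the recognizer's time boundaries for those words) can be kept as
    automatically labeled training data.
    """
    sm = SequenceMatcher(a=recognized, b=transcript, autojunk=False)
    chunks = []
    for block in sm.get_matching_blocks():
        if block.size >= min_words:
            chunks.append(recognized[block.a:block.a + block.size])
    return chunks
```

Requiring a minimum span length filters out short accidental matches, so only chunks where the inexact text genuinely corresponds to the audio survive as training labels.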

    Incorporating Discriminative n-grams to Improve a Phonotactic i-vector-Based Language Recognizer

    This paper describes a new technique for combining the information of two different phonotactic systems in order to improve the results of an automatic language recognition system. The first system is based on posteriorgram counts used to generate i-vectors; the second is a variant of the first that takes into account the most discriminative n-grams, ranked by their occurrence in one language versus all the others. The proposed technique yields a relative improvement of 8.63% in Cavg on the evaluation data used for the ALBAYZIN 2012 LRE competition
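Ranking n-grams by how much more often they occur in one language than in all the others can be sketched with a smoothed log-odds score. The exact scoring used in the paper may differ; this is an illustrative choice:

```python
import math
from collections import Counter

def discriminative_ngrams(lang_counts, other_counts, top_n=10):
    """Rank phone n-grams by how strongly they indicate the target language.

    lang_counts: Counter of n-gram -> count in the target language.
    other_counts: Counter of n-gram -> count pooled over all other languages.
    Returns the top_n n-grams by smoothed log-odds (add-one smoothing).
    """
    vocab = len(lang_counts) or 1
    total_lang = sum(lang_counts.values())
    total_other = sum(other_counts.values())
    scores = {}
    for ng in lang_counts:
        p_lang = (lang_counts[ng] + 1) / (total_lang + vocab)
        p_other = (other_counts.get(ng, 0) + 1) / (total_other + vocab)
        scores[ng] = math.log(p_lang / p_other)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

The selected n-grams can then be given extra weight when building the posteriorgram counts that feed the i-vector extractor.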