Advances in Interactive Speech Transcription
Novel interactive speech transcription system that balances user effort against the maximum error tolerated in the resulting transcriptions.
Sánchez Cortina, I. (2012). Advances in Interactive Speech Transcription. http://hdl.handle.net/10251/17889
Learning from disagreement: a survey
Many tasks in Natural Language Processing (NLP) and Computer Vision (CV) offer evidence that humans disagree, from objective tasks such as part-of-speech tagging to more subjective tasks such as classifying an image or deciding whether a proposition follows from certain premises. While most learning in artificial intelligence (AI) still relies on the assumption that a single (gold) interpretation exists for each item, a growing body of research aims to develop learning methods that do not rely on this assumption. In this survey, we review the evidence for disagreement on NLP and CV tasks, focusing on tasks for which substantial datasets containing this information have been created. We discuss the most popular approaches to training models from datasets containing multiple judgments potentially in disagreement. We systematically compare these different approaches by training them with each of the available datasets, considering several ways to evaluate the resulting models. Finally, we discuss the results in depth, focusing on four key research questions, and assess how the type of evaluation and the characteristics of a dataset determine the answers to these questions.
Our results suggest, first of all, that even if we abandon the assumption of a gold standard, it is still essential to reach a consensus on how to evaluate models, because the relative performance of the various training methods is critically affected by the chosen form of evaluation. Secondly, we observed a strong dataset effect: with substantial datasets, providing many judgments by high-quality coders for each item, training directly with soft labels achieved better results than training from aggregated or even gold labels, and this result holds for both hard and soft evaluation. When these conditions do not hold, leveraging both gold and soft labels generally achieved the best results in the hard evaluation.
All datasets and models employed in this paper are freely available as supplementary materials.
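The central comparison, training directly on soft labels (the distribution of annotator judgments) instead of a single aggregated gold label, can be illustrated with a minimal sketch. The toy data, plain linear softmax model, and gradient-descent loop below are my own assumptions for illustration, not the survey's actual experimental setup:

```python
import numpy as np

# Hypothetical toy setup: 4 items, 3 classes, each judged by several annotators.
# The soft label for an item is the distribution of annotator judgments.
counts = np.array([
    [4, 1, 0],   # most annotators chose class 0
    [0, 5, 0],   # unanimous: class 1
    [2, 2, 1],   # genuine disagreement
    [0, 1, 4],   # mostly class 2
], dtype=float)
soft_labels = counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))          # toy feature vectors
W = np.zeros((5, 3))                 # linear-model weights

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Minimize cross-entropy against the soft targets rather than one-hot gold
# labels; the gradient is the same expression with soft targets substituted.
for _ in range(2000):
    p = softmax(X @ W)
    W -= 0.2 * X.T @ (p - soft_labels) / len(X)

# The model's predicted distributions move toward the annotator distributions,
# preserving the disagreement in item 3 instead of collapsing it to one class.
print(np.round(softmax(X @ W), 2))
```

Training from aggregated labels would instead replace `soft_labels` with its row-wise argmax one-hot encoding, discarding the disagreement information.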
Harnessing Automatic Speech Transcription Systems (Attelage de systèmes de transcription automatique de la parole)
This thesis presents work in the area of Large Vocabulary Continuous Speech Recognition (LVCSR) system combination. It focuses on methods for harnessing heterogeneous transcription systems in order to improve recognition quality at constrained latency. Automatic Speech Recognition (ASR) is affected by the many variabilities present in the speech signal, and a single system is generally unable to model all of them. Combining different transcription systems rests on the idea of exploiting the strengths of each to obtain an improved final transcription, for example using multiple recognizers developed at different research sites with different recognition strategies. The combination methods proposed in the literature are mostly applied a posteriori, within a multi-pass ASR architecture: the outputs of two or more systems are combined to estimate the most likely hypothesis among conflicting word pairs or differing hypotheses for the same part of an utterance. This incurs considerable latency, induced by the waiting time before the combination can be applied. Recently, an integrated combination method was proposed, based on the Driven Decoding Algorithm (DDA) paradigm, which combines different systems during decoding by integrating information from several so-called auxiliary systems into the decoding process of a primary system.
The contribution of this thesis is twofold. First, we study and analyze the robustness of the integrated driven-decoding combination method, which consists in guiding the search algorithm of a primary ASR system with the one-best hypotheses of auxiliary systems, and we propose improvements that make driven decoding more efficient and generalizable. The proposed method, called BONG, uses Bag-Of-N-Gram auxiliary hypotheses for the driven decoding. Second, we propose a new framework for harnessing low-latency parallelized single-pass speech recognizers, enabling the collaborative construction of the final recognition hypothesis at reduced latency. We study various theoretical harnessing models and present an example implementation based on a distributed client/server architecture. We then propose combination methods adapted to this harnessing architecture: we extend the BONG combination method to the reduced-latency collaboration of several single-pass systems running in parallel, and we adapt the ROVER combination method so that it can be applied during the decoding process, using a local alignment procedure followed by voting based on word frequencies. Both proposed combination methods reduce the latency of combining several single-pass systems while yielding a significant WER improvement.
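The frequency-based vote at the heart of the ROVER adaptation can be sketched as follows. This is a deliberate simplification: real ROVER first builds a word transition network by dynamic-programming alignment of the hypotheses, whereas here the hypotheses are assumed to be pre-aligned slot-by-slot, and the `@` null symbol and example sentences are purely illustrative:

```python
from collections import Counter

def frequency_vote(aligned_hyps):
    """Pick, at each aligned slot, the word output by the most recognizers.

    aligned_hyps: list of equal-length word lists (one per recognizer);
    '@' marks a null/deletion slot, as in ROVER's word transition network.
    """
    combined = []
    for slot in zip(*aligned_hyps):
        word, _ = Counter(slot).most_common(1)[0]
        if word != '@':          # a winning null slot emits nothing
            combined.append(word)
    return combined

# Three single-pass recognizers, already aligned slot-by-slot (an assumption;
# real ROVER computes this alignment itself).
hyps = [
    "the cat sat on the mat".split(),
    "the hat sat on the mat".split(),
    "the cat sat in the mat".split(),
]
print(" ".join(frequency_vote(hyps)))   # the majority word wins at each slot
```

The in-decoding variant described above applies this kind of vote over a local alignment window during the search, rather than waiting for all systems to finish.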
Measuring, refining and calibrating speaker and language information extracted from speech
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: We propose a new methodology, based on proper scoring rules, for the evaluation of the goodness of pattern recognizers with probabilistic outputs. The recognizers of interest take an input, known to belong to one of a discrete set of classes, and output a calibrated likelihood for each class. This is a generalization of the traditional use of proper scoring rules to evaluate the goodness of probability distributions. A recognizer whose outputs are in well-calibrated probability-distribution form can be applied to make cost-effective Bayes decisions over a range of applications having different cost functions. A recognizer with likelihood output can additionally be employed for a wide range of prior distributions for the to-be-recognized classes.
We use automatic speaker recognition and automatic spoken language recognition as prototypes of this type of pattern recognizer. The traditional evaluation methods in these fields, as represented by the series of NIST Speaker and Language Recognition Evaluations, evaluate hard decisions made by the recognizers, which makes these evaluations cost- and prior-dependent. The proposed methodology generalizes that of the NIST evaluations, allowing for the evaluation of recognizers which are intended to be usefully applied over a wide range of applications having variable priors and costs.
The proposal includes a family of evaluation criteria, where each member of the family is formed by a proper scoring rule. We emphasize two members of this family: (i) a non-strict scoring rule, directly representing error rate at a given prior; and (ii) the strict logarithmic scoring rule, which represents information content, or equivalently summarized error rate, or expected cost, over a wide range of applications.
We further show how to form a family of secondary evaluation criteria which, by contrast with the primary criteria, form an analysis of the goodness of calibration of the recognizers' likelihoods.
Finally, we show how to use the logarithmic scoring rule as an objective function for the discriminative training of fusion and calibration of speaker and language recognizers.
AFRIKAANSE OPSOMMING (English translation): We show how to represent, measure, calibrate and optimize the uncertainty in the output of automatic speaker-recognition and language-recognition systems. This makes the existing technology more accurate, more effective and more generally applicable.
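For the two-class speaker-detection case, the logarithmic scoring rule is commonly summarized as the log-likelihood-ratio cost (Cllr). A minimal sketch, where the function name and toy scores are my own illustration:

```python
import numpy as np

def cllr(target_llrs, nontarget_llrs):
    """Log-likelihood-ratio cost: the average logarithmic scoring-rule
    penalty, in bits, over target and non-target trials.

    A well-calibrated, informative recognizer drives this toward 0 bits;
    a useless recognizer that always outputs llr = 0 scores exactly 1 bit.
    """
    target_llrs = np.asarray(target_llrs, dtype=float)
    nontarget_llrs = np.asarray(nontarget_llrs, dtype=float)
    # Penalty grows when target trials get low llrs and non-target trials
    # get high llrs; each term is the log score at the implied posterior.
    c_tar = np.mean(np.log2(1.0 + np.exp(-target_llrs)))
    c_non = np.mean(np.log2(1.0 + np.exp(nontarget_llrs)))
    return 0.5 * (c_tar + c_non)

print(cllr([0.0], [0.0]))               # uninformative scores cost 1 bit
print(cllr([5.0, 6.0], [-5.0, -7.0]))   # confident, correct llrs cost ~0
```

Because each term is differentiable in the scores, the same quantity can serve as the objective function for the discriminative training of fusion and calibration mentioned above.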
Comparison and Combination of Confidence Measures
A set of features for word-level confidence estimation is developed. The features should be easy to implement and should require no additional knowledge beyond the information available from the speech recognizer and the training data. We compare a number of features using a common scoring method, the normalized cross entropy, and also study different ways to combine the features. An artificial neural network leads to the best performance, achieving a recognition rate of 76%. The approach is extended not only to detect recognition errors but also to distinguish between insertion and substitution errors.
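The normalized cross entropy used to score these features can be sketched as follows, using the NIST-style definition; the helper name and toy confidence values are my own:

```python
import numpy as np

def normalized_cross_entropy(confidences, correct):
    """NIST-style normalized cross entropy (NCE) for word confidence scores.

    confidences: predicted probability that each word is correct.
    correct: whether each word was actually recognized correctly.
    NCE approaches 1 for near-perfect confidences, is exactly 0 for a
    constant base-rate predictor, and is negative for worse-than-baseline
    confidence scores.
    """
    conf = np.clip(np.asarray(confidences, dtype=float), 1e-12, 1 - 1e-12)
    correct = np.asarray(correct, dtype=bool)
    p_c = correct.mean()                       # base rate of correct words
    # Entropy of the baseline that always predicts the base rate.
    h_max = -(correct.sum() * np.log2(p_c)
              + (~correct).sum() * np.log2(1 - p_c))
    # Cross entropy of the actual confidence scores.
    h = -(np.log2(conf[correct]).sum()
          + np.log2(1 - conf[~correct]).sum())
    return (h_max - h) / h_max

correct = [True, True, True, False]
print(normalized_cross_entropy([0.95, 0.9, 0.99, 0.05], correct))  # near 1
print(normalized_cross_entropy([0.75] * 4, correct))               # 0.0
```

Normalizing by the base-rate entropy is what makes scores comparable across recognizers with different word error rates.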