Analyzing and Improving Statistical Language Models for Speech Recognition
In many current speech recognizers, a statistical language model is used to
indicate how likely it is that a certain word will be spoken next, given the
words recognized so far. How can statistical language models be improved so
that more complex speech recognition tasks can be tackled? Since the knowledge
of the weaknesses of any theory often makes improving the theory easier, the
central idea of this thesis is to analyze the weaknesses of existing
statistical language models in order to subsequently improve them. To that end,
we formally define a weakness of a statistical language model in terms of the
logarithm of the total probability, LTP, a term closely related to the standard
perplexity measure used to evaluate statistical language models. We apply our
definition of a weakness to a frequently used statistical language model,
called a bi-pos model. This yields, for example, a new treatment of unknown
words that improves the model's performance by 14% to 21%. Moreover, one
of the identified weaknesses has prompted the development of our generalized
N-pos language model, which is also outlined in this thesis. It can incorporate
linguistic knowledge even when that knowledge extends over many words, which is
not feasible in a traditional N-pos model. This leads to a discussion of what
knowledge should be added to statistical language models in general, and we
give criteria for selecting potentially useful knowledge. These results show
the usefulness of both our definition of a weakness and of performing an
analysis of weaknesses of statistical language models in general.
Comment: 140 pages, postscript, approx. 500 KB; if problems with delivery, mail
to [email protected]
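The LTP measure named in the abstract can be illustrated with a small sketch. This is an assumption about its exact form, based only on the stated relation to perplexity: LTP as the summed log-probability a model assigns to a test set, and perplexity as the exponentiated negative average per-word log-probability.

```python
import math

def log_total_probability(word_probs):
    """Sum of log2 probabilities the model assigns to the test words.
    Higher (less negative) LTP means the model fits the data better."""
    return sum(math.log2(p) for p in word_probs)

def perplexity(word_probs):
    """Standard perplexity: 2 to the negative average log2 probability
    per word. A uniform model over V words has perplexity V."""
    ltp = log_total_probability(word_probs)
    return 2 ** (-ltp / len(word_probs))
```

For instance, a model that assigns probability 0.25 to each of four words has perplexity 4, matching the intuition that perplexity measures the model's effective branching factor.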
Processing and Linking Audio Events in Large Multimedia Archives: The EU inEvent Project
In the inEvent EU project [1], we aim at structuring, retrieving, and sharing large archives of networked, and dynamically changing, multimedia recordings, mainly consisting of meetings, videoconferences, and lectures. More specifically, we are developing an integrated system that performs audiovisual processing of multimedia recordings, and labels them in terms of interconnected "hyper-events" (a notion inspired by hyper-texts). Each hyper-event is composed of simpler facets, including audio-video recordings and metadata, which are then easier to search, retrieve and share. In the present paper, we mainly cover the audio processing aspects of the system, including speech recognition, speaker diarization and linking (across recordings), the use of these features for hyper-event indexing and recommendation, and the search portal. We present initial results for feature extraction from lecture recordings using the TED talks. Index Terms: Networked multimedia events; audio processing; speech recognition; speaker diarization and linking; multimedia indexing and searching; hyper-events.
Automatic Quality Estimation for ASR System Combination
Recognizer Output Voting Error Reduction (ROVER) has been widely used for
system combination in automatic speech recognition (ASR). In order to select
the most appropriate words to insert at each position in the output
transcriptions, some ROVER extensions rely on critical information such as
confidence scores and other ASR decoder features. This information, which is
not always available, depends heavily on the decoding process and sometimes
tends to overestimate the real quality of the recognized words. In this paper
we propose a novel variant of ROVER that takes advantage of ASR quality
estimation (QE) for ranking the transcriptions at "segment level" instead of:
i) relying on confidence scores, or ii) feeding ROVER with randomly ordered
hypotheses. We first introduce an effective set of features to compensate for
the absence of ASR decoder information. Then, we apply QE techniques to perform
accurate hypothesis ranking at segment-level before starting the fusion
process. The evaluation is carried out on two different tasks, in which we
respectively combine hypotheses coming from independent ASR systems and
multi-microphone recordings. In both tasks, it is assumed that the ASR decoder
information is not available. The proposed approach significantly outperforms
standard ROVER and it is competitive with two strong oracles that exploit
prior knowledge about the real quality of the hypotheses to be combined.
Compared to standard ROVER, the absolute WER improvements in the two
evaluation scenarios range from 0.5% to 7.3%.
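The core voting step that ROVER performs, and that the QE-based ranking above feeds into, can be sketched minimally. This is an illustrative simplification: real ROVER first builds a word transition network by dynamic-programming alignment of the hypotheses, whereas here the hypotheses are assumed to be pre-aligned position by position.

```python
from collections import Counter

def rover_vote(aligned_hypotheses):
    """Word-level majority voting over hypotheses that are already
    aligned position by position. At each position, the word chosen
    by the most systems wins; ties go to the first-ranked hypothesis,
    which is why hypothesis ordering (e.g. by QE score) matters."""
    output = []
    for words in zip(*aligned_hypotheses):
        best, _ = Counter(words).most_common(1)[0]
        output.append(best)
    return output
```

Because `Counter.most_common` breaks ties by insertion order, feeding the hypotheses in QE-ranked order rather than random order changes the outcome exactly where systems disagree evenly, which is the situation the paper's segment-level ranking is designed to resolve.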
On the Place of Text Data in Lifelogs, and Text Analysis via Semantic Facets
Current research in lifelog data has not paid enough attention to analysis of
cognitive activities in comparison to physical activities. We argue that as we
look into the future, wearable devices are going to be cheaper and more
prevalent and textual data will play a more significant role. Data captured by
lifelogging devices will increasingly include speech and text, potentially
useful in analysis of intellectual activities. Analyzing what a person hears,
reads, and sees, we should be able to measure the extent of cognitive activity
devoted to a certain topic or subject by a learner. Text-based lifelog records
can benefit from semantic analysis tools developed for natural language
processing. We show how semantic analysis of such text data can be achieved
through the use of taxonomic subject facets and how these facets might be
useful in quantifying cognitive activity devoted to various topics in a
person's day. We are currently developing a method to automatically create
taxonomic topic vocabularies that can be applied to this detection of
intellectual activity.
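The facet-based quantification described above can be sketched as a simple term-matching pass over captured text. The facet vocabulary shown is a hypothetical hand-built example; the abstract's proposal is to build such taxonomic vocabularies automatically.

```python
def facet_counts(text, facet_vocab):
    """Count occurrences of each facet's terms in a text stream, as a
    rough proxy for cognitive activity devoted to that topic. The
    facet vocabulary maps a facet name to a list of lowercase terms."""
    tokens = text.lower().split()
    return {facet: sum(tokens.count(term) for term in terms)
            for facet, terms in facet_vocab.items()}
```

Applied to a day's transcribed speech and read text, the per-facet counts give a crude time-free measure of which subjects dominated the person's intellectual activity.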