Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation
We present a probabilistic model that uses both prosodic and lexical cues for
the automatic segmentation of speech into topically coherent units. We propose
two methods for combining lexical and prosodic information using hidden Markov
models and decision trees. Lexical information is obtained from a speech
recognizer, and prosodic features are extracted automatically from speech
waveforms. We evaluate our approach on the Broadcast News corpus, using the
DARPA-TDT evaluation metrics. Results show that the prosodic model alone is
competitive with word-based segmentation methods. Furthermore, we achieve a
significant reduction in error by combining the prosodic and word-based
knowledge sources. Comment: 27 pages, 8 figures
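One way to picture the combination of prosodic and word-based knowledge sources is a simple log-linear interpolation of the two boundary posteriors. This is only an illustrative sketch, not the paper's actual model; the function name, the interpolation weight, and the probabilities are all hypothetical, and in practice the weight would be tuned on held-out data.

```python
import math

def combine_boundary_scores(p_prosody, p_lexical, weight=0.5):
    """Log-linearly interpolate two boundary posteriors.

    p_prosody, p_lexical: P(boundary | cues) from each knowledge
    source; `weight` is an illustrative interpolation weight.
    """
    log_b = weight * math.log(p_prosody) + (1 - weight) * math.log(p_lexical)
    log_n = weight * math.log(1 - p_prosody) + (1 - weight) * math.log(1 - p_lexical)
    # Renormalise so the boundary / no-boundary outcomes sum to one.
    z = math.exp(log_b) + math.exp(log_n)
    return math.exp(log_b) / z
```

For inputs 0.9 and 0.7 with equal weights, the combined posterior lies between the two sources but is pulled toward agreement, which is the intuition behind combining complementary cue streams.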
Evaluating cross-linguistic forced alignment of conversational data in north Australian Kriol, an under-resourced language
Speech technology is transforming language documentation; acoustic models trained on "small" languages are now technically feasible. At the same time, forced alignment built for major world languages has matured and now offers ease of use through web interfaces requiring little technical expertise. This paper provides an updated and detailed evaluation of cross-linguistic forced alignment, the approach of using forced aligners untrained on the target language. We compare two options within MAUS (Munich Automatic Segmentation System), language-independent mode vs a major world language system (here, Italian), on a single dataset, a comparison that has not previously been reported. The dataset comes from a corpus of adult conversational speech in Kriol, an English-based creole of northern Australia. The results of using MAUS Italian were better than those of using the language-independent mode and those in previous studies: the agreement rate at 20 ms was 72.1% at vowel onset and 57.2% at vowel offset. With completely misaligned tokens excluded, the overall agreement rate rose to 69.2% at 20 ms and over 90% at 50 ms. Most errors in the output SAMPA (Speech Assessment Methods Phonetic Alphabet) labels were resolvable with simple text replacements. These results offer updated benchmark data for an untrained, late-model forced alignment system. National Foreign Language Resource Center
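The agreement rates reported above (at 20 ms and 50 ms tolerances) have a straightforward operational form: the fraction of manual boundaries that the automatic alignment reproduces within the tolerance window. A minimal sketch, assuming paired per-token boundary times in seconds (the function name and example times are hypothetical, not from the study's data):

```python
def agreement_rate(auto_times, manual_times, tol=0.020):
    """Fraction of manually placed boundaries that the automatic
    alignment matches within `tol` seconds (e.g. 0.020 for 20 ms,
    0.050 for 50 ms).  Assumes the lists are paired token-by-token."""
    hits = sum(1 for a, m in zip(auto_times, manual_times)
               if abs(a - m) <= tol)
    return hits / len(manual_times)
```

With three boundaries at, say, 0.10/0.52/1.01 s against manual times 0.11/0.505/1.08 s, two fall within 20 ms, giving a rate of 2/3. Raising `tol` to 0.050 monotonically increases the rate, which is why the 50 ms figures above exceed the 20 ms ones.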
Prosody-Based Automatic Segmentation of Speech into Sentences and Topics
A crucial step in processing speech audio data for information extraction,
topic detection, or browsing/playback is to segment the input into sentence and
topic units. Speech segmentation is challenging, since the cues typically
present for segmenting text (headers, paragraphs, punctuation) are absent in
spoken language. We investigate the use of prosody (information gleaned from
the timing and melody of speech) for these tasks. Using decision tree and
hidden Markov modeling techniques, we combine prosodic cues with word-based
approaches, and evaluate performance on two speech corpora, Broadcast News and
Switchboard. Results show that the prosodic model alone performs on par with,
or better than, word-based statistical language models -- for both true and
automatically recognized words in news speech. The prosodic model achieves
comparable performance with significantly less training data, and requires no
hand-labeling of prosodic events. Across tasks and corpora, we obtain a
significant improvement over word-only models using a probabilistic combination
of prosodic and lexical information. Inspection reveals that the prosodic
models capture language-independent boundary indicators described in the
literature. Finally, cue usage is task and corpus dependent. For example, pause
and pitch features are highly informative for segmenting news speech, whereas
pause, duration and word-based cues dominate for natural conversation. Comment: 30 pages, 9 figures. To appear in Speech Communication 32(1-2), Special Issue on Accessing Information in Spoken Audio, September 2000
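The hidden Markov modelling idea in this abstract, decoding a boundary/no-boundary label sequence from per-word cue posteriors while discouraging spurious label changes, can be sketched with a tiny two-state Viterbi decode. This is an illustrative stand-in, not the paper's model: the posteriors and the switch penalty (a crude substitute for learned transition probabilities) are made up.

```python
import math

def viterbi_segment(post, switch_penalty=0.5):
    """Decode boundary (1) / no-boundary (0) labels from per-word
    posteriors `post[i]` = P(boundary after word i).  Each label
    change costs `switch_penalty` in log space -- a minimal stand-in
    for an HMM transition model.  All numbers are illustrative."""
    states = [0, 1]
    emit = lambda s, p: math.log(p if s else 1 - p)
    score = {s: emit(s, post[0]) for s in states}
    back = []
    for p in post[1:]:
        new, ptr = {}, {}
        for s in states:
            # Best predecessor state, paying the penalty on a switch.
            best = max(states,
                       key=lambda r: score[r] - (switch_penalty if r != s else 0))
            new[s] = (score[best] - (switch_penalty if best != s else 0)
                      + emit(s, p))
            ptr[s] = best
        score, back = new, back + [ptr]
    # Trace back the best path.
    s = max(states, key=score.get)
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return list(reversed(path))
```

For posteriors [0.1, 0.2, 0.9, 0.1] the decode places a single boundary at the third word: the penalty suppresses isolated noisy spikes while still admitting strongly supported boundaries.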
The 2005 AMI system for the transcription of speech in meetings
In this paper we describe the 2005 AMI system for the transcription of speech in meetings used for participation in the 2005 NIST RT evaluations. The system was designed for participation in the speech-to-text part of the evaluations, in particular for transcription of speech recorded with multiple distant microphones and independent headset microphones. System performance was tested on both conference room and lecture style meetings. Although input sources are processed using different front-ends, the recognition process is based on a unified system architecture. The system operates in multiple passes and makes use of state-of-the-art technologies such as discriminative training, vocal tract length normalisation, heteroscedastic linear discriminant analysis, speaker adaptation with maximum likelihood linear regression, and minimum word error rate decoding. We report the system performance on the official development and test sets for the NIST RT05s evaluations. The system was jointly developed in less than 10 months by a multi-site team and was shown to achieve very competitive performance …
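The NIST RT evaluations referenced above score systems by word error rate. The metric itself is standard and easy to state: the minimum number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. A self-contained sketch of that computation (not the AMI system's scoring pipeline, which used the official NIST tooling):

```python
def word_error_rate(ref, hyp):
    """Word error rate: (substitutions + deletions + insertions)
    / reference length, via Levenshtein distance over word tokens."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j].
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # delete all i reference words
    for j in range(len(h) + 1):
        d[0][j] = j          # insert all j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

For a four-word reference with one word dropped by the recogniser, the distance is one deletion, so the WER is 0.25; minimum word error rate decoding, as used in the system above, picks the hypothesis that minimises the expected value of exactly this quantity.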