
Information state based speech recognition

By Rebecca Jonson

Abstract

One of the pitfalls in spoken dialogue systems is the brittleness of automatic speech recognition (ASR). ASR systems often misrecognize user input, and they are unreliable when it comes to judging their own performance. Recognition failures and deficient confidence estimation affect the performance of a dialogue system as a whole and the impression it makes on a user. Humans outperform ASR systems on most tasks related to speech understanding. One of the reasons is that humans make use of much more knowledge. For example, humans appear to take a variety of knowledge-based aspects of the current dialogue into account when processing speech. The main purpose of this thesis is to investigate whether speech recognition can also benefit from the use of higher-level knowledge sources and dialogue context when used in spoken dialogue systems. One of the major contributions of this thesis is to provide more insight into what types of knowledge sources in spoken dialogue systems are potential contributors to the task of ASR and how such knowledge can be represented computationally. In the framework of information state based dialogue management we have an important source of semantic and pragmatic knowledge represented in the information state. We will investigate whether the knowledge in the information state can help alleviate the search problem and improve reliability estimation in speech recognition. We call this knowledge- and context-aware approach to speech recognition information state based speech recognition. The first part of this thesis investigates approaches to obtaining better initial language models more rapidly for spoken dialogue systems and ways of dynamically selecting the most appropriate models based on the dialogue context.
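The idea of selecting a language model from the dialogue context can be sketched roughly as follows. This is an illustrative toy example, not the thesis's implementation: the information state representation, the move labels, and the function names are all hypothetical, and the mapping simply keys a recognition language model on the system's latest dialogue move.

```python
# Illustrative sketch: pick a recognition language model based on the
# dialogue move the system just produced. All names (CONTEXT_LMS,
# select_language_model, the move labels) are hypothetical.

CONTEXT_LMS = {
    "ask(destination)": "lm_city_names",  # expect a place name next
    "ask(date)": "lm_dates",              # expect a date expression next
    "ask(confirm)": "lm_yes_no",          # expect yes/no or a correction
}
DEFAULT_LM = "lm_general"

def select_language_model(information_state: dict) -> str:
    """Return the LM id matching the latest system move, else a general LM."""
    last_move = information_state.get("latest_system_move")
    return CONTEXT_LMS.get(last_move, DEFAULT_LM)

state = {"latest_system_move": "ask(date)"}
print(select_language_model(state))  # lm_dates
```

A real system would swap full n-gram or grammar-based models in the recognizer rather than string identifiers, but the selection logic is the same: the information state predicts what the user is likely to say next.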
The second part of this thesis concerns the use of the speech recognition output and investigates how additional knowledge sources can enhance a dialogue system's decision-making on how to proceed with and make use of speech recognition hypotheses. The thesis presents several experimental studies addressing the issues described above and proposes an integration of the explored techniques into the GoDiS dialogue system.
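One of the techniques the topic list mentions, N-best re-ranking, can be illustrated with a minimal sketch. Assuming (hypothetically) that the dialogue context supplies a set of words the current question makes likely, a hypothesis's recognizer score can be interpolated with a simple contextual fit score; the weighting and scoring functions here are toy choices, not those used in the thesis.

```python
# Minimal sketch of context-based N-best re-ranking: combine the
# recognizer's score with a score rewarding overlap with words the
# dialogue context predicts. Weights and names are illustrative.

def context_score(hypothesis: str, expected_words: set) -> float:
    """Fraction of hypothesis words that the context predicted."""
    words = set(hypothesis.lower().split())
    return len(words & expected_words) / max(len(words), 1)

def rerank(n_best, expected_words, weight=0.5):
    """n_best: list of (hypothesis, asr_score in [0, 1]) pairs."""
    scored = [
        (hyp, (1 - weight) * asr + weight * context_score(hyp, expected_words))
        for hyp, asr in n_best
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# The contextually plausible hypothesis overtakes a slightly
# higher-scoring acoustic competitor.
n_best = [("two pairs of mondays", 0.65), ("to paris on monday", 0.62)]
expected = {"paris", "london", "monday", "tuesday"}
best, _ = rerank(n_best, expected)[0]
print(best)  # to paris on monday
```

The same combined score could also serve as a better-calibrated confidence estimate when deciding whether to accept, confirm, or reject a hypothesis.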

Topics: dialogue systems, speech recognition, language modelling, dialogue move, dialogue context, ASR, higher level knowledge, linguistic knowledge, N-Best re-ranking, confidence scoring, confidence annotation, information state, ISU approach
Year: 2010
OAI identifier: oai:gupea.ub.gu.se:2077/22169

