
    Out-of-domain Detection for Natural Language Understanding in Dialog Systems

    Natural Language Understanding (NLU) is a vital component of dialogue systems, and its ability to detect Out-of-Domain (OOD) inputs is critical in practical applications, since accepting an OOD input that the current system does not support may lead to catastrophic failure. However, most existing OOD detection methods rely heavily on manually labeled OOD samples and cannot take full advantage of unlabeled data, which limits their feasibility in practical applications. In this paper, we propose a novel model to generate high-quality pseudo OOD samples that are akin to In-Domain (IND) input utterances, thereby improving the performance of OOD detection. To this end, an autoencoder is trained to map an input utterance into a latent code, and the codes of IND and OOD samples are trained to be indistinguishable by a generative adversarial network. To provide more supervision signals, an auxiliary classifier is introduced to regularize the generated OOD samples to have indistinguishable intent labels. Experiments show that the pseudo OOD samples generated by our model can be used to effectively improve OOD detection in NLU. Besides, we demonstrate that the effectiveness of these pseudo OOD data can be further improved by efficiently utilizing unlabeled data.
    Comment: Accepted by TALS
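    The abstract does not spell out the detection rule itself; a common baseline (not necessarily this paper's method) rejects an utterance when the intent classifier's maximum softmax probability over the IND labels is low. A minimal sketch, with illustrative logits and threshold:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def detect_ood(logits, threshold=0.7):
    """Flag an utterance as OOD when the classifier's maximum
    softmax probability over the IND intent labels falls below
    the threshold (an illustrative value, not from the paper)."""
    probs = softmax(np.asarray(logits, dtype=float))
    confidence = probs.max(axis=-1)
    return confidence < threshold

# A peaked distribution is accepted as in-domain; a flat one is rejected.
ind_logits = [6.0, 0.5, 0.2]   # confident intent prediction
ood_logits = [1.0, 0.9, 1.1]   # near-uniform, low confidence
print(detect_ood([ind_logits, ood_logits]))  # [False  True]
```

    Pseudo OOD samples such as those the paper generates would typically be used to train the classifier (or a dedicated detector) so that genuine OOD inputs produce flat, low-confidence distributions like the second example.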

    The use of belief networks in natural language understanding and dialog modeling.

    Wai, Chi Man Carmen. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 129-136). Abstracts in English and Chinese.

    Contents:
    Chapter 1: Introduction
        1.1 Overview
        1.2 Natural Language Understanding
        1.3 BNs for Handling Speech Recognition Errors
        1.4 BNs for Dialog Modeling
        1.5 Thesis Goals
        1.6 Thesis Outline
    Chapter 2: Background
        2.1 Natural Language Understanding
            2.1.1 Rule-based Approaches
            2.1.2 Stochastic Approaches
            2.1.3 Phrase-Spotting Approaches
        2.2 Handling Recognition Errors in Spoken Queries
        2.3 Spoken Dialog Systems
            2.3.1 Finite-State Networks
            2.3.2 The Form-based Approaches
            2.3.3 Sequential Decision Approaches
            2.3.4 Machine Learning Approaches
        2.4 Belief Networks
            2.4.1 Introduction
            2.4.2 Bayesian Inference
            2.4.3 Applications of the Belief Networks
        2.5 Chapter Summary
    Chapter 3: Belief Networks for Natural Language Understanding
        3.1 The ATIS Domain
        3.2 Problem Formulation
        3.3 Semantic Tagging
        3.4 Belief Networks Development
            3.4.1 Concept Selection
            3.4.2 Bayesian Inferencing
            3.4.3 Thresholding
            3.4.4 Goal Identification
        3.5 Experiments on Natural Language Understanding
            3.5.1 Comparison between Mutual Information and Information Gain
            3.5.2 Varying the Input Dimensionality
            3.5.3 Multiple Goals and Rejection
            3.5.4 Comparing Grammars
        3.6 Benchmark with Decision Trees
        3.7 Performance on Natural Language Understanding
        3.8 Handling Speech Recognition Errors in Spoken Queries
            3.8.1 Corpus Preparation
            3.8.2 Enhanced Belief Network Topology
            3.8.3 BNs for Handling Speech Recognition Errors
            3.8.4 Experiments on Handling Speech Recognition Errors
            3.8.5 Significance Testing
            3.8.6 Error Analysis
        3.9 Chapter Summary
    Chapter 4: Belief Networks for Mixed-Initiative Dialog Modeling
        4.1 The CU FOREX Domain
            4.1.1 Domain-Specific Constraints
            4.1.2 Two Interaction Modalities
        4.2 The Belief Networks
            4.2.1 Informational Goal Inference
            4.2.2 Detection of Missing / Spurious Concepts
        4.3 Integrating Two Interaction Modalities
        4.4 Incorporating Out-of-Vocabulary Words
            4.4.1 Natural Language Queries
            4.4.2 Directed Queries
        4.5 Evaluation of the BN-based Dialog Model
        4.6 Chapter Summary
    Chapter 5: Scalability and Portability of the Belief Network-based Dialog Model
        5.1 Migration to the ATIS Domain
        5.2 Scalability of the BN-based Dialog Model
            5.2.1 Informational Goal Inference
            5.2.2 Detection of Missing / Spurious Concepts
            5.2.3 Context Inheritance
        5.3 Portability of the BN-based Dialog Model
            5.3.1 General Principles for Probability Assignment
            5.3.2 Performance of the BN-based Dialog Model with Hand-Assigned Probabilities
            5.3.3 Error Analysis
        5.4 Enhancements for Discourse Query Understanding
            5.4.1 Combining Trained and Handcrafted Probabilities
            5.4.2 Handcrafted Topology for BNs
            5.4.3 Performance of the Enhanced BN-based Dialog Model
        5.5 Chapter Summary
    Chapter 6: Conclusions
        6.1 Summary
        6.2 Contributions
        6.3 Future Work
    Bibliography
    Appendix A: The Two Original SQL Queries
    Appendix B: The Two Grammars, GH and GsA
    Appendix C: Probability Propagation in Belief Networks
        C.1 Computing the a posteriori probability P*(G) based on input concepts
        C.2 Computing the a posteriori probability P*(Cj) by backward inference
    Appendix D: Total 23 Concepts for the Handcrafted BN
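    As a hedged illustration of the informational goal inference the thesis describes (Chapters 3-4 and Appendix C): the thesis uses full Belief Networks, while the sketch below assumes a much simpler naive-Bayes factorization P(G | concepts) ∝ P(G) · Π P(cj | G) over binary concept presence, with invented probabilities and ATIS-flavoured goal names:

```python
# Illustrative priors and likelihoods -- not taken from the thesis.
PRIORS = {"flight_info": 0.6, "fare_info": 0.4}
# P(concept present | goal) for each binary semantic concept.
LIKELIHOODS = {
    "flight_info": {"origin": 0.9, "destination": 0.9, "fare": 0.1},
    "fare_info":   {"origin": 0.5, "destination": 0.5, "fare": 0.9},
}

def goal_posteriors(observed):
    """Posterior P(goal | concepts) under a naive-Bayes factorization:
    concepts are conditionally independent given the goal, and an
    absent concept contributes (1 - P(present | goal))."""
    scores = {}
    for goal, prior in PRIORS.items():
        p = prior
        for concept, p_present in LIKELIHOODS[goal].items():
            p *= p_present if concept in observed else (1.0 - p_present)
        scores[goal] = p
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

# Mentioning a fare concept shifts the posterior toward fare_info.
posterior = goal_posteriors({"origin", "destination", "fare"})
best = max(posterior, key=posterior.get)
```

    Goal identification with rejection (Sections 3.4.3-3.4.4) would then threshold these posteriors rather than always committing to the argmax.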

    Combining Expression and Content in Domains for Dialog Managers

    We present work in progress on abstracting dialog managers from their domains in order to implement a dialog-manager development tool that takes (among other data) a domain description as input and delivers a new dialog manager for the described domain as output. We focus on two topics: first, the construction of domain descriptions with description logics, and second, the interpretation of utterances in a given domain.
    Comment: 5 pages, uses conference.st
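    A minimal sketch of the core idea, deriving a dialog manager from a domain description rather than hard-coding it: the paper builds domain descriptions with description logics, whereas this toy uses a plain slot table, and all names here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class DomainDescription:
    """A declarative domain description: the task's slots and the
    prompts used to elicit them (a stand-in for a description-logic
    domain model)."""
    name: str
    slots: dict  # slot name -> elicitation prompt

@dataclass
class DialogManager:
    """A generic form-filling manager instantiated from any
    DomainDescription, so the same code serves every domain."""
    domain: DomainDescription
    filled: dict = field(default_factory=dict)

    def next_prompt(self):
        """Ask for the first still-missing slot; None means done."""
        for slot, prompt in self.domain.slots.items():
            if slot not in self.filled:
                return prompt
        return None

    def hear(self, slot, value):
        """Record an interpreted user utterance as a slot value."""
        self.filled[slot] = value

# A hypothetical travel domain drives a manager with no travel code in it.
travel = DomainDescription(
    name="travel",
    slots={"origin": "Where are you travelling from?",
           "destination": "Where do you want to go?"},
)
dm = DialogManager(travel)
dm.hear("origin", "Hamburg")
```

    Swapping in a different DomainDescription yields a dialog manager for a different domain, which is the tool's intended input/output relationship.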

    SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks

    In this paper, we describe a so-called screening approach for learning robust processing of spontaneously spoken language. A screening approach is a flat analysis which uses shallow sequences of category representations for analyzing an utterance at various syntactic, semantic and dialog levels. Rather than using a deeply structured symbolic analysis, we use a flat connectionist analysis. This screening approach aims at supporting speech and language processing by using (1) data-driven learning and (2) the robustness of connectionist networks. In order to test this approach, we have developed the SCREEN system, which is based on this new robust, learned and flat analysis. We focus on a detailed description of SCREEN's architecture, the flat syntactic and semantic analysis, the interaction with a speech recognizer, and a detailed evaluation of robustness under the influence of noisy or incomplete input. The main result of this paper is that flat representations allow more robust processing of spontaneous spoken language than deeply structured representations. In particular, we show how the fault tolerance and learning capability of connectionist networks can support a flat analysis for providing more robust spoken-language processing within an overall hybrid symbolic/connectionist framework.
    Comment: 51 pages, Postscript. To be published in Journal of Artificial Intelligence Research 6(1), 199
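    A toy illustration of the flat, category-per-word representation: SCREEN learns such assignments with connectionist networks, whereas the lexicon below is invented; the point is only that a shallow category sequence degrades gracefully on noisy or out-of-lexicon words instead of breaking a deep parse:

```python
# Invented word -> shallow-category lexicon; SCREEN learns these
# assignments rather than looking them up in a table.
LEXICON = {
    "i": "SPEAKER", "we": "SPEAKER",
    "want": "WANT", "need": "WANT",
    "a": "MISC", "the": "MISC",
    "meeting": "MEETING", "appointment": "MEETING",
    "tomorrow": "TIME", "monday": "TIME",
}

def flat_analysis(utterance):
    """Flat analysis: one shallow category per word, with an UNKNOWN
    fallback so hesitations and recognition errors yield a usable
    (if imperfect) sequence rather than a parse failure."""
    return [LEXICON.get(w.lower(), "UNKNOWN") for w in utterance.split()]

# A spontaneous utterance with a filled pause still gets a full sequence.
tags = flat_analysis("We need a meeting uhm tomorrow")
# ['SPEAKER', 'WANT', 'MISC', 'MEETING', 'UNKNOWN', 'TIME']
```

    A deep parser would have to recover from the "uhm" explicitly; the flat sequence simply carries it along as UNKNOWN, which is the robustness property the paper evaluates.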