7 research outputs found

    An Efficient Implementation of the Head-Corner Parser

    Get PDF
    This paper describes an efficient and robust implementation of a bi-directional, head-driven parser for constraint-based grammars. The parser was developed for the OVIS system: a Dutch spoken dialogue system in which information about public transport can be obtained by telephone. After a review of the motivation for head-driven parsing strategies, and head-corner parsing in particular, a non-deterministic version of the head-corner parser is presented. A memoization technique is applied to obtain a fast parser. A goal-weakening technique is introduced which greatly improves average-case efficiency, both in terms of speed and of space requirements. I argue in favor of such a memoization strategy with goal weakening over ordinary chart parsers because it can be applied selectively and therefore enormously reduces the space requirements of the parser, while no practical loss in time efficiency is observed. On the contrary, experiments are described in which head-corner and left-corner parsers implemented with selective memoization and goal weakening outperform 'standard' chart parsers. The experiments include the grammar of the OVIS system and the Alvey NL Tools grammar. Head-corner parsing is a mix of bottom-up and top-down processing. Certain approaches to robust parsing require purely bottom-up processing, so head-corner parsing may seem unsuitable for such techniques. However, it is shown how underspecification (which arises very naturally in a logic programming environment) can be used in the head-corner parser to allow such robust parsing techniques. A particular robust parsing model implemented in OVIS is described.
    Comment: 31 pages, uses cl.st
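
    The goal-weakening idea can be illustrated compactly. Below is a minimal Python sketch, not the OVIS implementation (which is a logic program): the 4-tuple goal layout and the parse and satisfies callbacks are assumptions made for the example. The point is that the memo table is keyed on a weakened, more general goal, so far fewer table entries arise, and the stored results are filtered against the full goal on retrieval.

        # Minimal sketch of selective memoization with goal weakening.
        # `parse` solves a (weakened) goal from scratch; `satisfies`
        # checks a result against the full feature constraints. Both
        # are hypothetical stand-ins, as is the 4-tuple goal layout.
        memo = {}

        def weaken(goal):
            # Keep only the major category and the string positions;
            # fine-grained feature constraints are deliberately dropped,
            # so many distinct goals share a single table entry.
            cat, feats, left, right = goal
            return (cat, left, right)

        def solve(goal, parse, satisfies):
            key = weaken(goal)
            if key not in memo:
                memo[key] = list(parse(key))   # solve the general goal once
            _, feats, _, _ = goal
            # Filter the shared results against the stronger original goal.
            return [r for r in memo[key] if satisfies(r, feats)]

    Because only selected goal types are tabled and the table is keyed on weakened goals, space consumption stays small, which is the trade-off the abstract describes.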

    Robust Grammatical Analysis for Spoken Dialogue Systems

    Full text link
    We argue that grammatical analysis is a viable alternative to concept spotting for processing spoken input in a practical spoken dialogue system. We discuss the structure of the grammar and a model for robust parsing that combines linguistic and statistical sources of information. Test results suggest that grammatical processing allows fast and accurate processing of spoken input.
    Comment: Accepted for JNL
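
    As a rough illustration of combining linguistic and statistical information in robust parsing, here is a generic dynamic-programming sketch in Python; it is not the paper's model. It scores a sequence of partial analyses by their statistical log-probabilities plus a linguistic preference for fewer fragments, and picks the best-scoring cover of the input; the edge format and the fragment penalty are assumptions.

        def best_cover(edges, n, frag_penalty=1.0):
            # edges: (start, end, logprob) triples for partial analyses
            # over a string with positions 0..n. Dynamic programming
            # over string positions finds the best-scoring sequence of
            # fragments covering the whole input.
            best = {0: (0.0, [])}              # position -> (score, path)
            for pos in range(1, n + 1):
                for (i, j, logprob) in edges:
                    if j == pos and i in best:
                        score = best[i][0] + logprob - frag_penalty
                        if pos not in best or score > best[pos][0]:
                            best[pos] = (score, best[i][1] + [(i, j)])
            return best.get(n)                 # None if no cover exists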

    Probabilistic Constraint Logic Programming

    Full text link
    This paper addresses two central problems for probabilistic processing models: parameter estimation from incomplete data and efficient retrieval of the most probable analyses. These questions have been answered satisfactorily only for probabilistic regular and context-free models. We address them for a more expressive probabilistic constraint logic programming model. We present a log-linear probability model for probabilistic constraint logic programming. On top of this model we define an algorithm to estimate the parameters and to select the properties of log-linear models from incomplete data. This algorithm extends the improved iterative scaling algorithm of Della Pietra, Della Pietra, and Lafferty (1995). Our algorithm applies to log-linear models in general and is accompanied by suitable approximation methods for application to large data spaces. Furthermore, we present an approach to searching for the most probable analyses under the probabilistic constraint logic programming model. This method can be applied to the ambiguity resolution problem in natural language processing applications.
    Comment: 35 pages, uses sfbart.cl
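
    The log-linear form at the heart of such models is standard: an analysis x receives probability p(x) = exp(sum_i lambda_i * f_i(x)) / Z, where the f_i are property functions, the lambda_i their weights, and Z normalizes over the candidate analyses of one input. A minimal Python sketch of this form and of most-probable-analysis retrieval follows; the property functions and weights are illustrative, and the estimation algorithm itself is not reproduced here.

        import math

        def loglinear_probs(analyses, properties, weights):
            # properties: functions f_i(analysis) -> float (e.g. counts
            # of grammatical properties); weights: the lambda_i.
            scores = [sum(w * f(a) for w, f in zip(weights, properties))
                      for a in analyses]
            z = sum(math.exp(s) for s in scores)       # partition function
            return [math.exp(s) / z for s in scores]

        def most_probable(analyses, properties, weights):
            # Ambiguity resolution: pick the highest-probability analysis.
            probs = loglinear_probs(analyses, properties, weights)
            return max(zip(probs, analyses), key=lambda p: p[0])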

    Interleaving natural language parsing and generation through uniform processing

    Get PDF
    We present a new model of natural language processing in which parsing and generation are strongly interleaved tasks. Interleaving parsing and generation is important if we assume that natural language understanding and production are not only performed in isolation but can also work together to obtain subsentential interactions in text revision or dialog systems. The core of the model is a new uniform agenda-driven tabular algorithm, called UTA. Although uniformly defined, UTA is able to configure itself dynamically for either parsing or generation, because it is fully driven by the structure of the actual input: a string for parsing and a semantic expression for generation. Efficient interleaving of parsing and generation is obtained through item sharing between the two directions. This processing strategy automatically makes items (i.e., partial results) computed in one direction available to the other direction as well. The advantage of UTA in combination with the item-sharing method is that we are able to extend the use of memoization techniques even to the interleaved case. In order to demonstrate UTA's utility for developing high-level performance methods, we present a new algorithm for incremental self-monitoring during natural language production.
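
    The agenda-driven tabular core and the item-sharing idea can be sketched generically. The following Python loop illustrates the general technique, not UTA itself: items and inference rules are abstract, and the single chart passed between runs is what lets partial results computed while parsing be reused while generating, and vice versa.

        def tabular_engine(axioms, rules, chart=None):
            # axioms: initial items (built from a string when parsing,
            # from a semantic expression when generating). rules: map a
            # newly derived item plus the chart to further items. Items
            # must be hashable. Passing in the chart of an earlier run
            # in the other direction implements item sharing.
            chart = set() if chart is None else chart
            agenda = list(axioms)
            while agenda:
                item = agenda.pop()
                if item in chart:
                    continue            # already known, possibly shared
                chart.add(item)
                for rule in rules:
                    agenda.extend(rule(item, chart))
            return chart

    Running the engine for parsing and handing the resulting chart to a generation run (with generation axioms and rules) is the sharing pattern the abstract describes.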

    Darstellung und stochastische Auflösung von Ambiguität in constraint-basiertem Parsing (Representation and stochastic resolution of ambiguity in constraint-based parsing)

    Get PDF
    This thesis investigates two complementary approaches to coping with ambiguity in natural language processing. It first presents methods that allow many competing interpretations to be stored compactly in a single shared data structure. It then proposes approaches to scoring the different interpretations with stochastic models. This raises the problem of estimating probabilities for rare events that did not occur in the training data, for which novel methods are proposed.
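
    The shared data structure alluded to above is typically a packed forest, in which an ambiguous node stores its alternative daughter sequences once and identical subtrees are shared between readings. A minimal Python sketch, with illustrative node labels and assuming nothing about the thesis's actual encoding:

        import math

        class PackedNode:
            # A node of a packed forest: each alternative is one
            # possible daughter sequence; subtrees are shared by
            # reference between readings.
            def __init__(self, label, alternatives=()):
                self.label = label
                self.alternatives = list(alternatives)

        def count_readings(node):
            # Number of distinct trees encoded: the sum over the
            # alternatives of the product over their daughters.
            if not node.alternatives:
                return 1
            return sum(math.prod(count_readings(d) for d in alt)
                       for alt in node.alternatives)

        # Two attachment readings sharing one PP subtree:
        pp = PackedNode("PP")
        s = PackedNode("S", [
            (PackedNode("NP"), PackedNode("VP", [(PackedNode("V"), pp)])),
            (PackedNode("NP", [(PackedNode("NP"), pp)]), PackedNode("VP")),
        ])
        assert count_readings(s) == 2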