
    SLR inference: An inference system for fixed-mode logic programs, based on SLR parsing

    Definite-clause grammars (DCGs) generalize context-free grammars in such a way that Prolog can be used as a parser in the presence of context-sensitive information. Prolog's proof procedure, however, is based on backtracking, which may be a source of inefficiency. Backtracking parsers for context-free grammars, for instance, were soon replaced by more efficient methods, such as LR parsers. This suggests incorporating the principles underlying LR parsing into a parser for grammars with context-sensitive information. We present a technique that transforms the program/grammar by adding leaves to the proof/parse trees and placing the contextual information in those leaves. An inference system is then easily obtained from an LR parser, since only the parts dealing with terminals (which appear at the leaves) must be modified. Although our method is restricted to programs with fixed modes, it may be preferable to DCGs under Prolog for some programs.
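
    The extension DCGs make to context-free grammars can be seen in a few lines: nonterminals carry extra arguments, and a backtracking parser threads those arguments (the context-sensitive information) through the parse. The sketch below is background illustration only, not the paper's SLR construction; the grammar (a^n b^n with the count n as contextual argument) and the function names are ours. It also exhibits the inefficiency the paper targets: the parser backtracks over candidate values of n.

```python
# Background sketch (not the paper's SLR method): a DCG-style grammar
#   s(N) --> as(N), bs(N).
# recognizing a^n b^n, with the count N threaded through the parse
# as context-sensitive information.

def as_(tokens, i):
    """Parse zero or more 'a's from position i, yielding (count, next_pos)."""
    yield 0, i                                # as(0) --> [].
    if i < len(tokens) and tokens[i] == "a":
        for n, j in as_(tokens, i + 1):       # as(s(N)) --> [a], as(N).
            yield n + 1, j

def bs(tokens, i, n):
    """Parse exactly n 'b's; the argument n is the contextual information."""
    if n == 0:
        yield i
    elif i < len(tokens) and tokens[i] == "b":
        yield from bs(tokens, i + 1, n - 1)

def s(tokens):
    """s --> as(N), bs(N): backtracks over candidate values of N."""
    for n, i in as_(tokens, 0):               # each retry is a backtrack
        for j in bs(tokens, i, n):
            if j == len(tokens):
                return n                      # accepted; N was the context
    return None

assert s(list("aabb")) == 2
assert s(list("aab")) is None
```

    An LR-style parser would avoid re-exploring these candidates; the paper's transformation moves the contextual arguments into extra leaves so that only the terminal-handling parts of an LR parser need to change.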

    Investigation of design and execution alternatives for the committed choice non-deterministic logic languages

    The general area of developing, applying and studying new and parallel models of computation is motivated by a need to overcome the limits of current von Neumann-based architectures. A key area of research in understanding how new technology can be applied to AI problem solving is through using logic languages. Logic programming languages provide a procedural interpretation for sentences of first-order logic, mainly using a class of sentences called Horn clauses. Horn clauses are open to a wide variety of parallel evaluation models, giving possible speed-ups and alternative parallel models of execution. The research in this thesis is concerned with investigating one class of parallel logic language known as the Committed Choice Non-Deterministic (CCND) languages. The investigation considers the inherent parallel behaviour of AI programs implemented in the CCND languages and the effect of various alternatives open to language implementors and designers. This is achieved by considering how various AI programming techniques map to alternative language designs, and the behaviour of these AI programs on alternative implementations of these languages. The aim of this work is to investigate how AI programming techniques are affected (qualitatively and quantitatively) by particular language features. The qualitative evaluation is a consideration of how AI programs can be mapped to the various CCND languages. The applications considered are general search algorithms (which focus on the committed-choice nature of the languages); chart parsing (which focuses on the differences between safe and unsafe languages); and meta-level inference (which focuses on the difference between deep and flat languages). The quantitative evaluation considers the inherent parallel behaviour of the resulting programs and the effect of possible implementation alternatives on this inherent behaviour. To carry out this quantitative evaluation we have implemented a system which improves on the current interpreter-based evaluation systems. The new system has an improved model of execution and allows several…
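
    The committed-choice behaviour that distinguishes these languages from Prolog can be illustrated with a toy interpreter. The sketch below is our own illustration, not taken from the thesis: each clause is a (guard, body) pair, and the committed-choice variant discards all alternatives once a guard succeeds (don't-care nondeterminism), whereas the Prolog-style variant keeps them open for backtracking (don't-know nondeterminism).

```python
# Toy illustration (ours, not from the thesis) of committed choice
# versus Prolog-style backtracking. Each "clause" is (guard, body).

def backtracking_solve(clauses, goal):
    """Prolog-style: try every clause whose guard holds; yield all answers."""
    for guard, body in clauses:
        if guard(goal):
            yield body(goal)        # alternatives remain open on backtracking

def committed_choice_solve(clauses, goal):
    """CCND-style: commit to the first clause whose guard holds;
    all other clauses are discarded, so at most one answer is produced."""
    for guard, body in clauses:
        if guard(goal):
            return body(goal)       # commitment: no backtracking past here
    return None

# max(X, Y, Z) written with overlapping guards (X == Y satisfies both).
clauses = [
    (lambda g: g[0] >= g[1], lambda g: g[0]),
    (lambda g: g[1] >= g[0], lambda g: g[1]),
]

print(list(backtracking_solve(clauses, (3, 3))))   # [3, 3] -- both clauses answer
print(committed_choice_solve(clauses, (3, 3)))     # 3      -- committed to one
```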

    Natural language software registry (second edition)


    Feasibility report: Delivering case-study based learning using artificial intelligence and gaming technologies

    This document describes an investigation into the technical feasibility of a game to support learning based on case studies. Information systems students using the game will conduct fact-finding interviews with virtual characters. We survey relevant technologies in computational linguistics and games, assess the applicability of the various approaches, and propose an architecture for the game based on existing techniques. We also propose a phased plan for the development of the game.

    Abductive speech act recognition, corporate agents and the COSMA system

    This chapter presents an overview of the DISCO project's solutions to several problems in natural language pragmatics. Its central focus is on relating utterances to intentions through speech act recognition. Subproblems include the incorporation of linguistic cues into the speech act recognition process, precise and efficient multiagent belief attribution models (Corporate Agents), and speech act representation and processing using Corporate Agents. These ideas are being tested within the COSMA appointment scheduling system, one application of the DISCO natural language interface. Abductive speech act processing in this environment is not far from realizing its potential for fully bidirectional implementation.

    DFKI publications: the first four years; 1990–1993


    An Abstract Machine for Unification Grammars

    This work describes the design and implementation of an abstract machine, Amalia, for the linguistic formalism ALE, which is based on typed feature structures. This formalism is one of the most widely accepted in computational linguistics and has been used for designing grammars in various linguistic theories, most notably HPSG. Amalia is composed of data structures and a set of instructions, augmented by a compiler from the grammatical formalism to the abstract instructions, and a (portable) interpreter of the abstract instructions. The effect of each instruction is defined using a low-level language that can be executed on ordinary hardware. The advantages of the abstract-machine approach are twofold. From a theoretical point of view, the abstract machine gives a well-defined operational semantics to the grammatical formalism. This ensures that grammars specified using our system are endowed with well-defined meaning. It makes it possible, for example, to formally verify the correctness of a compiler for HPSG, given an independent definition. From a practical point of view, Amalia is the first system that employs a direct compilation scheme for unification grammars based on typed feature structures. The use of Amalia results in much improved performance over existing systems. In order to test the machine on a realistic application, we have developed a small-scale, HPSG-based grammar for a fragment of the Hebrew language, using Amalia as the development platform. This is the first application of HPSG to a Semitic language.
    Comment: Doctoral thesis, 96 pages, many PostScript figures; uses pstricks, pst-node, psfig, fullname and a macros file.
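
    The operation that Amalia's compiled instructions ultimately implement is unification of typed feature structures. The following sketch is ours and deliberately simplified: a two-type toy hierarchy, destructive unification with forwarding pointers, and no trail for undoing bindings on backtracking (which a real abstract machine would need). It is meant only to show the kind of computation being compiled, not Amalia's instruction set.

```python
# Simplified sketch (not Amalia's design) of destructive unification of
# typed feature structures, with types joined in a toy type hierarchy.

# Toy join table: the least upper bound of two types, or None if they clash.
JOIN = {
    ("sign", "word"): "word", ("word", "sign"): "word",
    ("sign", "sign"): "sign", ("word", "word"): "word",
}

class FS:
    """A typed feature structure node: a type plus named substructures."""
    def __init__(self, type_, **features):
        self.type, self.features, self.forward = type_, dict(features), None

    def deref(self):
        """Follow forwarding pointers installed by earlier unifications."""
        node = self
        while node.forward is not None:
            node = node.forward
        return node

def unify(a, b):
    """Unify two nodes in place; return the result or None on failure.
    (A real machine would keep a trail to undo changes on failure.)"""
    a, b = a.deref(), b.deref()
    if a is b:
        return a
    joined = JOIN.get((a.type, b.type))
    if joined is None:
        return None                          # type clash: unification fails
    a.type = joined
    b.forward = a                            # b now shares with a
    for feat, sub in b.features.items():
        if feat in a.features:
            if unify(a.features[feat], sub) is None:
                return None
        else:
            a.features[feat] = sub
    return a

x = FS("sign", AGR=FS("sign"))
y = FS("word", AGR=FS("word"))
assert unify(x, y).type == "word" and x.features["AGR"].type == "word"
```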

    Lifecycle of neural semantic parsing

    Humans are born with the ability to learn to perceive, comprehend and communicate with language. Computing machines, on the other hand, only understand programming languages. To bridge the gap between humans and computers, deep semantic parsers convert natural language utterances into machine-understandable logical forms. The technique has a wide range of applications, from spoken dialogue systems to natural language interfaces. This thesis focuses on neural network-based semantic parsing. Traditional semantic parsers rely on a domain-specific grammar that pairs utterances and logical forms, and parse with a CKY-like algorithm in polynomial time. Recent advances in neural semantic parsing reformulate the task as a sequence-to-sequence learning problem. Neural semantic parsers parse a sentence in linear time and reduce the need for domain-specific assumptions, grammar learning, and extensive feature engineering. But this modeling flexibility comes at a cost: it is no longer possible to interpret how meaning composition is performed, given that logical forms are structured objects (trees or graphs). Such knowledge plays a critical role in understanding modeling limitations and thus in building better semantic parsers. Moreover, the sequence-to-sequence learning problem is fairly unconstrained, both in terms of the possible derivations to consider and in terms of the target logical forms, which can be ill-formed or unexecutable.

    The first contribution of this thesis is an improved neural semantic parser, which produces syntactically valid logical forms by following a transition system and grammar constraints. The transition system integrates the generation of domain-general aspects (i.e., valid tree structures and language-specific predicates) and domain-specific aspects (i.e., domain-specific predicates and entities) in a unified way. The model employs various neural attention mechanisms to handle mismatches between natural language and formal language, a central challenge in semantic parsing.

    Training data for semantic parsers typically consists of utterances paired with logical forms. A further challenge thus concerns the annotation of logical forms, which is labor-intensive. To write down the correct logical form of an utterance, one not only needs expertise in the semantic formalism, but also has to ensure that the logical form matches the utterance semantics. We tackle this challenge in two ways. On the one hand, we extend the neural semantic parser to a weakly supervised setting within a parser-ranker framework. The weakly supervised setup uses training data of utterance-denotation (e.g., question-answer) pairs, which are much easier to obtain and therefore make it possible to scale semantic parsers to complex domains. Our framework combines the advantages of conventional weakly supervised semantic parsers and neural semantic parsing: candidate logical forms are generated by a neural decoder and subsequently scored by a ranking component. We present methods to efficiently search for candidate logical forms while dealing with spurious ambiguity: some logical forms do not match the utterance semantics but coincidentally execute to the correct denotation, and they should be excluded from training. On the other hand, we focus on how to quickly engineer a practical neural semantic parser for closed domains by directly reducing the annotation difficulty of utterance-logical form pairs.

    We develop an interface for efficiently collecting compositional utterance-logical form pairs and then leverage this data collection method to train neural semantic parsers. Our method provides an end-to-end solution for closed-domain semantic parsing given only an ontology. We also extend the end-to-end solution to handle sequential utterances simulating a non-interactive user session. Specifically, the data collection interface is modified to collect utterance sequences which exhibit various co-reference patterns, and the neural semantic parser is extended to parse context-dependent utterances.

    In summary, this thesis covers the lifecycle of designing a neural semantic parser: from model design (i.e., how to model a neural semantic parser with an appropriate inductive bias), through training (i.e., how to perform fully supervised and weakly supervised training), to engineering (i.e., how to build a neural semantic parser from a domain ontology).
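
    The first contribution, decoding under a transition system and grammar constraints, boils down to masking invalid actions before normalizing the decoder's scores, so that only well-formed logical forms can be generated. The sketch below is a hedged illustration with an invented toy action set and validity rule (ACTIONS, valid_actions), not the thesis's actual transition system.

```python
# Illustrative sketch (assumptions ours) of grammar-constrained decoding:
# at each step, actions that would yield an ill-formed logical form are
# masked out before the softmax, so only valid sequences receive probability.

import math

ACTIONS = ["GEN(and)", "GEN(capital)", "GEN(entity)", "REDUCE"]

def valid_actions(stack):
    """Toy constraint: with an open predicate, anything goes; with an
    empty stack, the parser must open a predicate (not emit an entity)."""
    if stack:
        return set(ACTIONS)
    return {a for a in ACTIONS if a.startswith("GEN(") and "entity" not in a}

def constrained_softmax(logits, stack):
    """Renormalize over valid actions only; invalid ones get probability 0."""
    valid = valid_actions(stack)
    exps = {a: math.exp(l) for a, l in zip(ACTIONS, logits) if a in valid}
    z = sum(exps.values())
    return {a: exps.get(a, 0.0) / z for a in ACTIONS}

# With an empty stack, GEN(entity) and REDUCE are impossible:
probs = constrained_softmax([0.5, 1.2, 2.0, 0.1], stack=[])
assert probs["GEN(entity)"] == 0.0 and probs["REDUCE"] == 0.0
assert abs(sum(probs.values()) - 1.0) < 1e-9
```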