4,224 research outputs found

    A Labelled Analytic Theorem Proving Environment for Categorial Grammar

    We present a system for the investigation of computational properties of categorial grammar parsing based on a labelled analytic tableaux theorem prover. This proof method allows us to take a modular approach, in which the basic grammar can be kept constant, while a range of categorial calculi can be captured by assigning different properties to the labelling algebra. The theorem-proving strategy is particularly well suited to the treatment of categorial grammar, because it allows us to distribute the computational cost between the algorithm which deals with the grammatical types and the algebraic checker which constrains the derivation. Comment: 11 pages, LaTeX2e, uses examples.sty and a4wide.sty
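
    To make the division of labour concrete, here is a minimal, hypothetical sketch (not the paper's prover) of categorial function application in which the word-order side condition is delegated to a pluggable labelling-algebra check, so that swapping the checker switches the calculus while the rule itself stays fixed.

```python
# Minimal, hypothetical sketch (not the paper's prover): categorial function
# application with the word-order side condition factored out into a
# pluggable labelling-algebra check; swapping the checker switches the
# calculus while the grammar rule itself stays fixed.

from dataclasses import dataclass

@dataclass(frozen=True)
class Cat:
    atom: str = ""         # set for atomic categories like "np", "s"
    result: object = None  # result category of a functor
    arg: object = None     # argument category of a functor
    slash: str = ""        # "/" looks right, "\\" looks left

def forward_app(functor, f_label, argument, a_label, label_ok):
    """X/Y : a  +  Y : b  =>  X : a.b, if the labelling algebra accepts it."""
    if functor.slash == "/" and functor.arg == argument and label_ok(f_label, a_label):
        return functor.result, f_label + a_label
    return None

# Two labelling algebras over labels = tuples of word positions.
def lambek_check(left, right):       # L-style: argument must be adjacent on the right
    return left[-1] + 1 == right[0]

def permutation_check(left, right):  # LP-style: adjacency and order not enforced
    return set(left).isdisjoint(right)

np, s = Cat(atom="np"), Cat(atom="s")
iv = Cat(result=s, arg=np, slash="\\")   # s\np
tv = Cat(result=iv, arg=np, slash="/")   # (s\np)/np, a transitive verb

print(forward_app(tv, (1,), np, (2,), lambek_check))       # accepted: adjacent
print(forward_app(tv, (1,), np, (3,), lambek_check))       # None: gap violates L
print(forward_app(tv, (1,), np, (3,), permutation_check))  # accepted under LP
```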

    Principles and Implementation of Deductive Parsing

    We present a system for generating parsers based directly on the metaphor of parsing as deduction. Parsing algorithms can be represented directly as deduction systems, and a single deduction engine can interpret such deduction systems so as to implement the corresponding parser. The method generalizes easily to parsers for augmented phrase-structure formalisms, such as definite-clause grammars and other logic grammar formalisms, and has been used for rapid prototyping of parsing algorithms for a variety of formalisms, including variants of tree-adjoining grammars, categorial grammars, and lexicalized context-free grammars. Comment: 69 pages, includes full Prolog code
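
    As a hedged illustration of the metaphor (not the paper's Prolog implementation), the sketch below shows a single agenda-driven deduction engine that closes a set of axiom items under inference rules; a CKY-style parser for a toy grammar is then obtained just by choosing the items, axioms, and rules.

```python
# Hedged illustration of parsing as deduction (not the paper's Prolog code):
# one agenda-driven engine closes a set of axiom items under inference rules,
# and a particular parser is specified only by its items, axioms, and rules.

def deduce(axioms, rules, goal):
    """Forward-chain until no new items can be derived; report whether the
    goal item was proved."""
    chart, agenda = set(), list(axioms)
    while agenda:
        item = agenda.pop()
        if item in chart:
            continue
        chart.add(item)
        for rule in rules:
            agenda.extend(new for new in rule(item, chart) if new not in chart)
    return goal in chart

# CKY-style instantiation for a toy grammar in Chomsky normal form.
words = ["she", "saw", "stars"]
lexicon = {"she": "NP", "saw": "V", "stars": "NP"}
binary = {("NP", "VP"): "S", ("V", "NP"): "VP"}

# Items (A, i, j) assert that category A spans words i..j-1.
axioms = [(lexicon[w], i, i + 1) for i, w in enumerate(words)]

def combine(item, chart):
    """Binary completion rule: (A, i, j) and (B, j, k) yield (C, i, k)."""
    a, i, j = item
    for (b, m, k) in chart:
        if m == j and (a, b) in binary:
            yield (binary[(a, b)], i, k)
        if k == i and (b, a) in binary:
            yield (binary[(b, a)], m, j)

print(deduce(axioms, [combine], ("S", 0, len(words))))  # True
```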

    Efficient Normal-Form Parsing for Combinatory Categorial Grammar

    Under categorial grammars that have powerful rules like composition, a simple n-word sentence can have exponentially many parses. Generating all parses is inefficient and obscures whatever true semantic ambiguities are in the input. This paper addresses the problem for a fairly general form of Combinatory Categorial Grammar by means of an efficient, correct, and easy-to-implement normal-form parsing technique. The parser is proved to find exactly one parse in each semantic equivalence class of allowable parses; that is, spurious ambiguity (as carefully defined) is shown to be both safely and completely eliminated. Comment: 8 pages, LaTeX packaged with three .sty files, also uses cgloss4e.sty
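
    As a rough sketch of the flavour of such a technique (inspired by, but not reproducing, the constraints the paper proves correct), each derived constituent can record the rule that built it, and the parser then refuses any combination whose result could also be built in a canonical order, keeping one parse per semantic equivalence class.

```python
# Rough sketch of a normal-form filter for CCG parsing (inspired by, but not
# reproducing, the paper's proved constraints): each constituent records the
# rule that produced it, and a combination is vetoed when an equivalent parse
# would also be derivable in the canonical order.

FAPP, FCOMP, BAPP, BCOMP, LEX = "fapp", "fcomp", "bapp", "bcomp", "lex"

def nf_allowed(rule, functor_how):
    """True if a constituent built by `functor_how` may serve as the primary
    functor of `rule`.  Illustrative core constraint: the output of a forward
    composition may not feed another forward rule as its functor, and
    symmetrically for backward composition."""
    if rule in (FAPP, FCOMP) and functor_how == FCOMP:
        return False
    if rule in (BAPP, BCOMP) and functor_how == BCOMP:
        return False
    return True

# Inside a chart parser, the filter is consulted before adding the result of
# a rule application to the chart.
assert nf_allowed(FAPP, LEX)
assert not nf_allowed(FAPP, FCOMP)   # this parse would duplicate an equivalent one
```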

    Logical Conceptualization of Knowledge on the Notion of Language Communication

    The main objective of the paper is to provide a conceptual apparatus for a general logical theory of language communication. The paper outlines a formal-logical theory of language in which the concepts of the phenomenon of language communication, and of language communication in general, are defined and some conditions for their adequacy are formulated. The theory explicates the key notions of contemporary syntax, semantics, and pragmatics. It is formalized on two levels, token-level and type-level, and thus takes into account the dual (token and type) ontological character of linguistic entities. The basic notions of the theory, namely language communication, meaning, and interpretation, are introduced on the second, type-level of formalization; their definitions require the prior formalization of some of the notions introduced on the first, token-level, among them the notion of an act of communication. Owing to the theory, it is possible to address the problems of adequacy both of empirical acts of communication and of language communication in general. All the conditions of adequacy of communication discussed in the paper are valid for one-way communication (sender-recipient); nevertheless, they also apply to the reverse direction of language communication (recipient-sender), and therefore concern the problem of two-way understanding in language communication.

    Concurrent Lexicalized Dependency Parsing: The ParseTalk Model

    A grammar model for concurrent, object-oriented natural language parsing is introduced. Complete lexical distribution of grammatical knowledge is achieved by building upon the head-oriented notions of valency and dependency, while inheritance mechanisms are used to capture lexical generalizations. The underlying concurrent computation model relies upon the actor paradigm. We consider message-passing protocols for establishing dependency relations and for ambiguity handling. Comment: 90kB, 7 pages, PostScript
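
    The following is an illustrative sketch (not the ParseTalk implementation) of such a protocol: each word is an object that owns its valency frame, and dependency relations are established by exchanging attachment messages with the words already in the left context rather than by a central parsing procedure.

```python
# Illustrative sketch (not the ParseTalk implementation) of lexicalized,
# message-based dependency parsing: each word object owns its valency frame,
# and dependencies arise from attachment messages exchanged with the words
# already parsed, not from a central parsing procedure.

class WordActor:
    def __init__(self, form, category, valency):
        self.form = form
        self.category = category
        self.valency = list(valency)   # categories of still-open dependent slots
        self.dependents = []

    def receive_attach_request(self, candidate):
        """Message handler: bind `candidate` to an open valency slot, if any."""
        if candidate.category in self.valency:
            self.valency.remove(candidate.category)
            self.dependents.append(candidate)
            return True                # acknowledge the new dependency
        return False

def parse(lexicon, sentence):
    """Incremental driver: each incoming word exchanges attachment messages
    with its left context, nearest word first."""
    context = []
    for form in sentence:
        word = WordActor(form, *lexicon[form])
        for other in reversed(context):
            if word.receive_attach_request(other):   # take a left word as dependent
                continue
            if other.receive_attach_request(word):   # or attach to a left head
                break
        context.append(word)
    return context

# Toy lexicon: (category, expected dependent categories).
lexicon = {"the": ("det", []), "robot": ("noun", ["det"]), "talks": ("verb", ["noun"])}
for w in parse(lexicon, ["the", "robot", "talks"]):
    print(w.form, "->", [d.form for d in w.dependents])
```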

    Acquiring Word-Meaning Mappings for Natural Language Interfaces

    This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted Examples), that acquires a semantic lexicon from a corpus of sentences paired with semantic representations. The lexicon learned consists of phrases paired with meaning representations. WOLFIE is part of an integrated system that learns to transform sentences into representations such as logical database queries. Experimental results are presented demonstrating WOLFIE's ability to learn useful lexicons for a database interface in four different natural languages. The usefulness of the lexicons learned by WOLFIE is compared to that of lexicons acquired by a similar system, with results favorable to WOLFIE. A second set of experiments demonstrates WOLFIE's ability to scale to larger and more difficult, albeit artificially generated, corpora. In natural language acquisition, it is difficult to gather the annotated data needed for supervised learning; however, unannotated data is fairly plentiful. Active learning methods attempt to select for annotation and training only the most informative examples, and are therefore potentially very useful in natural language applications. However, most results to date for active learning have considered only standard classification tasks. To reduce annotation effort while maintaining accuracy, we apply active learning to semantic lexicons. We show that active learning can significantly reduce the number of annotated examples required to achieve a given level of performance.
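
    A generic, hedged sketch of such an active-learning loop is given below; the least-confident-first selection criterion and the helper functions are illustrative placeholders, not WOLFIE's actual procedure.

```python
# Generic sketch of an active-learning loop for reducing annotation effort;
# the least-confident-first criterion and the train/confidence/annotate
# helpers are illustrative placeholders, not WOLFIE's actual procedure.

import random

def active_learning(unlabeled, annotate, train, confidence, budget, batch=5):
    """Repeatedly annotate the `batch` least-confident examples, retraining
    after each round, until the annotation budget is spent."""
    labeled, pool = [], list(unlabeled)
    model = train(labeled)
    while budget > 0 and pool:
        pool.sort(key=lambda ex: confidence(model, ex))   # least confident first
        chosen, pool = pool[:batch], pool[batch:]
        labeled += [(ex, annotate(ex)) for ex in chosen]
        budget -= len(chosen)
        model = train(labeled)
    return model

# Toy usage with stand-in functions; a real setting would pair sentences with
# meaning representations and train a semantic-lexicon learner instead.
examples = [f"sentence {i}" for i in range(50)]
model = active_learning(
    unlabeled=examples,
    annotate=lambda ex: f"MR({ex})",
    train=lambda data: {"num_examples": len(data)},
    confidence=lambda m, ex: random.random(),
    budget=20,
)
print(model)   # {'num_examples': 20}
```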