Efficient Normal-Form Parsing for Combinatory Categorial Grammar
Under categorial grammars that have powerful rules like composition, a simple
n-word sentence can have exponentially many parses. Generating all parses is
inefficient and obscures whatever true semantic ambiguities are in the input.
This paper addresses the problem for a fairly general form of Combinatory
Categorial Grammar, by means of an efficient, correct, and easy to implement
normal-form parsing technique. The parser is proved to find exactly one parse
in each semantic equivalence class of allowable parses; that is, spurious
ambiguity (as carefully defined) is shown to be both safely and completely
eliminated.
Comment: 8 pages, LaTeX packaged with three .sty files, also uses cgloss4e.sty
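To see the spurious ambiguity the paper targets, the following toy sketch (not the paper's parser; category handling and the normal-form test are deliberately simplified, and only the forward rules are modeled) enumerates CKY derivations over a three-category sequence with forward application and forward composition, then reruns the chart under a normal-form restriction that forbids the output of forward composition from acting as the primary functor of a further forward rule:

```python
# Toy sketch of spurious ambiguity and a normal-form restriction in CCG.
# Categories are plain strings like "S/B"; only forward application (>)
# and forward composition (>B) are modeled.
from itertools import product

def fwd_apply(x, y):
    # X/Y  Y  =>  X
    if "/" in x and x.rsplit("/", 1)[1] == y:
        return x.rsplit("/", 1)[0]
    return None

def fwd_compose(x, y):
    # X/Y  Y/Z  =>  X/Z
    if "/" in x and "/" in y:
        if x.rsplit("/", 1)[1] == y.rsplit("/", 1)[0]:
            return x.rsplit("/", 1)[0] + "/" + y.rsplit("/", 1)[1]
    return None

def parses(cats, normal_form=False):
    """CKY over a category sequence; each chart item is
    (category, rule_that_built_it, derivation_tree)."""
    n = len(cats)
    chart = {(i, i + 1): [(c, "lex", c)] for i, c in enumerate(cats)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            cell = []
            for j in range(i + 1, k):
                for (lc, lrule, lt), (rc, _, rt) in product(chart[(i, j)], chart[(j, k)]):
                    for rule, fn in ((">", fwd_apply), (">B", fwd_compose)):
                        # normal-form restriction (in the spirit of the paper):
                        # the output of >B may not be the primary functor
                        # of another forward rule
                        if normal_form and lrule == ">B":
                            continue
                        res = fn(lc, rc)
                        if res is not None:
                            cell.append((res, rule, (rule, lt, rt)))
            chart[(i, k)] = cell
    return chart[(0, n)]

cats = ["S/B", "B/C", "C"]                    # a toy 3-word "sentence"
print(len(parses(cats)))                      # 2 derivations, same semantics
print(len(parses(cats, normal_form=True)))    # exactly 1 under the restriction
```

Without the restriction the toy sequence yields two derivations with identical semantics; with it, exactly one survives, which is the behaviour the paper proves for its full rule set.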
Structure preserving transformations on non-left-recursive grammars
We will be concerned with grammar covers. The first part of this paper presents a general framework for covers. The second part introduces a transformation from non-left-recursive grammars to grammars in Greibach normal form. An investigation of the structure-preserving properties of this transformation, which also serves as an illustration of our framework for covers, is presented.
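As a rough illustration of the second part only (this sketch ignores the cover and parse-relation machinery that is the paper's actual contribution, and it stops once every production begins with a terminal, i.e. weak Greibach normal form), repeatedly expanding the leading nonterminal of each right-hand side converts a non-left-recursive grammar; the absence of left recursion is what guarantees termination:

```python
# Sketch: expand leading nonterminals until every production starts
# with a terminal. Terminals remaining later in a right-hand side are
# left in place (full GNF would replace them with fresh nonterminals).
def to_gnf(grammar, terminals):
    """grammar: dict nonterminal -> list of right-hand sides (tuples).
    Returns a grammar whose productions all start with a terminal."""
    gnf = {nt: list(rhss) for nt, rhss in grammar.items()}
    changed = True
    while changed:
        changed = False
        for nt, rhss in gnf.items():
            new_rhss = []
            for rhs in rhss:
                if rhs[0] in terminals:
                    new_rhss.append(rhs)
                else:
                    # substitute each expansion of the leading nonterminal;
                    # terminates because the grammar is non-left-recursive
                    for alt in gnf[rhs[0]]:
                        new_rhss.append(alt + rhs[1:])
                    changed = True
            gnf[nt] = new_rhss
    return gnf

# toy non-left-recursive grammar: S -> A b | b,  A -> a A | a
g = {"S": [("A", "b"), ("b",)], "A": [("a", "A"), ("a",)]}
print(to_gnf(g, {"a", "b"}))
# S -> a A b | a b | b,  A -> a A | a   (every production starts with a terminal)
```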
An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities
We describe an extension of Earley's parser for stochastic context-free
grammars that computes the following quantities given a stochastic context-free
grammar and an input string: a) probabilities of successive prefixes being
generated by the grammar; b) probabilities of substrings being generated by the
nonterminals, including the entire string being generated by the grammar; c)
most likely (Viterbi) parse of the string; d) posterior expected number of
applications of each grammar production, as required for reestimating rule
probabilities. (a) and (b) are computed incrementally in a single left-to-right
pass over the input. Our algorithm compares favorably to standard bottom-up
parsing methods for SCFGs in that it works efficiently on sparse grammars by
making use of Earley's top-down control structure. It can process any
context-free rule format without conversion to some normal form, and combines
computations for (a) through (d) in a single algorithm. Finally, the algorithm
has simple extensions for processing partially bracketed inputs, and for
finding partial parses and their likelihoods on ungrammatical inputs.
Comment: 45 pages. Slightly shortened version to appear in Computational Linguistics 21
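For contrast with the standard bottom-up methods mentioned in the abstract, here is a minimal inside-probability sketch for quantity (b) on a toy grammar in Chomsky normal form (this is the CKY-style baseline, not the paper's Earley-based algorithm, which also delivers (a), (c), and (d) and requires no normal form):

```python
# Inside probabilities P(A =>* w_i..w_j) for a stochastic CFG in CNF,
# computed bottom-up, CKY-style. The sentence probability is the inside
# probability of the start symbol over the whole string.
from collections import defaultdict

def inside_probs(words, lexical, binary):
    """lexical: dict (A, word) -> prob; binary: dict (A, B, C) -> prob
    for rules A -> B C. Returns dict (A, i, j) -> P(A =>* words[i:j])."""
    n = len(words)
    inside = defaultdict(float)
    for i, w in enumerate(words):                       # terminal rules
        for (A, word), p in lexical.items():
            if word == w:
                inside[(A, i, i + 1)] += p
    for span in range(2, n + 1):                        # binary rules
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    inside[(A, i, j)] += p * inside[(B, i, k)] * inside[(C, k, j)]
    return inside

# toy grammar: S -> NP VP (1.0), VP -> V NP (1.0),
#              NP -> "she" (0.6) | "fish" (0.4), V -> "eats" (1.0)
lexical = {("NP", "she"): 0.6, ("NP", "fish"): 0.4, ("V", "eats"): 1.0}
binary = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
sent = ["she", "eats", "fish"]
probs = inside_probs(sent, lexical, binary)
print(probs[("S", 0, len(sent))])   # P(sentence) = 0.6 * 1.0 * 1.0 * 0.4 = 0.24
```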
Unsupervised Extraction of Representative Concepts from Scientific Literature
This paper studies the automated categorization and extraction of scientific
concepts from titles of scientific articles, in order to gain a deeper
understanding of their key contributions and facilitate the construction of a
generic academic knowledgebase. Towards this goal, we propose an unsupervised,
domain-independent, and scalable two-phase algorithm to type and extract key
concept mentions into aspects of interest (e.g., Techniques, Applications,
etc.). In the first phase of our algorithm we propose PhraseType, a
probabilistic generative model which exploits textual features and limited POS
tags to broadly segment text snippets into aspect-typed phrases. We extend this
model to simultaneously learn aspect-specific features and identify academic
domains in multi-domain corpora, since the two tasks mutually enhance each
other. In the second phase, we propose an approach based on adaptor grammars to
extract fine-grained concept mentions from the aspect-typed phrases without the
need for any external resources or human effort, in a purely data-driven
manner. We apply our technique to study literature from diverse scientific
domains and show significant gains over state-of-the-art concept extraction
techniques. We also present a qualitative analysis of the results obtained.
Comment: Published as a conference paper at CIKM 2017
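As a toy illustration of the extraction target only (this is neither the PhraseType model nor the adaptor-grammar phase; POS tags are supplied inline rather than predicted, to keep the sketch self-contained), contiguous adjective/noun runs in a title can serve as candidate concept mentions that a typing step would then assign to aspects such as Techniques or Applications:

```python
# Toy candidate-mention extraction: collect maximal runs of adjectives
# and nouns from a POS-tagged title as candidate concept mentions.
def np_chunks(tagged_title):
    chunks, current = [], []
    for word, tag in tagged_title:
        if tag.startswith(("JJ", "NN")):        # adjectives and nouns
            current.append(word)
        elif current:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

title = [("Unsupervised", "JJ"), ("Extraction", "NN"), ("of", "IN"),
         ("Representative", "JJ"), ("Concepts", "NNS"), ("from", "IN"),
         ("Scientific", "JJ"), ("Literature", "NN")]
print(np_chunks(title))
# ['Unsupervised Extraction', 'Representative Concepts', 'Scientific Literature']
```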
Incremental Interpretation: Applications, Theory, and Relationship to Dynamic Semantics
Why should computers interpret language incrementally? In recent years
psycholinguistic evidence for incremental interpretation has become more and
more compelling, suggesting that humans perform semantic interpretation before
constituent boundaries, possibly word by word. However, possible computational
applications have received less attention. In this paper we consider various
potential applications, in particular graphical interaction and dialogue. We
then review the theoretical and computational tools available for mapping from
fragments of sentences to fully scoped semantic representations. Finally, we
tease apart the relationship between dynamic semantics and incremental
interpretation.
Comment: Procs. of COLING 94, LaTeX (2.09 preferred), 8 pages