A Lexicalized Tree Adjoining Grammar for English
This document describes a sizable grammar of English written in the TAG formalism and implemented for use with the XTAG system. This report and the grammar described herein supersede the TAG grammar described in [Abeillé et al., 1990]. The English grammar described in this report is based on the TAG formalism developed in [Joshi et al., 1975], which has been extended to include lexicalization [Schabes et al., 1988] and unification-based feature structures [Vijay-Shanker and Joshi, 1991]. The grammar discussed in this report extends the grammar presented in [Abeillé et al., 1990] in at least two ways. First, this grammar has more detailed linguistic analyses, and second, the grammar presented in this report is fully implemented. The range of syntactic phenomena that can be handled is large and includes auxiliaries (including inversion), copula, raising and small clause constructions, topicalization, relative clauses, infinitives, gerunds, passives, adjuncts, it-clefts, wh-clefts, PRO constructions, noun-noun modifications, extraposition, determiner phrases, genitives, negation, noun-verb contractions, sentential adjuncts and imperatives. The XTAG grammar has been relatively stable since November 1993, although new analyses are still being added periodically.
A Lexicalized Tree Adjoining Grammar for English
This document describes a sizable grammar of English written in the TAG
formalism and implemented for use with the XTAG system. This report and the
grammar described herein supersedes the TAG grammar described in an earlier
1995 XTAG technical report. The English grammar described in this report is
based on the TAG formalism which has been extended to include lexicalization,
and unification-based feature structures. The range of syntactic phenomena that
can be handled is large and includes auxiliaries (including inversion), copula,
raising and small clause constructions, topicalization, relative clauses,
infinitives, gerunds, passives, adjuncts, it-clefts, wh-clefts, PRO
constructions, noun-noun modifications, extraposition, determiner sequences,
genitives, negation, noun-verb contractions, sentential adjuncts and
imperatives. This technical report corresponds to the XTAG Release 8/31/98. The
XTAG grammar is continuously updated with the addition of new analyses and
modification of old ones, and an online version of this report can be found at
the XTAG web page at http://www.cis.upenn.edu/~xtag/
Comment: 310 pages, 181 PostScript figures, uses 11pt, psfig.te
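The abstracts above describe TAG as a formalism whose lexicalized elementary trees are combined by substitution and adjunction. The adjunction operation can be sketched minimally as follows; the tree shapes and labels are invented for illustration and are not the XTAG grammar's actual analyses:

```python
# Minimal sketch of TAG adjunction (illustrative, not the XTAG implementation).
# A tree is a (label, children) tuple; a leaf has an empty children list.
# An auxiliary tree contains exactly one foot node, labeled "<root label>*".

def adjoin(tree, aux, target_label):
    """Adjoin auxiliary tree `aux` at the first node of `tree` whose label
    equals `target_label`; the subtree rooted there replaces the foot node."""
    label, children = tree
    if label == target_label:
        return _replace_foot(aux, target_label + "*", (label, children))
    return (label, [adjoin(c, aux, target_label) for c in children])

def _replace_foot(tree, foot_label, subtree):
    """Return `tree` with its foot node replaced by `subtree`."""
    label, children = tree
    if label == foot_label and not children:
        return subtree
    return (label, [_replace_foot(c, foot_label, subtree) for c in children])

# Initial tree for "John sleeps" and an auxiliary tree anchored by "soundly":
initial = ("S", [("NP", [("John", [])]),
                 ("VP", [("V", [("sleeps", [])])])])
aux = ("VP", [("VP*", []), ("Adv", [("soundly", [])])])

derived = adjoin(initial, aux, "VP")
# The derived tree yields "John sleeps soundly": the original VP subtree
# now sits under a new VP node that also dominates the adverb.
```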
Incorporating Punctuation Into the Sentence Grammar: A Lexicalized Tree Adjoining Grammar Perspective
Punctuation helps us to structure, and thus to understand, texts. Many uses of punctuation straddle the line between syntax and discourse, because they serve to combine multiple propositions within a single orthographic sentence. They allow us to insert discourse-level relations at the level of a single sentence. Just as people make use of information from punctuation in processing what they read, computers can use information from punctuation in processing texts automatically. Most current natural language processing systems fail to take punctuation into account at all, losing a valuable source of information about the text. Those which do mostly do so in a superficial way, again failing to fully exploit the information conveyed by punctuation. To be able to make use of such information in a computational system, we must first characterize its uses and find a suitable representation for encoding them.
The work here focuses on extending a syntactic grammar to handle phenomena occurring within a single sentence which have punctuation as an integral component. Punctuation marks are treated as full-fledged lexical items in a Lexicalized Tree Adjoining Grammar (LTAG), a formalism extremely well suited to encoding punctuation in the sentence grammar. Each mark anchors its own elementary trees and imposes constraints on the surrounding lexical items. I have analyzed data representing a wide variety of constructions, and added treatments of them to the large English grammar which is part of the XTAG system. The advantages of using LTAG are that its elementary units are structured trees of a suitable size for stating the constraints we are interested in, and that the derivation histories it produces contain the information the discourse grammar will need about which elementary units have been used and how they have been combined. I also consider in detail a few particularly interesting constructions where the sentence and discourse grammars meet: appositives, reported speech and uses of parentheses. My results confirm that punctuation can be used in analyzing sentences to increase the coverage of the grammar, reduce the ambiguity of certain word sequences and facilitate discourse-level processing of texts.
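The idea of each punctuation mark anchoring its own elementary trees with constraints on neighboring items can be sketched as a tiny lexicon. The templates and constraints below are invented placeholders, not the dissertation's actual tree families:

```python
# Sketch: punctuation marks as lexical anchors in an LTAG-style lexicon.
# Tree templates and constraints are illustrative assumptions only.

PUNCT_LEXICON = {
    ",": [
        # Auxiliary-tree template for an appositive: NP -> NP* , NP ,
        {"template": ("NP", ["NP*", ",", "NP", ","]),
         "constraint": "inner NP must be non-pronominal"},
    ],
    "(": [
        # Parenthetical clause attached to a sentence: S -> S* ( S )
        {"template": ("S", ["S*", "(", "S", ")"]),
         "constraint": "must be paired with a matching ')'"},
    ],
}

def trees_for(mark):
    """Return the elementary tree templates anchored by `mark`."""
    return [entry["template"] for entry in PUNCT_LEXICON.get(mark, [])]
```

In this scheme a parser looks up a mark exactly as it would a word, so the same substitution and adjunction machinery applies unchanged.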
Adapting a general parser to a sublanguage
In this paper, we propose a method to adapt a general parser (Link Parser) to
sublanguages, focusing on the parsing of texts in biology. Our main proposal is
the use of terminology (identification and analysis of terms) in order to reduce
the complexity of the text to be parsed. Several other strategies are explored
and finally combined, among them text normalization, lexicon and
morpho-guessing module extensions, and grammar rule adaptation. We compare the
parsing results before and after these adaptations.
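The terminology-based simplification described above can be sketched as a preprocessing step that collapses known multiword domain terms into single tokens, so the general parser sees a shorter, less ambiguous sentence. The term list and sentence are invented examples, not data from the paper:

```python
# Sketch of terminology-based normalization before parsing.
# TERMS and the example sentence are invented for illustration.

TERMS = ["tyrosine kinase receptor", "gene expression"]

def normalize(sentence, terms):
    """Replace each known multiword term with a single underscore-joined token."""
    # Longest terms first, so a term embedded in a longer one is not split.
    for term in sorted(terms, key=len, reverse=True):
        sentence = sentence.replace(term, term.replace(" ", "_"))
    return sentence

normalize("the tyrosine kinase receptor regulates gene expression", TERMS)
# -> "the tyrosine_kinase_receptor regulates gene_expression"
```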
Applications of Evolutionary Algorithms in Formal Languages
Starting from the model proposed by Grammatical Evolution, we extend the applicability of the parallel and cooperative search processes of Evolutionary Algorithms to a new topic: Tree Adjoining Grammar parsing. We evolved derived trees using a string-tree representation. We also used a linear matching function to compare the yield of a derived tree with a given input. The test runs produced several encouraging results, and a post-run analysis allowed us to propose several research directions for extending the currently known computational mechanisms in the mildly context-sensitive class of languages.
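The fitness idea in this abstract, comparing the yield of a candidate derived tree against the input string with a linear matching function, can be sketched as follows; the tree representation and scoring details are assumptions, not the paper's exact encoding:

```python
# Sketch of a linear matching fitness for evolved derived trees.
# A tree is a (label, children) tuple; leaves carry the terminal symbols.

def tree_yield(tree):
    """Frontier (left-to-right sequence of leaves) of a derived tree."""
    label, children = tree
    if not children:
        return [label]
    return [leaf for c in children for leaf in tree_yield(c)]

def linear_match_fitness(tree, target_tokens):
    """Count left-to-right positionwise matches between the tree's yield
    and the target token sequence (higher is better)."""
    produced = tree_yield(tree)
    return sum(p == t for p, t in zip(produced, target_tokens))

candidate = ("S", [("NP", [("the", []), ("cat", [])]),
                   ("VP", [("sleeps", [])])])
linear_match_fitness(candidate, ["the", "cat", "sleeps"])  # -> 3
```

An evolutionary loop would use such a score to select and recombine candidate derived trees until one's yield matches the input exactly.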
Learning Efficient Disambiguation
This dissertation analyses the computational properties of current
performance-models of natural language parsing, in particular Data Oriented
Parsing (DOP), points out some of their major shortcomings and suggests
suitable solutions. It provides proofs that various problems of probabilistic
disambiguation are NP-Complete under instances of these performance-models, and
it argues that none of these models accounts for attractive efficiency
properties of human language processing in limited domains, e.g. that frequent
inputs are usually processed faster than infrequent ones. The central
hypothesis of this dissertation is that these shortcomings can be eliminated by
specializing the performance-models to the limited domains. The dissertation
addresses "grammar and model specialization" and presents a new framework, the
Ambiguity-Reduction Specialization (ARS) framework, that formulates the
necessary and sufficient conditions for successful specialization. The
framework is instantiated into specialization algorithms and applied to
specializing DOP. Novelties of these learning algorithms are that 1) they limit
the hypothesis space to include only "safe" models, 2) they are expressed as
constrained optimization formulae that minimize the entropy of the training
tree-bank given the specialized grammar, under the constraint that the size of
the specialized model does not exceed a predefined maximum, and 3) they enable
integrating the specialized model with the original one in a complementary
manner. The
dissertation provides experiments with initial implementations and compares the
resulting Specialized DOP (SDOP) models to the original DOP models with
encouraging results.
Comment: 222 pages
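The constrained optimization mentioned in point 2 can be written schematically as follows; the notation is assumed for illustration, not taken from the dissertation:

```latex
\min_{G'} \; H(\mathcal{T} \mid G')
\quad \text{subject to} \quad |G'| \le M
```

where $G'$ ranges over candidate specialized grammars, $H(\mathcal{T} \mid G')$ is the entropy of the training tree-bank $\mathcal{T}$ given $G'$, and $M$ is the predefined maximum model size.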
Natural Language Processing (Almost) from Scratch
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements
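The unified architecture described above can be sketched as a window-based tagger: shared word embeddings for a window around each word feed a small feed-forward network whose output layer scores the tags of one task. All sizes and the random weights below are placeholders, not the paper's trained model:

```python
# Minimal sketch of a window-based neural tagger in the spirit of the
# unified architecture described above. Dimensions and weights are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, WINDOW, HIDDEN, TAGS = 100, 8, 3, 16, 5

E  = rng.normal(size=(VOCAB, EMB))           # word embeddings, shared across tasks
W1 = rng.normal(size=(WINDOW * EMB, HIDDEN)) # shared hidden layer
W2 = rng.normal(size=(HIDDEN, TAGS))         # task-specific output layer

def tag_scores(window_word_ids):
    """Score each tag for the center word of a window of word ids."""
    x = E[window_word_ids].reshape(-1)       # concatenate the window's embeddings
    h = np.tanh(x @ W1)                      # shared internal representation
    return h @ W2                            # one score per tag

scores = tag_scores([12, 40, 7])             # a 3-word window -> 5 tag scores
```

Swapping only the output layer (and its training signal) per task, while keeping the embeddings and hidden layer shared, is what lets one architecture cover tagging, chunking, NER and role labeling.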
Rich Linguistic Structure from Large-Scale Web Data
The past two decades have shown an unexpected effectiveness of Web-scale data in natural language processing. Even the simplest models, when paired with unprecedented amounts of unstructured and unlabeled Web data, have been shown to outperform sophisticated ones. It has been argued that the effectiveness of Web-scale data has undermined the necessity of sophisticated modeling or laborious data set curation. In this thesis, we argue for and illustrate an alternative view: that Web-scale data not only serves to improve the performance of simple models, but also allows the use of qualitatively more sophisticated models that would not be deployable otherwise, leading to even further performance gains.