Learning with Joint Inference and Latent Linguistic Structure in Graphical Models
Constructing end-to-end NLP systems requires the processing of many types of linguistic information prior to solving the desired end task. A common approach to this problem is to construct a pipeline, with one component for each task and each system's output becoming input for the next. This approach poses two problems. First, errors propagate: much like the childhood game of telephone, combining systems in this manner can lead to unintelligible outcomes. Second, each component task requires annotated training data to act as supervision for training the model. These annotations are often expensive and time-consuming to produce, may differ from each other in genre and style, and may not match the intended application.
In this dissertation we present a general framework for constructing and reasoning on joint graphical model formulations of NLP problems. Individual models are composed using weighted Boolean logic constraints, and inference is performed using belief propagation. The systems we develop are composed of two parts: one a representation of syntax, the other a desired end task (semantic role labeling, named entity recognition, or relation extraction). By modeling these problems jointly, both models are trained in a single, integrated process, with uncertainty propagated between them. This mitigates the accumulation of errors typical of pipelined approaches.
Additionally, we propose a novel marginalization-based training method in which the error signal from end-task annotations is used to guide the induction of a constrained latent syntactic representation. This allows training in the absence of syntactic training data, where the latent syntactic structure is instead optimized to best support the end-task predictions. We find that across many NLP tasks this training method offers performance comparable to fully supervised training of each individual component, and in some instances improves upon it by learning latent structures that are more appropriate for the task.
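The contrast between pipelined and joint decoding described above can be sketched with a toy model: one binary "syntax" variable S and one binary "task" variable T. The potentials and numbers below are illustrative assumptions, not taken from the dissertation; the point is only that marginalizing over an uncertain upstream decision can flip the downstream prediction.

```python
# Toy joint model: binary syntax variable S and task variable T.
# phi_s and phi_t are local (unary) potentials; psi couples S and T.
# All values are invented for illustration.
phi_s = {0: 0.55, 1: 0.45}           # weak local evidence for S=0
phi_t = {0: 1.0, 1: 1.0}             # uninformative task prior
psi = {(0, 0): 0.2, (0, 1): 0.8,     # if S=0, the task prefers T=1
       (1, 0): 0.9, (1, 1): 0.1}     # if S=1, the task prefers T=0

def pipeline_decision():
    """Pipeline: hard-commit to the argmax syntax, then decode the task."""
    s_hat = max(phi_s, key=phi_s.get)
    return max((0, 1), key=lambda t: phi_t[t] * psi[(s_hat, t)])

def joint_decision():
    """Joint: sum out the syntax variable, propagating its uncertainty."""
    marginal = {t: sum(phi_s[s] * psi[(s, t)] * phi_t[t] for s in (0, 1))
                for t in (0, 1)}
    return max(marginal, key=marginal.get)
```

Here the pipeline commits to S=0 (probability 0.55) and outputs T=1, while the joint marginal favors T=0: the S=1 hypothesis, though locally less likely, couples far more strongly to the downstream decision.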
An Emergent Approach to Text Analysis Based on a Connectionist Model and the Web
In this paper, we present a method to provide proactive assistance in text checking, based on usage relationships between words as structured on the Web. For a given sentence, the method builds a connectionist structure of relationships between word n-grams. This structure is then parameterized by means of an unsupervised, language-agnostic optimization process. Finally, the method provides a representation of the sentence from which the least prominent usage-based relational patterns emerge, making it easier to find badly written and unpopular text. The study includes the problem statement and its characterization in the literature, as well as the proposed solving approach and some experimental uses.
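As a rough sketch of the underlying idea, usage-based prominence can be approximated with plain n-gram counts: the bigram of a sentence that is rarest in reference usage data is a candidate "least prominent" relational pattern. The tiny corpus below is an invented stand-in for the Web-scale usage statistics the paper actually queries, and the counting scheme is a simplification of its connectionist structure.

```python
from collections import Counter

# Invented stand-in for Web usage statistics: bigram counts from a
# tiny reference corpus (the paper queries the Web instead).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # missing bigrams count as 0

def least_prominent(sentence):
    """Return the bigram of the sentence with the lowest usage count,
    a crude proxy for the least prominent relational pattern."""
    toks = sentence.split()
    return min(zip(toks, toks[1:]), key=lambda p: bigrams[p])
```

An unseen pairing such as "cat slept" would surface immediately, flagging the span of the sentence least supported by observed usage.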
Multiword expression processing: A survey
Multiword expressions (MWEs) are a class of linguistic forms spanning conventional word boundaries that are both idiosyncratic and pervasive across different languages. The structure of linguistic processing that depends on the clear distinction between words and phrases has to be re-thought to accommodate MWEs. The issue of MWE handling is crucial for NLP applications, where it raises a number of challenges. The emergence of solutions in the absence of guiding principles motivates this survey, whose aim is not only to provide a focused review of MWE processing, but also to clarify the nature of interactions between MWE processing and downstream applications. We propose a conceptual framework within which challenges and research contributions can be positioned. It offers a shared understanding of what is meant by "MWE processing," distinguishing the subtasks of MWE discovery and identification. It also elucidates the interactions between MWE processing and two use cases: parsing and machine translation. Many of the approaches in the literature can be differentiated according to how MWE processing is timed with respect to underlying use cases. We discuss how such orchestration choices affect the scope of MWE-aware systems. For each of the two MWE processing subtasks and for each of the two use cases, we conclude on open issues and research perspectives.
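The identification subtask distinguished above can be illustrated with a minimal greedy matcher: given a lexicon of already-discovered MWEs, mark their contiguous occurrences in a tokenized sentence by longest match. The lexicon entries are invented examples, and real systems must also handle discontinuous and inflected MWEs, which this sketch ignores.

```python
# Hypothetical lexicon: the output of the *discovery* subtask.
LEXICON = {("kick", "the", "bucket"), ("by", "and", "large"),
           ("take", "off")}
MAX_LEN = max(len(m) for m in LEXICON)

def identify(tokens):
    """Mark contiguous MWE occurrences as (start, end) token spans,
    preferring the longest match at each position."""
    spans, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 1, -1):
            if tuple(tokens[i:i + n]) in LEXICON:
                spans.append((i, i + n))
                i += n
                break
        else:
            i += 1
    return spans
```

Greedy longest match is only a baseline: it cannot recover "take ... off" with an intervening object, one of the idiosyncrasies that makes MWE identification a genuinely hard subtask.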
Neural Combinatory Constituency Parsing
Tokyo Metropolitan University (東京都立大学), doctoral thesis, Ph.D. (Information Science).
Graphical Models with Structured Factors, Neural Factors, and Approximation-aware Training
This thesis broadens the space of rich yet practical models for structured prediction. We introduce a general framework for modeling with four ingredients: (1) latent variables, (2) structural constraints, (3) learned (neural) feature representations of the inputs, and (4) training that takes the approximations made during inference into account. The thesis builds up to this framework through an empirical study of three NLP tasks: semantic role labeling, relation extraction, and dependency parsing, obtaining state-of-the-art results on the first two. We apply the resulting graphical models, with structured and neural factors and approximation-aware learning, to jointly model part-of-speech tags, a syntactic dependency parse, and semantic roles in a low-resource setting where the syntax is unobserved. We present an alternative view of these models as neural networks with a topology inspired by inference on graphical models that encode our intuitions about the data.
A robust unification-based parser for Chinese natural language processing.
Chan Shuen-ti, Roy. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 168-175). Abstracts in English and Chinese.
Chapter 1. Introduction
1.1. The nature of natural language processing
1.2. Applications of natural language processing
1.3. Purpose of study
1.4. Organization of this thesis
Chapter 2. Organization and methods in natural language processing
2.1. Organization of natural language processing system
2.2. Methods employed
2.3. Unification-based grammar processing
2.3.1. Generalized Phrase Structure Grammar (GPSG)
2.3.2. Head-driven Phrase Structure Grammar (HPSG)
2.3.3. Common drawbacks of UBGs
2.4. Corpus-based processing
2.4.1. Drawback of corpus-based processing
Chapter 3. Difficulties in Chinese language processing and its related works
3.1. A glance at the history
3.2. Difficulties in syntactic analysis of Chinese
3.2.1. Writing system of Chinese causes segmentation problem
3.2.2. Words serving multiple grammatical functions without inflection
3.2.3. Word order of Chinese
3.2.4. The Chinese grammatical word
3.3. Related works
3.3.1. Unification grammar processing approach
3.3.2. Corpus-based processing approach
3.4. Restatement of goal
Chapter 4. SERUP: Statistical-Enhanced Robust Unification Parser
Chapter 5. Step One: automatic preprocessing
5.1. Segmentation of lexical tokens
5.2. Conversion of date, time and numerals
5.3. Identification of new words
5.3.1. Proper nouns: Chinese names
5.3.2. Other proper nouns and multi-syllabic words
5.4. Defining smallest parsing unit
5.4.1. The Chinese sentence
5.4.2. Breaking down the paragraphs
5.4.3. Implementation
Chapter 6. Step Two: grammar construction
6.1. Criteria in choosing a UBG model
6.2. The grammar in details
6.2.1. The PHON feature
6.2.2. The SYN feature
6.2.3. The SEM feature
6.2.4. Grammar rules and feature principles
6.2.5. Verb phrases
6.2.6. Noun phrases
6.2.7. Prepositional phrases
6.2.8. "Ba2" and "Bei4" constructions
6.2.9. The terminal node S
6.2.10. Summary of phrasal rules
6.2.11. Morphological rules
Chapter 7. Step Three: resolving structural ambiguities
7.1. Sources of ambiguities
7.2. The traditional practices: an illustration
7.3. Deficiency of current practices
7.4. A new point of view: Wu (1999)
7.5. Improvement over Wu (1999)
7.6. Conclusion on semantic features
Chapter 8. Implementation, performance and evaluation
8.1. Implementation
8.2. Performance and evaluation
8.2.1. The test set
8.2.2. Segmentation of lexical tokens
8.2.3. New word identification
8.2.4. Parsing unit segmentation
8.2.5. The grammar
8.3. Overall performance of SERUP
Chapter 9. Conclusion
9.1. Summary of this thesis
9.2. Contribution of this thesis
9.3. Future work
References
Appendix I
Appendix II
Appendix III
Grammatical theory: From transformational grammar to constraint-based approaches. Second revised and extended edition.
This book is superseded by the third edition, available at http://langsci-press.org/catalog/book/255.
This book introduces formal grammar theories that play a role in current linguistic theorizing (Phrase Structure Grammar, Transformational Grammar/Government & Binding, Generalized Phrase Structure Grammar, Lexical Functional Grammar, Categorial Grammar, Head-Driven Phrase Structure Grammar, Construction Grammar, Tree Adjoining Grammar). The key assumptions are explained and it is shown how the respective theory treats arguments and adjuncts, the active/passive alternation, local reorderings, verb placement, and fronting of constituents over long distances. The analyses are explained with German as the object language.
The second part of the book compares these approaches with respect to their predictions regarding language acquisition and psycholinguistic plausibility. The nativism hypothesis, which assumes that humans possess genetically determined innate language-specific knowledge, is critically examined and alternative models of language acquisition are discussed. The second part then addresses controversial issues of current theory building, such as whether flat or binary branching structures are more appropriate, whether constructions should be treated on the phrasal or the lexical level, and whether abstract, non-visible entities should play a role in syntactic analyses. It is shown that the analyses suggested in the respective frameworks are often translatable into each other. The book closes with a chapter showing how properties common to all languages or to certain classes of languages can be captured.
The book is a translation of the German book Grammatiktheorie, which was published by Stauffenburg in 2010. The following quotes are taken from reviews:
With this critical yet fair reflection on various grammatical theories, Müller fills what was a major gap in the literature. Karen Lehmann, Zeitschrift für Rezensionen zur germanistischen Sprachwissenschaft, 2012
Stefan Müller’s recent introductory textbook, Grammatiktheorie, is an astonishingly comprehensive and insightful survey for beginning students of the present state of syntactic theory. Wolfgang Sternefeld und Frank Richter, Zeitschrift für Sprachwissenschaft, 2012
This is the kind of work that has been sought after for a while [...] The impartial and objective discussion offered by the author is particularly refreshing. Werner Abraham, Germanistik, 2012
This book is a new edition of http://langsci-press.org/catalog/book/25