Bayesian nonparametric models for name disambiguation and supervised learning
This thesis presents new Bayesian nonparametric models, and approaches for their development,
for the problems of name disambiguation and supervised learning. Bayesian
nonparametric methods form an increasingly popular approach to problems
that demand a high degree of model flexibility. However, the field is relatively new,
and many areas need further investigation. Previous work on Bayesian
nonparametrics has fully explored neither the problems of entity disambiguation and
supervised learning nor the advantages of nested hierarchical models. Entity disambiguation
is a widely encountered problem where different references need to be linked
to a real underlying entity. This problem is often unsupervised as there is no previously
known information about the entities. Further to this, effective use of Bayesian
nonparametrics offers a new approach to tackling supervised problems, which are frequently
encountered.
The main original contribution of this thesis is a set of new structured Dirichlet process
mixture models for name disambiguation and supervised learning that can also
have a wide range of applications. These models use techniques from Bayesian statistics,
including hierarchical and nested Dirichlet processes, generalised linear models,
Markov chain Monte Carlo methods and optimisation techniques such as BFGS. The
new models have tangible advantages over existing methods in the field, as shown by
experiments on real-world datasets including citation databases and classification and
regression datasets.
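As background, the standard Dirichlet process mixture and the hierarchical Dirichlet process that these models build on can be sketched as follows; these are the textbook formulations, not the thesis's exact specifications:

    % Dirichlet process mixture: observations share the atoms of a
    % discrete random measure G, which induces a clustering.
    G \sim \mathrm{DP}(\alpha, G_0), \qquad
    \theta_i \mid G \sim G, \qquad
    x_i \mid \theta_i \sim F(\theta_i)

    % Hierarchical Dirichlet process: groups j share atoms through a
    % common discrete base measure G_0.
    G_0 \sim \mathrm{DP}(\gamma, H), \qquad
    G_j \mid G_0 \sim \mathrm{DP}(\alpha, G_0), \qquad
    \theta_{ji} \mid G_j \sim G_j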
I develop the unsupervised author-topic space model for author disambiguation, which,
unlike traditional author disambiguation approaches, uses free text to perform disambiguation.
The model incorporates a name variant model based on a nonparametric
Dirichlet language model. It handles novel, unseen name variants and can model the
unknown authors of the text of the documents. Through this, the model
can disambiguate authors with no prior knowledge of the number of true authors in the
dataset, even when the authors have identical names.
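To illustrate the nonparametric flavour of this setup, here is a minimal, hypothetical sketch of a Chinese restaurant process prior, which lets the number of author clusters grow with the data rather than being fixed in advance; it is not the thesis's actual model:

    import random

    def crp_assign(n_customers, alpha=1.0):
        """Sample cluster assignments from a Chinese restaurant process.

        The number of clusters is unbounded a priori: customer i joins an
        existing cluster with probability proportional to its size, or opens
        a new cluster with probability proportional to alpha.
        """
        assignments = []
        cluster_sizes = []
        for i in range(n_customers):
            # Seating probabilities: each existing cluster, plus a new one.
            weights = cluster_sizes + [alpha]
            r = random.uniform(0, sum(weights))
            cumulative, choice = 0.0, len(cluster_sizes)
            for k, w in enumerate(weights):
                cumulative += w
                if r < cumulative:
                    choice = k
                    break
            if choice == len(cluster_sizes):
                cluster_sizes.append(1)      # open a new cluster (new author)
            else:
                cluster_sizes[choice] += 1
            assignments.append(choice)
        return assignments

    # Example: 20 mentions; the number of distinct "authors" discovered
    # varies from run to run rather than being fixed in advance.
    print(crp_assign(20))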
I use a model for nesting Dirichlet processes named the hybrid NDP-HDP. This
model allows Dirichlet processes to be clustered together and adds an additional level of
structure to the hierarchical Dirichlet process. I also develop a new hierarchical extension
to the hybrid NDP-HDP. I develop this model into the grouped author-topic model
for the entity disambiguation task. The grouped author-topic model uses clusters to model the co-occurrence of entities in documents, which can be interpreted as research
groups. Since this model does not require entities to be linked to specific words in a
document, it overcomes the problems of some existing author-topic models. The model
incorporates a new method for modelling name variants, so that domain-specific name
variant models can be used.
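For orientation, the standard nested Dirichlet process that underlies such constructions (the hybrid NDP-HDP itself is this thesis's own construction) draws each group's distribution from a DP whose atoms are themselves DPs, so groups sharing an atom share an entire mixture, which clusters the Dirichlet processes:

    % Nested DP: Q is a distribution over distributions; groups j that
    % draw the same atom G_j from Q are clustered together.
    Q \sim \mathrm{DP}\big(\alpha, \mathrm{DP}(\gamma, H)\big), \qquad
    G_j \mid Q \sim Q, \qquad
    \theta_{ji} \mid G_j \sim G_j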
Lastly, I develop extensions to supervised latent Dirichlet allocation, a type of supervised
topic model. The keyword-supervised LDA model predicts document responses
more accurately by modelling the effect of individual words and their contexts directly.
The supervised HDP model gains flexibility by using Bayesian nonparametrics
for supervised learning. These models are evaluated on a number of classification
and regression problems, and the results show that they outperform existing supervised
topic modelling approaches. The models can also be extended to use information similar
to that used by the previous models, incorporating additional information such as entities and
document titles to improve prediction.
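As background for these extensions, standard supervised LDA (Blei and McAuliffe, 2007) attaches a generalised linear response to each document's empirical topic proportions; in the Gaussian regression case:

    % Mean topic assignment of document d, and its linear-Gaussian response.
    \bar{z}_d = \frac{1}{N_d} \sum_{n=1}^{N_d} z_{dn}, \qquad
    y_d \mid \bar{z}_d, \eta, \sigma^2 \sim \mathcal{N}\!\left(\eta^{\top}\bar{z}_d,\ \sigma^2\right)

The extensions above keep this supervised layer but change what enters the regression (individual words and their contexts) or replace the finite topic model with an HDP.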
Research in the Language, Information and Computation Laboratory of the University of Pennsylvania
This report takes its name from the Computational Linguistics Feedback Forum (CLiFF), an informal discussion group for students and faculty. However, the scope of the research covered in this report is broader than the title might suggest; this is the yearly report of the LINC Lab, the Language, Information and Computation Laboratory of the University of Pennsylvania.
It may at first be hard to see the threads that bind together the work presented here, work by faculty, graduate students and postdocs in the Computer Science and Linguistics Departments, and the Institute for Research in Cognitive Science. It includes prototypical Natural Language fields such as Combinatory Categorial Grammars, Tree Adjoining Grammars, syntactic parsing and the syntax-semantics interface; but it extends to statistical methods, plan inference, instruction understanding, intonation, causal reasoning, free word order languages, geometric reasoning, medical informatics, connectionism, and language acquisition.
Naturally, this introduction cannot spell out all the connections between these abstracts; we invite you to explore them on your own. In fact, with this issue it’s easier than ever to do so: this document is accessible on the “information superhighway”. Just call up http://www.cis.upenn.edu/~cliff-group/94/cliffnotes.html
In addition, you can find many of the papers referenced in the CLiFF Notes on the net. Most can be obtained by following links from the authors’ abstracts in the web version of this report.
The abstracts describe the researchers’ many areas of investigation, explain their shared concerns, and present some interesting work in Cognitive Science. We hope the new online format makes the CLiFF Notes a more useful and interesting guide to Computational Linguistics activity at Penn.
Aspects of Coherence for Entity Analysis
Natural language understanding is an important topic in natural language processing.
Given a text, a computer program should, at the very least, be able to understand
what the text is about, and ideally also situate it in its extra-textual context
and understand what purpose it serves. What exactly it means to understand what a
text is about is an open question, but it is generally accepted that, at a minimum,
understanding involves being able to answer questions like “Who did what to whom?
Where? When? How? And Why?”. Entity analysis, the computational analysis of
entities mentioned in a text, aims to support answering the questions “Who?” and
“Whom?” by identifying entities mentioned in a text. If the answers to “Where?”
and “When?” are specific, named locations and events, entity analysis can also
provide these answers. Entity analysis aims to answer these questions by performing
entity linking, that is, linking mentions of entities to their corresponding entry in
a knowledge base; coreference resolution, that is, identifying all mentions in a text
that refer to the same entity; and entity typing, that is, assigning a label such as
Person to mentions of entities.
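To make the three tasks concrete, here is a minimal, hypothetical representation of an analysed mention; the class and field names are assumptions for illustration, not the thesis's actual data model:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Mention:
        """One entity mention with the outputs of the three analysis tasks."""
        text: str             # surface form as it appears in the document
        kb_id: Optional[str]  # entity linking: knowledge-base entry, None if unlinkable
        chain_id: int         # coreference: id shared by co-referring mentions
        entity_type: str      # entity typing: label such as "Person"

    # "Angela Merkel ... She visited Paris." (Wikidata-style identifiers)
    mentions = [
        Mention("Angela Merkel", "Q567", chain_id=0, entity_type="Person"),
        Mention("She", None, chain_id=0, entity_type="Person"),
        Mention("Paris", "Q90", chain_id=1, entity_type="Location"),
    ]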
In this thesis, we study how different aspects of coherence can be exploited to
improve entity analysis. Our main contribution is a method that allows exploiting
knowledge-rich, specific aspects of coherence, namely geographic, temporal, and
entity type coherence. Geographic coherence expresses the intuition that entities
mentioned in a text tend to be geographically close. Similarly, temporal coherence
captures the intuition that entities mentioned in a text tend to be close in the
temporal dimension. Entity type coherence is based on the observation that in a text
about a certain topic, such as sports, the entities mentioned in it tend to have the
same or related entity types, such as sports team or athlete. We show how to integrate
features modeling these aspects of coherence into entity linking systems and
establish their utility in extensive experiments covering different datasets and systems.
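As a purely illustrative example of such a feature, geographic coherence could be scored as the average pairwise distance between the coordinates of the candidate entities under consideration; the sketch below is an assumption about how such a feature might be computed, not the thesis's implementation:

    from math import radians, sin, cos, asin, sqrt
    from itertools import combinations

    def haversine_km(a, b):
        """Great-circle distance in kilometres between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = sin((lat2 - lat1) / 2) ** 2 \
            + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def geographic_coherence(coords):
        """Average pairwise distance; lower means more geographically coherent."""
        pairs = list(combinations(coords, 2))
        if not pairs:
            return 0.0
        return sum(haversine_km(a, b) for a, b in pairs) / len(pairs)

    # Candidate entities located in Paris and Lyon: fairly coherent.
    print(geographic_coherence([(48.85, 2.35), (45.76, 4.84)]))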
Since entity linking often requires computationally expensive joint, global
optimization, we propose a simple, but effective rule-based approach that enjoys some of
the benefits of joint, global approaches, while avoiding some of their drawbacks.
To enable convenient error analysis for system developers, we introduce a tool for
visual analysis of entity linking system output. Investigating another aspect of
coherence, namely the coherence between a predicate and its arguments, we devise a
distributed model of selectional preferences and assess its impact on a neural
coreference resolution system. Our final contribution examines how multilingual entity
typing can be improved by incorporating subword information. We train and make
publicly available subword embeddings in 275 languages and show their utility in
a multilingual entity typing task.
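To illustrate what subword information buys, a fastText-style composition (a sketch under assumed names, not the released embeddings) builds a word vector from character n-gram vectors, so unseen or morphologically related words still receive meaningful representations:

    import numpy as np

    def char_ngrams(word, n_min=3, n_max=5):
        """Character n-grams of a word, with boundary markers < and >."""
        w = f"<{word}>"
        return [w[i:i + n]
                for n in range(n_min, n_max + 1)
                for i in range(len(w) - n + 1)]

    def word_vector(word, ngram_vectors, dim=100):
        """Compose a word embedding as the mean of its known n-gram embeddings."""
        grams = [g for g in char_ngrams(word) if g in ngram_vectors]
        if not grams:
            return np.zeros(dim)
        return np.mean([ngram_vectors[g] for g in grams], axis=0)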
Unrestricted Bridging Resolution
Anaphora plays a major role in discourse comprehension and accounts for the coherence of a text. In contrast to identity anaphora which indicates that a noun phrase refers back to the same entity introduced by previous descriptions in the discourse, bridging anaphora or associative anaphora links anaphors and antecedents via lexico-semantic, frame or encyclopedic relations.
In recent years, various computational approaches have been developed for bridging resolution. However, most of them only consider antecedent selection, assuming that bridging anaphora recognition has been performed. Moreover, they often focus on subproblems, e.g., only part-of bridging or definite noun phrase anaphora. This thesis addresses the problem of unrestricted bridging resolution, i.e.,
recognizing bridging anaphora and finding links to antecedents where bridging anaphors are not limited to definite noun phrases and semantic relations between
anaphors and their antecedents are not restricted to meronymic relations.
In this thesis, we solve the problem using a two-stage statistical model. Given all mentions in a document, the first stage predicts bridging anaphors by exploring a cascading collective classification model. We cast bridging anaphora recognition as a subtask of learning fine-grained information status (IS). Each mention in a text gets assigned one IS class, bridging being one possible class.
The model combines the binary classifiers for minority categories and a collective classifier for all categories in a cascaded way. It addresses the multi-class
imbalance problem (e.g., the wide variation of bridging anaphora and their relative rarity compared to many other IS classes) within a multi-class setting while
still keeping the strength of the collective classifier by investigating relational autocorrelation among several IS classes. The second stage finds the antecedents
for all predicted bridging anaphors at the same time by exploring a joint inference model. The approach models two mutually supportive tasks (i.e., bridging anaphora resolution and sibling anaphors clustering) jointly, on the basis of the observation that semantically/syntactically related anaphors are likely to be sibling anaphors, and hence share the same antecedent. Both components are based
on rich linguistically-motivated features and discriminatively trained on a corpus (ISNotes) where bridging is reliably annotated. Our approaches achieve substantial improvements over the reimplementations of previous systems for all three tasks, i.e., bridging anaphora recognition, bridging anaphora resolution and full
bridging resolution.
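A minimal, hypothetical sketch of the cascading idea, using scikit-learn-style estimators as stand-ins for the thesis's actual classifiers: binary detectors for the minority IS classes fire first, and only mentions they leave unclaimed fall through to the multi-class model:

    from sklearn.linear_model import LogisticRegression

    class CascadedISClassifier:
        """Cascade of binary minority-class detectors and a multi-class model."""

        def __init__(self, minority_classes):
            self.minority_classes = minority_classes
            self.binary = {c: LogisticRegression() for c in minority_classes}
            self.collective = LogisticRegression()

        def fit(self, X, y):
            # Each minority class (e.g., "bridging") gets a dedicated detector.
            for c, clf in self.binary.items():
                clf.fit(X, [int(label == c) for label in y])
            # The collective model is trained over all IS classes.
            self.collective.fit(X, y)
            return self

        def predict(self, X):
            labels = []
            for x in X:
                for c, clf in self.binary.items():
                    if clf.predict([x])[0] == 1:
                        labels.append(c)  # a minority detector claims the mention
                        break
                else:
                    labels.append(self.collective.predict([x])[0])
            return labels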
The work is, to our knowledge, the first bridging resolution system that handles the unrestricted phenomenon in a realistic setting. The methods in this dissertation were originally presented in Markert et al. (2012) and Hou et al. (2013a; 2013b; 2014). The thesis gives a detailed exposition, carrying out a thorough corpus analysis of bridging and conducting a detailed comparison of our models to others in the literature, and also presents several extensions of the aforementioned papers.
- …