Bayesian Non-Exhaustive Classification A Case Study: Online Name Disambiguation using Temporal Record Streams
The name disambiguation task aims to partition the records of multiple
real-life persons so that each partition contains records pertaining to a
unique person. Most existing solutions for this task operate in batch mode,
where all records to be disambiguated are available to the algorithm from the
start. More realistic settings, however, require that name disambiguation be
performed in an online fashion, while also identifying records of new
ambiguous entities that have no preexisting records. In this work, we propose
a Bayesian non-exhaustive classification framework for solving the online name
disambiguation task. Our proposed method uses a Dirichlet process prior with a
Normal × Normal × Inverse Wishart data model, which enables identification of
new ambiguous entities that have no records in the training data. For online
classification, we use a one-sweep Gibbs sampler, which is both efficient and
effective. As a case study we consider bibliographic data in a temporal stream
format and disambiguate authors by partitioning their papers into homogeneous
groups. Our experimental results demonstrate that the proposed method
outperforms existing methods for online name disambiguation.

Comment: to appear in CIKM 201
Distantly Labeling Data for Large Scale Cross-Document Coreference
Cross-document coreference, the problem of resolving entity mentions across
multi-document collections, is crucial to automated knowledge base construction
and data mining tasks. However, the scarcity of large labeled data sets has
hindered supervised machine learning research for this task. In this paper we
develop and demonstrate an approach based on "distantly labeling" a data set
from which we can train a discriminative cross-document coreference model. In
particular, we build a dataset of more than a million person mentions extracted
from 3.5 years of New York Times articles, leverage Wikipedia for distant
labeling with a generative model (and measure the reliability of such
labeling), and then train and evaluate a conditional random field coreference
model that has factors on cross-document entities as well as mention pairs.
This coreference model obtains high accuracy in resolving mentions and entities
that are not present in the training data, indicating applicability to
non-Wikipedia data. Given the large amount of data, our work also demonstrates
the scalability of our approach.

Comment: 16 pages, submitted to ECML 201
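The distant-labeling step described above can be sketched in a few lines: a mention is attached to the knowledge-base entity whose known context terms best overlap the words around the mention, and left unlabeled when there is no evidence. This is a toy illustration, not the paper's generative model; the `WIKI` mini knowledge base and the overlap scoring are hypothetical.

```python
# Hypothetical mini knowledge base: entity name -> distinctive context terms
# (in the paper, Wikipedia pages play this role).
WIKI = {
    "Michael_Jordan_(basketball)": {"bulls", "nba", "championship"},
    "Michael_Jordan_(scientist)": {"machine", "learning", "berkeley"},
}

def distant_label(mention, context_words):
    """Attach the entity whose context terms best overlap the words
    surrounding the mention; return None when no term matches, so the
    mention stays unlabeled rather than receiving a noisy guess."""
    best, best_score = None, 0
    for entity, terms in WIKI.items():
        score = len(terms & set(context_words))
        if score > best_score:
            best, best_score = entity, score
    return best
```

Labels produced this way are noisy, which is why the abstract stresses measuring the reliability of the labeling before training the coreference model on it.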