The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
We present the Bayesian Case Model (BCM), a general framework for Bayesian
case-based reasoning (CBR) and prototype classification and clustering. BCM
brings the intuitive power of CBR to a Bayesian generative framework. The BCM
learns prototypes, the "quintessential" observations that best represent
clusters in a dataset, by performing joint inference on cluster labels,
prototypes and important features. Simultaneously, BCM pursues sparsity by
learning subspaces, the sets of features that play important roles in the
characterization of the prototypes. The prototype and subspace representation
provides quantitative benefits in interpretability while preserving
classification accuracy. Human subject experiments verify statistically
significant improvements to participants' understanding when using explanations
produced by BCM, compared to those given by prior art.
Comment: Published in Neural Information Processing Systems (NIPS) 2014
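The abstract only sketches BCM's generative story, so a toy forward simulation may help make it concrete. The following is a minimal, illustrative sketch rather than the authors' code: it assumes binary features and made-up hyperparameters, draws a prototype and a feature subspace per cluster, and lets each observation copy the prototype's values on the subspace features.

```python
import numpy as np

rng = np.random.default_rng(0)

def bcm_style_generate(candidate_pool, n_clusters=3, n_points=100,
                       subspace_prob=0.3, copy_prob=0.9):
    """Toy forward simulation of a BCM-like generative story (illustrative only).

    Each cluster is summarized by a prototype (a row drawn from the candidate
    pool) and a subspace (a binary mask over features). Observations copy the
    prototype's values on subspace features with high probability and take
    random values elsewhere.
    """
    n_features = candidate_pool.shape[1]
    prototypes = candidate_pool[rng.choice(len(candidate_pool), n_clusters, replace=False)]
    subspaces = rng.random((n_clusters, n_features)) < subspace_prob

    labels = rng.integers(n_clusters, size=n_points)
    X = rng.integers(2, size=(n_points, n_features))         # background noise
    for i, z in enumerate(labels):
        copy = subspaces[z] & (rng.random(n_features) < copy_prob)
        X[i, copy] = prototypes[z, copy]                      # inherit prototype values
    return X, labels, prototypes, subspaces

pool = rng.integers(2, size=(50, 10))                         # candidate observations
X, labels, prototypes, subspaces = bcm_style_generate(pool)
print(X.shape, subspaces.sum(axis=1))                         # per-cluster subspace sizes
```

In the actual model, the cluster labels, prototypes, and subspaces are inferred jointly from data rather than fixed in advance as they are in this forward sketch.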
Understanding Actors and Evaluating Personae with Gaussian Embeddings
Understanding narrative content has become an increasingly popular topic.
Nonetheless, research on identifying common types of narrative characters, or
personae, is impeded by the lack of automatic and broad-coverage evaluation
methods. We argue that computationally modeling actors provides benefits,
including novel evaluation mechanisms for personae. Specifically, we propose
two actor-modeling tasks, cast prediction and versatility ranking, which can
capture complementary aspects of the relation between actors and the characters
they portray. For an actor model, we present a technique for embedding actors,
movies, character roles, genres, and descriptive keywords as Gaussian
distributions and translation vectors, where the Gaussian variance corresponds
to actors' versatility. Empirical results indicate that (1) the technique
considerably outperforms TransE (Bordes et al. 2013) and ablation baselines and
(2) automatically identified persona topics (Bamman, O'Connor, and Smith 2013)
yield statistically significant improvements in both tasks, whereas simplistic
persona descriptors including age and gender perform inconsistently, validating
prior research.
Comment: Accepted at AAAI 201
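The abstract describes embedding actors, movies, roles, genres, and keywords as Gaussian distributions linked by translation vectors, with variance standing in for versatility. One plausible way to score a (head, relation, tail) triple in such a model, shown purely as a hedged sketch with invented names and diagonal covariances, is the KL divergence between the translated head Gaussian and the tail Gaussian; a lower energy would indicate a more plausible triple.

```python
import numpy as np

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, diag(var_p)) || N(mu_q, diag(var_q)) ) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

def triple_energy(head_mu, head_var, rel_vec, tail_mu, tail_var):
    """TransE-style energy with Gaussian embeddings: translate the head mean by
    the relation vector, then compare the resulting Gaussian to the tail
    Gaussian. Lower energy = more plausible triple (illustrative scoring only).
    """
    return kl_diag_gaussians(head_mu + rel_vec, head_var, tail_mu, tail_var)

# Hypothetical toy embeddings: a "versatile" actor would have larger variance.
d = 8
rng = np.random.default_rng(1)
actor_mu, actor_var = rng.normal(size=d), np.full(d, 0.5)
plays_role = rng.normal(scale=0.1, size=d)                   # relation translation vector
role_mu = actor_mu + plays_role + 0.01 * rng.normal(size=d)
role_var = np.full(d, 0.4)

print(triple_energy(actor_mu, actor_var, plays_role, role_mu, role_var))
```

The choice of KL divergence and diagonal covariance here is an assumption for the sake of a compact example; the paper's exact energy function and training objective may differ.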
Confounds and Consequences in Geotagged Twitter Data
Twitter is often used in quantitative studies that identify
geographically-preferred topics, writing styles, and entities. These studies
rely on either GPS coordinates attached to individual messages, or on the
user-supplied location field in each profile. In this paper, we compare these
data acquisition techniques and quantify the biases that they introduce; we
also measure their effects on linguistic analysis and text-based geolocation.
GPS-tagging and self-reported locations yield measurably different corpora, and
these linguistic differences are partially attributable to differences in
dataset composition by age and gender. Using a latent variable model to induce
age and gender, we show how these demographic variables interact with geography
to affect language use. We also show that the accuracy of text-based
geolocation varies with population demographics, giving the best results for
men above the age of 40.
Comment: final version for EMNLP 201
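The comparison of GPS-tagged and profile-located corpora suggests a simple diagnostic: rank words by how strongly they distinguish the two samples. The sketch below applies a smoothed log-odds ratio to toy data; it is an assumption-laden illustration of that kind of comparison, not the paper's actual methodology or its latent variable model for age and gender.

```python
from collections import Counter
import math

def log_odds(corpus_a, corpus_b, alpha=0.5):
    """Smoothed log-odds ratio of word usage between two corpora: positive
    scores favor corpus_a, negative scores favor corpus_b (toy comparison)."""
    ca, cb = Counter(), Counter()
    for tweet in corpus_a:
        ca.update(tweet.lower().split())
    for tweet in corpus_b:
        cb.update(tweet.lower().split())
    na, nb = sum(ca.values()), sum(cb.values())
    vocab = set(ca) | set(cb)
    scores = {}
    for w in vocab:
        pa = (ca[w] + alpha) / (na + alpha * len(vocab))
        pb = (cb[w] + alpha) / (nb + alpha * len(vocab))
        scores[w] = math.log(pa / pb)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical mini-corpora standing in for the two acquisition methods.
gps_tweets = ["fixin to head out y'all", "yall come to the cookout"]
profile_tweets = ["heading out with everyone", "come to the barbecue"]
print(log_odds(gps_tweets, profile_tweets)[:5])
```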
Graph-Sparse LDA: A Topic Model with Structured Sparsity
Originally designed to model text, topic modeling has become a powerful tool
for uncovering latent structure in domains including medicine, finance, and
vision. The goals for the model vary depending on the application: in some
cases, the discovered topics may be used for prediction or some other
downstream task. In other cases, the content of the topic itself may be of
intrinsic scientific interest.
Unfortunately, even using modern sparse techniques, the discovered topics are
often difficult to interpret due to the high dimensionality of the underlying
space. To improve topic interpretability, we introduce Graph-Sparse LDA, a
hierarchical topic model that leverages knowledge of relationships between
words (e.g., as encoded by an ontology). In our model, topics are summarized by
a few latent concept-words from the underlying graph that explain the observed
words. Graph-Sparse LDA recovers sparse, interpretable summaries on two
real-world biomedical datasets while matching state-of-the-art prediction
performance.
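Graph-Sparse LDA summarizes each topic with a few concept-words drawn from a word graph such as an ontology. As a rough, hypothetical illustration of how a sparse concept set could explain many observed words, the sketch below spreads each concept's weight over its graph neighborhood with exponential decay; the actual model infers topics, concept-words, and their links to observed words jointly rather than by this ad-hoc expansion.

```python
import networkx as nx

def expand_concepts(graph, concept_weights, decay=0.5, max_hops=2):
    """Expand a sparse set of concept-words into a distribution over all words
    by spreading each concept's weight to graph neighbors with exponential
    decay in the hop distance (illustrative stand-in for graph-guided topics)."""
    word_scores = {w: 0.0 for w in graph.nodes}
    for concept, weight in concept_weights.items():
        hops = nx.single_source_shortest_path_length(graph, concept, cutoff=max_hops)
        for word, h in hops.items():
            word_scores[word] += weight * (decay ** h)
    total = sum(word_scores.values())
    return {w: s / total for w, s in word_scores.items()}

# Hypothetical mini-ontology linking a disease concept to surface terms.
G = nx.Graph([("cardiac_disorder", "arrhythmia"), ("cardiac_disorder", "tachycardia"),
              ("arrhythmia", "afib"), ("diabetes", "hyperglycemia")])
topic = expand_concepts(G, {"cardiac_disorder": 1.0})
print(sorted(topic.items(), key=lambda kv: -kv[1]))
```

A single concept node such as the hypothetical "cardiac_disorder" here can thus account for several observed words at once, which is the intuition behind summarizing topics with a few latent concept-words.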