Learning a face space for experiments on human identity
Generative models of human identity and appearance have broad applicability
to behavioral science and technology, but the exquisite sensitivity of human
face perception means that their utility hinges on the alignment of the model's
representation to human psychological representations and the photorealism of
the generated images. Meeting these requirements is an exacting task, and
existing models of human identity and appearance are often unworkably abstract,
artificial, uncanny, or biased. Here, we use a variational autoencoder with an
autoregressive decoder to learn a face space from a uniquely diverse dataset of
portraits that control much of the variation irrelevant to human identity and
appearance. Our method generates photorealistic portraits of fictive identities
with a smooth, navigable latent space. We validate our model's alignment with
human sensitivities by introducing a psychophysical Turing test for images,
which humans mostly fail. Lastly, we demonstrate an initial application of our
model to the problem of fast search in mental space to obtain detailed "police
sketches" in a small number of trials.Comment: 10 figures. Accepted as a paper to the 40th Annual Meeting of the
Cognitive Science Society (CogSci 2018). *JWS and JCP contributed equally to
this submissio
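As a rough illustration of how a smooth, navigable latent face space can be explored, the sketch below interpolates between two latent codes and decodes each intermediate point into a portrait. The `decode` function, the 256-dimensional latent size, and the use of spherical interpolation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    z0_n, z1_n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def decode(z):
    """Placeholder for the trained decoder mapping a latent code to an image."""
    return np.zeros((64, 64, 3))  # dummy image standing in for a generated portrait

rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(256), rng.standard_normal(256)  # codes of two portraits
frames = [decode(slerp(z_a, z_b, t)) for t in np.linspace(0.0, 1.0, 8)]
```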
Modeling Human Categorization of Natural Images Using Deep Feature Representations
Over the last few decades, psychologists have developed sophisticated formal
models of human categorization using simple artificial stimuli. In this paper,
we use modern machine learning methods to extend this work into the realm of
naturalistic stimuli, enabling human categorization to be studied over the
complex visual domain in which it evolved and developed. We show that
representations derived from a convolutional neural network can be used to
model behavior over a database of >300,000 human natural image classifications,
and find that a group of models based on these representations perform well,
near the reliability of human judgments. Interestingly, this group includes
both exemplar and prototype models, contrasting with the dominance of exemplar
models in previous work. We are able to improve the performance of the
remaining models by preprocessing neural network representations to more
closely capture human similarity judgments.
Comment: 13 pages, 7 figures, 6 tables. Preliminary work presented at CogSci 201
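A minimal sketch of exemplar and prototype categorization models applied to deep feature representations, under illustrative assumptions: the 128-dimensional random vectors stand in for CNN features, similarity uses an exponential kernel, and the sensitivity parameter `gamma` is arbitrary rather than fitted.

```python
import numpy as np

def exemplar_scores(x, exemplars_by_cat, gamma=1.0):
    """Summed-similarity (GCM-style) score of x for each category."""
    return {c: np.exp(-gamma * np.linalg.norm(E - x, axis=1)).sum()
            for c, E in exemplars_by_cat.items()}

def prototype_scores(x, exemplars_by_cat, gamma=1.0):
    """Similarity of x to each category's mean feature vector."""
    return {c: np.exp(-gamma * np.linalg.norm(E.mean(axis=0) - x))
            for c, E in exemplars_by_cat.items()}

def choice_probs(scores):
    """Normalize category scores into choice probabilities."""
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

rng = np.random.default_rng(0)
feats = {"dog": rng.standard_normal((50, 128)),
         "cat": rng.standard_normal((50, 128)) + 0.5}   # stand-ins for CNN features
x = rng.standard_normal(128) + 0.5                       # a new image's features
print(choice_probs(exemplar_scores(x, feats)))
print(choice_probs(prototype_scores(x, feats)))
```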
Learning Hierarchical Visual Representations in Deep Neural Networks Using Hierarchical Linguistic Labels
Modern convolutional neural networks (CNNs) are able to achieve human-level
object classification accuracy on specific tasks, and currently outperform
competing models in explaining complex human visual representations. However,
the categorization problem is posed differently for these networks than for
humans: the accuracy of these networks is evaluated by their ability to
identify single labels assigned to each image. These labels often cut
arbitrarily across natural psychological taxonomies (e.g., dogs are separated
into breeds, but never jointly categorized as "dogs"), and bias the resulting
representations. By contrast, it is common for children to hear both "dog" and
"Dalmatian" to describe the same stimulus, helping to group perceptually
disparate objects (e.g., breeds) into a common mental class. In this work, we
train CNN classifiers with multiple labels for each image that correspond to
different levels of abstraction, and use this framework to reproduce classic
patterns that appear in human generalization behavior.
Comment: 6 pages, 4 figures, 1 table. Accepted as a paper to the 40th Annual Meeting of the Cognitive Science Society (CogSci 2018).
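One way to realize training with labels at multiple levels of abstraction is a shared encoder with one classification head per level and a summed cross-entropy loss, sketched below. The two-head architecture, feature dimensionality, and label counts are assumptions for illustration; the paper's exact setup may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelClassifier(nn.Module):
    """Shared encoder with heads for fine (e.g., breed) and coarse (e.g., "dog") labels."""
    def __init__(self, in_dim=2048, n_fine=120, n_coarse=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.fine_head = nn.Linear(512, n_fine)
        self.coarse_head = nn.Linear(512, n_coarse)

    def forward(self, x):
        h = self.encoder(x)
        return self.fine_head(h), self.coarse_head(h)

model = TwoLevelClassifier()
x = torch.randn(32, 2048)              # stand-in for image features from a CNN backbone
y_fine = torch.randint(0, 120, (32,))  # fine-grained label per image
y_coarse = torch.randint(0, 10, (32,)) # superordinate label per image
fine_logits, coarse_logits = model(x)
loss = F.cross_entropy(fine_logits, y_fine) + F.cross_entropy(coarse_logits, y_coarse)
loss.backward()
```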
Capturing human category representations by sampling in deep feature spaces
Understanding how people represent categories is a core problem in cognitive
science. Decades of research have yielded a variety of formal theories of
categories, but validating them with naturalistic stimuli is difficult. The
challenge is that human category representations cannot be directly observed
and running informative experiments with naturalistic stimuli such as images
requires a workable representation of these stimuli. Deep neural networks have
recently been successful in solving a range of computer vision tasks and
provide a way to compactly represent image features. Here, we introduce a
method to estimate the structure of human categories that combines ideas from
cognitive science and machine learning, blending human-based algorithms with
state-of-the-art deep image generators. We provide qualitative and quantitative
results as a proof-of-concept for the method's feasibility. Samples drawn from
human distributions rival those from state-of-the-art generative models in
quality and outperform alternative methods for estimating the structure of
human categories.
Comment: 6 pages, 5 figures, 1 table. Accepted as a paper to the 40th Annual Meeting of the Cognitive Science Society (CogSci 2018).
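The human-in-the-loop sampling idea can be sketched as a Markov chain over a deep generator's latent space, where a participant's two-alternative choice between the current and proposed images decides the next state. The `decode` and `human_prefers_proposal` functions are placeholders, and the Gaussian proposal and step size are illustrative choices rather than the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z):
    """Placeholder for a pretrained deep image generator mapping latent codes to images."""
    return z  # stand-in

def human_prefers_proposal(img_current, img_proposal):
    """Placeholder for a participant's choice of which image better fits the target category."""
    return rng.random() < 0.5  # random stand-in for a human response

def sample_category(z0, n_steps=100, step_size=0.3):
    """Markov chain over latent space driven by human acceptance decisions."""
    z, chain = z0, [z0]
    for _ in range(n_steps):
        z_prop = z + step_size * rng.standard_normal(z.shape)  # Gaussian proposal
        if human_prefers_proposal(decode(z), decode(z_prop)):
            z = z_prop
        chain.append(z)
    return np.stack(chain)

chain = sample_category(rng.standard_normal(64))  # samples approximating the category
```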
Using Machine Learning to Guide Cognitive Modeling: A Case Study in Moral Reasoning
Large-scale behavioral datasets enable researchers to use complex machine
learning algorithms to better predict human behavior, yet this increased
predictive power does not always lead to a better understanding of the behavior
in question. In this paper, we outline a data-driven, iterative procedure that
allows cognitive scientists to use machine learning to generate models that are
both interpretable and accurate. We demonstrate this method in the domain of
moral decision-making, where standard experimental approaches often identify
relevant principles that influence human judgments, but fail to generalize
these findings to "real world" situations that place these principles in
conflict. The recently released Moral Machine dataset allows us to build a
powerful model that can predict the outcomes of these conflicts while remaining
simple enough to explain the basis behind human decisions.
Comment: Camera-ready version for the Cognitive Science Conference.
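A hedged sketch of the kind of simple, interpretable choice model the abstract alludes to: a logistic model over named feature differences between the two possible outcomes of a dilemma. The feature names, model form, and fitting procedure are illustrative assumptions, not the model reported in the paper.

```python
import numpy as np

# Hypothetical per-dilemma feature differences (side A minus side B).
FEATURES = ["n_lives", "n_children", "n_elderly", "legal_crossing"]

def choice_prob(weights, x_diff):
    """P(save side A) under a logistic model over interpretable feature differences."""
    return 1.0 / (1.0 + np.exp(-x_diff @ weights))

def fit(X_diff, y, lr=0.1, steps=2000):
    """Simple gradient ascent on the log-likelihood of observed choices."""
    w = np.zeros(X_diff.shape[1])
    for _ in range(steps):
        p = choice_prob(w, X_diff)
        w += lr * X_diff.T @ (y - p) / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((500, len(FEATURES)))  # feature differences per simulated dilemma
true_w = np.array([1.5, 1.0, -0.5, 0.8])       # illustrative "ground truth" weights
y = (rng.random(500) < choice_prob(true_w, X)).astype(float)
print(dict(zip(FEATURES, fit(X, y).round(2)))) # recovered, interpretable weights
```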
Heterogeneity in susceptibility dictates the order of epidemiological models
The fundamental models of epidemiology describe the progression of an
infectious disease through a population using compartmentalized differential
equations, but do not incorporate population-level heterogeneity in infection
susceptibility. We show that this variation strongly influences the rate of
infection, while the infection process simultaneously sculpts the
susceptibility distribution. These joint dynamics influence the force of
infection and are, in turn, influenced by the shape of the initial variability.
Intriguingly, we find that certain susceptibility distributions (the
exponential and the gamma) are unchanged through the course of the outbreak,
and lead naturally to power-law behavior in the force of infection; other
distributions often tend towards these "eigen-distributions" through the
process of contagion. The power-law behavior fundamentally alters predictions
of the long-term infection rate, and suggests that first-order epidemic models
that are parameterized in the exponential-like phase may systematically and
significantly over-estimate the final severity of the outbreak.
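To see how the infection process sculpts the susceptibility distribution, the sketch below bins a gamma-distributed susceptibility and integrates SIR-style dynamics: the mean susceptibility of the remaining susceptible pool falls as the most susceptible individuals are infected first, lowering the effective force of infection relative to a homogeneous model. The gamma parameters, transmission and recovery rates, and the Euler scheme are illustrative choices, not the paper's.

```python
import numpy as np

# Illustrative parameters: gamma-distributed susceptibility, SIR-style dynamics.
n_bins, beta, recovery, dt = 200, 0.3, 0.1, 0.1
rng = np.random.default_rng(0)
eps = rng.gamma(shape=2.0, scale=0.5, size=n_bins)  # per-bin relative susceptibility
S = np.full(n_bins, (1.0 - 1e-3) / n_bins)          # susceptible mass in each bin
I, R = 1e-3, 0.0
mean_susc = [eps @ S / S.sum()]                     # mean susceptibility of susceptible pool

for _ in range(3000):
    new_inf = beta * I * eps * S * dt               # high-susceptibility bins deplete fastest
    recov = recovery * I * dt
    S, I, R = S - new_inf, I + new_inf.sum() - recov, R + recov
    mean_susc.append(eps @ S / S.sum())

print("final susceptible fraction:", S.sum())
print("mean susceptibility of remaining pool:", mean_susc[0], "->", mean_susc[-1])
```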
Learning deep taxonomic priors for concept learning from few positive examples
Human concept learning is surprisingly robust, allowing for precise generalizations given only a few positive examples. Bayesian formulations that account for this behavior require elaborate, pre-specified priors, leaving much of the learning process unexplained. More recent models of concept learning bootstrap from deep representations, but the deep neural networks are themselves trained using millions of positive and negative examples. In machine learning, recent progress in meta-learning has provided large-scale learning algorithms that can learn new concepts from a few examples, but these approaches still assume access to implicit negative evidence. In this paper, we formulate a training paradigm that allows a meta-learning algorithm to solve the problem of concept learning from few positive examples. The algorithm discovers a taxonomic prior useful for learning novel concepts even from held-out supercategories and mimics human generalization behavior, making it the first model to do so without hand-specified domain knowledge or negative examples of a novel concept.
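A toy sketch of generalizing from a few positive examples: a prototype is estimated from the positives and its breadth is shrunk toward a prior scale, loosely standing in for a learned taxonomic prior. The embedding dimensionality, shrinkage rule, and membership threshold are illustrative assumptions, not the paper's meta-learning algorithm.

```python
import numpy as np

def learn_concept(positives, prior_scale=1.0):
    """Fit a prototype and breadth from a handful of positive embeddings,
    shrinking the breadth toward a prior scale (stand-in for a taxonomic prior)."""
    mu = positives.mean(axis=0)
    emp_scale = positives.std() + 1e-6
    return mu, 0.5 * emp_scale + 0.5 * prior_scale

def is_member(x, mu, scale, threshold=2.0):
    """Generalize to a new item if its per-dimension RMS distance to the prototype is small."""
    rms = np.linalg.norm(x - mu) / np.sqrt(x.size)
    return rms < threshold * scale

rng = np.random.default_rng(0)
positives = rng.standard_normal((3, 64)) * 0.3 + 1.0  # three positive examples of a concept
mu, scale = learn_concept(positives)
print(is_member(rng.standard_normal(64) * 0.3 + 1.0, mu, scale))  # near the positives: True
print(is_member(rng.standard_normal(64) - 2.0, mu, scale))        # far from them: False
```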