Understanding how people represent categories is a core problem in cognitive
science. Decades of research have yielded a variety of formal theories of
categories, but validating them with naturalistic stimuli is difficult. The
challenge is twofold: human category representations cannot be directly
observed, and running informative experiments with naturalistic stimuli such
as images requires a workable representation of those stimuli. Deep neural networks have
recently been successful in solving a range of computer vision tasks and
provide a way to compactly represent image features. Here, we introduce a
method to estimate the structure of human categories that combines ideas from
cognitive science and machine learning, blending human-based algorithms with
state-of-the-art deep image generators. We provide qualitative and quantitative
results as a proof-of-concept for the method's feasibility. Samples drawn from
human distributions rival those from state-of-the-art generative models in
quality and outperform alternative methods for estimating the structure of
human categories.

Comment: 6 pages, 5 figures, 1 table. Accepted as a paper to the 40th Annual
Meeting of the Cognitive Science Society (CogSci 2018).
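The abstract's idea of blending human-based algorithms with deep image generators can be illustrated with a minimal sketch: run a Markov chain over a generator's latent space, where each accept/reject decision comes from a human two-alternative choice. Everything below is a toy stand-in, not the authors' implementation: `generate` is a fixed linear map playing the role of a deep image generator, and `human_chooses_proposal` simulates a participant with a Luce-choice rule around a hidden prototype (a real experiment would present both rendered stimuli to a person).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deep image generator: maps a 2-D latent
# vector z to a 4-D "image feature" vector via a fixed linear map.
W = rng.normal(size=(4, 2))

def generate(z):
    return W @ z

# Simulated participant: prefers the stimulus closer to a hidden category
# prototype, with choice probabilities given by a Luce-choice (softmax)
# rule over negative distances. This replaces the human in the loop.
prototype = np.array([1.0, -0.5, 0.5, 0.0])

def human_chooses_proposal(x_current, x_proposal):
    d_cur = np.linalg.norm(x_current - prototype)
    d_prop = np.linalg.norm(x_proposal - prototype)
    p = np.exp(-d_prop) / (np.exp(-d_prop) + np.exp(-d_cur))
    return rng.random() < p

# MCMC-with-People-style chain in latent space: propose a small Gaussian
# step, render both latents as stimuli, keep whichever one the
# (simulated) human picks. The retained latents are samples whose
# renderings approximate the category distribution.
z = np.zeros(2)
samples = []
for _ in range(2000):
    z_prop = z + 0.3 * rng.normal(size=2)
    if human_chooses_proposal(generate(z), generate(z_prop)):
        z = z_prop
    samples.append(z.copy())

samples = np.array(samples)
```

Because the simulated chooser follows a Luce-choice rule on exponentiated utilities, each step behaves like a Barker-style Metropolis acceptance, so the chain's stationary distribution concentrates on latents whose rendered features fall near the category prototype; swapping in a real generative image model and real participants recovers the spirit of the method the abstract describes.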