Generative models of human identity and appearance have broad applicability
to behavioral science and technology, but the exquisite sensitivity of human
face perception means that their utility hinges on the alignment of the model's
representation to human psychological representations and the photorealism of
the generated images. Meeting these requirements is an exacting task, and
existing models of human identity and appearance are often unworkably abstract,
artificial, uncanny, or biased. Here, we use a variational autoencoder with an
autoregressive decoder to learn a face space from a uniquely diverse dataset of
portraits that control much of the variation irrelevant to human identity and
appearance. Our method generates photorealistic portraits of fictive identities
with a smooth, navigable latent space. We validate our model's alignment with
human sensitivities by introducing a psychophysical Turing test for images,
which humans mostly fail. Lastly, we demonstrate an initial application of our
model to the problem of fast search in mental space to obtain detailed "police
sketches" in a small number of trials.

Comment: 10 figures. Accepted as a paper to the 40th Annual Meeting of the
Cognitive Science Society (CogSci 2018). *JWS and JCP contributed equally to
this submission.