485 research outputs found

    Extracting low-dimensional psychological representations from convolutional neural networks

    Deep neural networks are increasingly being used in cognitive modeling as a means of deriving representations for complex stimuli such as images. While the predictive power of these networks is high, it is often not clear whether they also offer useful explanations of the task at hand. Convolutional neural network representations have been shown to be predictive of human similarity judgments for images after appropriate adaptation. However, these high-dimensional representations are difficult to interpret. Here we present a method for reducing these representations to a low-dimensional space which is still predictive of similarity judgments. We show that these low-dimensional representations also provide insightful explanations of factors underlying human similarity judgments.
    Comment: Accepted to CogSci 202
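    A minimal sketch of the general idea, not the authors' code: learn a low-rank linear map of CNN features so that dot products in the reduced space approximate a matrix of human similarity judgments. The feature matrix X, similarity matrix S, and the dimensions n, d, k are placeholder assumptions.

        import torch

        n, d, k = 50, 512, 8                 # stimuli, CNN feature dim, target low dimension
        X = torch.randn(n, d)                # placeholder CNN features (one row per image)
        S = torch.rand(n, n)                 # placeholder human similarity judgments
        S = (S + S.T) / 2                    # symmetrize

        L = (0.01 * torch.randn(d, k)).requires_grad_()   # low-rank projection to learn
        opt = torch.optim.Adam([L], lr=1e-2)

        for step in range(2000):
            Z = X @ L                            # n x k low-dimensional embedding
            loss = ((Z @ Z.T - S) ** 2).mean()   # match predicted to judged similarity
            opt.zero_grad(); loss.backward(); opt.step()

        Z = (X @ L).detach()                 # final low-dimensional psychological embedding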

    Learning a face space for experiments on human identity

    Generative models of human identity and appearance have broad applicability to behavioral science and technology, but the exquisite sensitivity of human face perception means that their utility hinges on the alignment of the model's representation to human psychological representations and the photorealism of the generated images. Meeting these requirements is an exacting task, and existing models of human identity and appearance are often unworkably abstract, artificial, uncanny, or biased. Here, we use a variational autoencoder with an autoregressive decoder to learn a face space from a uniquely diverse dataset of portraits that control much of the variation irrelevant to human identity and appearance. Our method generates photorealistic portraits of fictive identities with a smooth, navigable latent space. We validate our model's alignment with human sensitivities by introducing a psychophysical Turing test for images, which humans mostly fail. Lastly, we demonstrate an initial application of our model to the problem of fast search in mental space to obtain detailed "police sketches" in a small number of trials.
    Comment: 10 figures. Accepted as a paper to the 40th Annual Meeting of the Cognitive Science Society (CogSci 2018). *JWS and JCP contributed equally to this submissio
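    The sketch below is a conventional convolutional VAE in PyTorch, offered only to make the architecture concrete; the paper's model uses an autoregressive decoder, for which a plain deconvolutional decoder is substituted here, and the 64x64 image size, channel counts, and 64-dimensional latent space are assumptions.

        import torch
        import torch.nn as nn

        class FaceVAE(nn.Module):
            def __init__(self, latent_dim=64):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
                    nn.Flatten(),
                )
                self.mu = nn.Linear(64 * 16 * 16, latent_dim)
                self.logvar = nn.Linear(64 * 16 * 16, latent_dim)
                self.dec = nn.Sequential(
                    nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
                    nn.Unflatten(1, (64, 16, 16)),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
                )

            def forward(self, x):
                h = self.enc(x)
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterization
                return self.dec(z), mu, logvar

        # Generating a fictive face: decode a point drawn from the latent prior.
        model = FaceVAE()
        with torch.no_grad():
            face = model.dec(torch.randn(1, 64))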

    Modeling Human Categorization of Natural Images Using Deep Feature Representations

    Over the last few decades, psychologists have developed sophisticated formal models of human categorization using simple artificial stimuli. In this paper, we use modern machine learning methods to extend this work into the realm of naturalistic stimuli, enabling human categorization to be studied over the complex visual domain in which it evolved and developed. We show that representations derived from a convolutional neural network can be used to model behavior over a database of >300,000 human natural image classifications, and find that a group of models based on these representations perform well, near the reliability of human judgments. Interestingly, this group includes both exemplar and prototype models, contrasting with the dominance of exemplar models in previous work. We are able to improve the performance of the remaining models by preprocessing neural network representations to more closely capture human similarity judgments.
    Comment: 13 pages, 7 figures, 6 tables. Preliminary work presented at CogSci 201
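    For concreteness, a small numpy sketch of the two model families named above, applied to deep feature vectors; the function names, the specificity parameter c, and the random stand-in data are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def exemplar_probs(x, train_X, train_y, n_cats, c=1.0):
            """Exemplar-style (GCM-like): summed similarity to every stored exemplar per category."""
            sims = np.exp(-c * np.linalg.norm(train_X - x, axis=1))
            evidence = np.array([sims[train_y == k].sum() for k in range(n_cats)])
            return evidence / evidence.sum()

        def prototype_probs(x, train_X, train_y, n_cats, c=1.0):
            """Prototype-style: similarity to each category's mean feature vector."""
            protos = np.stack([train_X[train_y == k].mean(axis=0) for k in range(n_cats)])
            sims = np.exp(-c * np.linalg.norm(protos - x, axis=1))
            return sims / sims.sum()

        # Toy usage with random stand-ins for CNN features and labels.
        rng = np.random.default_rng(0)
        train_X, train_y = rng.normal(size=(100, 128)), rng.integers(0, 2, size=100)
        x = rng.normal(size=128)
        print(exemplar_probs(x, train_X, train_y, 2), prototype_probs(x, train_X, train_y, 2))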

    Adapting Deep Network Features to Capture Psychological Representations

    Deep neural networks have become increasingly successful at solving classic perception problems such as object recognition, semantic segmentation, and scene understanding, often reaching or surpassing human-level accuracy. This success is due in part to the ability of DNNs to learn useful representations of high-dimensional inputs, a problem that humans must also solve. We examine the relationship between the representations learned by these networks and human psychological representations recovered from similarity judgments. We find that deep features learned in service of object classification account for a significant amount of the variance in human similarity judgments for a set of animal images. However, these features do not capture some qualitative distinctions that are a key part of human representations. To remedy this, we develop a method for adapting deep features to align with human similarity judgments, resulting in image representations that can potentially be used to extend the scope of psychological experiments.
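    One way to make the adaptation step concrete is the kind of feature reweighting sketched below (an assumption about the approach, with random placeholder data): learn per-dimension weights w so that a weighted dot product of deep features approximates the human similarity judgment for each image pair, via closed-form ridge regression.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 40, 256
        F = rng.normal(size=(n, d))                      # deep features for n images
        S = rng.uniform(size=(n, n)); S = (S + S.T) / 2  # placeholder similarity judgments

        # One row per image pair; columns are elementwise products of the pair's features.
        i, j = np.triu_indices(n, k=1)
        X = F[i] * F[j]                                  # (n_pairs, d)
        y = S[i, j]                                      # judged similarity for those pairs

        lam = 1.0                                        # ridge penalty
        w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

        S_hat = (F * w) @ F.T                            # predicted similarities from adapted features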

    Learning Hierarchical Visual Representations in Deep Neural Networks Using Hierarchical Linguistic Labels

    Modern convolutional neural networks (CNNs) are able to achieve human-level object classification accuracy on specific tasks, and currently outperform competing models in explaining complex human visual representations. However, the categorization problem is posed differently for these networks than for humans: the accuracy of these networks is evaluated by their ability to identify single labels assigned to each image. These labels often cut arbitrarily across natural psychological taxonomies (e.g., dogs are separated into breeds, but never jointly categorized as "dogs"), and bias the resulting representations. By contrast, it is common for children to hear both "dog" and "Dalmatian" to describe the same stimulus, helping to group perceptually disparate objects (e.g., breeds) into a common mental class. In this work, we train CNN classifiers with multiple labels for each image that correspond to different levels of abstraction, and use this framework to reproduce classic patterns that appear in human generalization behavior.
    Comment: 6 pages, 4 figures, 1 table. Accepted as a paper to the 40th Annual Meeting of the Cognitive Science Society (CogSci 2018
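    A hedged sketch of one way such multi-level training can be set up in PyTorch (not the authors' exact configuration): a shared backbone feeds two classification heads, one for fine-grained labels (e.g. breeds) and one for coarse labels (e.g. "dog"), and the two cross-entropy losses are summed so both levels of abstraction shape the learned representation. The ResNet-18 backbone, label counts, and optimizer settings are assumptions.

        import torch
        import torch.nn as nn
        import torchvision.models as models

        n_fine, n_coarse = 120, 10                       # assumed label counts per level
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                      # expose pooled features

        fine_head = nn.Linear(feat_dim, n_fine)
        coarse_head = nn.Linear(feat_dim, n_coarse)
        params = list(backbone.parameters()) + list(fine_head.parameters()) + list(coarse_head.parameters())
        opt = torch.optim.SGD(params, lr=0.01, momentum=0.9)
        ce = nn.CrossEntropyLoss()

        def train_step(images, fine_labels, coarse_labels):
            feats = backbone(images)
            loss = ce(fine_head(feats), fine_labels) + ce(coarse_head(feats), coarse_labels)
            opt.zero_grad(); loss.backward(); opt.step()
            return loss.item()

        # One toy step with random tensors standing in for a real labelled batch.
        imgs = torch.randn(4, 3, 224, 224)
        train_step(imgs, torch.randint(0, n_fine, (4,)), torch.randint(0, n_coarse, (4,)))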

    Capturing human category representations by sampling in deep feature spaces

    Understanding how people represent categories is a core problem in cognitive science. Decades of research have yielded a variety of formal theories of categories, but validating them with naturalistic stimuli is difficult. The challenge is that human category representations cannot be directly observed, and running informative experiments with naturalistic stimuli such as images requires a workable representation of these stimuli. Deep neural networks have recently been successful in solving a range of computer vision tasks and provide a way to compactly represent image features. Here, we introduce a method to estimate the structure of human categories that combines ideas from cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep image generators. We provide qualitative and quantitative results as a proof-of-concept for the method's feasibility. Samples drawn from human distributions rival those from state-of-the-art generative models in quality and outperform alternative methods for estimating the structure of human categories.
    Comment: 6 pages, 5 figures, 1 table. Accepted as a paper to the 40th Annual Meeting of the Cognitive Science Society (CogSci 2018
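    As a rough illustration of the human-in-the-loop sampling idea (an assumption about the setup, not the paper's procedure), the loop below runs an MCMC-with-people style chain in a generator's latent space: a proposal latent is drawn near the current state, both latents are decoded to images, and a two-alternative choice, made by a participant in the real experiment and by a toy stand-in function here, decides whether the chain moves. The decode and human_choice functions are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        latent_dim = 64

        def decode(z):
            """Placeholder for a deep image generator mapping latents to images."""
            return np.tanh(z)

        def human_choice(img_a, img_b):
            """Placeholder for a human 2AFC judgment; returns 0 or 1 for the preferred image."""
            return 0 if img_a.sum() > img_b.sum() else 1

        z = rng.normal(size=latent_dim)                  # initial state of the chain
        samples = []
        for step in range(500):
            z_prop = z + 0.3 * rng.normal(size=latent_dim)   # local proposal
            if human_choice(decode(z), decode(z_prop)) == 1:
                z = z_prop                                   # chooser preferred the proposal
            samples.append(z.copy())

        # Latents visited by the chain approximate the category's region of the latent space.
        category_mean = np.mean(samples[100:], axis=0)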