
    A data-driven investigation of human action representations

    Understanding actions performed by others requires us to integrate different types of information about people, scenes, objects, and their interactions. What organizing dimensions does the mind use to make sense of this complex action space? To address this question, we collected intuitive similarity judgments across two large-scale sets of naturalistic videos depicting everyday actions. We used cross-validated sparse non-negative matrix factorization (NMF) to identify the structure underlying action similarity judgments. A low-dimensional representation, consisting of nine to ten dimensions, was sufficient to accurately reconstruct human similarity judgments. The dimensions were robust to stimulus set perturbations and reproducible in a separate odd-one-out experiment. Human labels mapped these dimensions onto semantic axes relating to food, work, and home life; social axes relating to people and emotions; and one visual axis related to scene setting. While highly interpretable, these dimensions did not share a clear one-to-one correspondence with prior hypotheses of action-relevant dimensions. Together, our results reveal a low-dimensional set of robust and interpretable dimensions that organize intuitive action similarity judgments and highlight the importance of data-driven investigations of behavioral representations.
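    To make the dimensionality-reduction step concrete, here is a minimal sketch of factorizing a similarity matrix with sparse NMF and scanning candidate dimensionalities, using scikit-learn. The synthetic data, variable names, and regularization settings are illustrative assumptions, not the authors' pipeline; in the paper the dimensionality scan is cross-validated on held-out judgments, for which plain reconstruction error stands in here.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        n_videos = 60
        # Synthetic stand-in for a video-by-video similarity matrix (non-negative).
        latent = rng.random((n_videos, 5))
        similarity = latent @ latent.T
        similarity /= similarity.max()

        for k in range(2, 15):
            # An L1 penalty on the embedding encourages sparse, interpretable dimensions.
            model = NMF(n_components=k, init="nndsvda", alpha_W=0.01, l1_ratio=1.0,
                        max_iter=2000, random_state=0)
            embedding = model.fit_transform(similarity)      # n_videos x k weights
            reconstruction = embedding @ model.components_   # approximates similarity
            error = np.linalg.norm(similarity - reconstruction)
            print(f"k={k:2d}  reconstruction error={error:.3f}")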

    Assessing Task-dependent Flexibility and the Temporal Dynamics of Object Categorization

    When we are presented with everyday objects, we inherently and effortlessly sort them into different categories. Nonetheless, the mechanisms by which this process occurs are not fully understood. The taxonomic tree can be separated into three main tiers of categories: superordinate (e.g. mammal), basic (e.g. cat), and subordinate (e.g. Siamese cat). It is widely accepted that we most readily label objects at the basic level (Rosch et al., 1976). However, would this change if we were forced to attend to the features of a subordinate category? This study investigates the flexibility of assigning objects to basic and subordinate categories as well as the neural time course of object classification. While recording EEG, participants indicated whether two successive images belonged to the same basic or the same subordinate category. They performed these tasks in two separate sessions, alternating the taxonomic level between sessions. The EEG data were analyzed using representational dissimilarity matrices (RDMs), which allowed us to compare neural responses across electrodes at each time point and determine which category levels are most similar in terms of brain activity. We also used a multivariate regression approach to assess the magnitude of the relationship between the neural data and taxonomic level. We observed that the earliest neural responses seem optimized for basic-level categorization even when participants performed a different task. These results therefore emphasize the primacy of the basic level and provide further evidence for the automatic nature of object processing.
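    As a concrete illustration of the RDM analysis described above, the following sketch builds one dissimilarity matrix per time point from condition-averaged EEG channel patterns, using 1 minus the Pearson correlation as the distance. The synthetic data and array shapes are assumptions for illustration, not the study's actual pipeline.

        import numpy as np

        rng = np.random.default_rng(1)
        n_conditions, n_channels, n_times = 8, 64, 100
        # Condition-averaged EEG responses: conditions x channels x time points.
        eeg = rng.standard_normal((n_conditions, n_channels, n_times))

        rdms = np.empty((n_times, n_conditions, n_conditions))
        for t in range(n_times):
            patterns = eeg[:, :, t]              # one channel pattern per condition
            # 1 - Pearson correlation between condition patterns as dissimilarity.
            rdms[t] = 1.0 - np.corrcoef(patterns)

        # Example: how the dissimilarity of two conditions evolves over time.
        print(rdms[:, 0, 1].round(2))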

    The spatiotemporal neural dynamics of object recognition for natural images and line drawings

    Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other work has proposed that representations of drawings and natural images become similar only after substantial processing has taken place, suggesting distinct mechanisms. To arbitrate between these alternatives, we measured brain responses resolved in space and time using fMRI and MEG, respectively, while participants viewed images of objects depicted as photographs, line drawings, or sketch-like drawings. Using multivariate decoding, we demonstrate that object category information emerged similarly fast and across overlapping regions in occipital and ventral-temporal cortex for all types of depiction, yet with smaller effects at higher levels of visual abstraction. In addition, cross-decoding between depiction types revealed strong generalization of object category information from early processing stages onward. Finally, by combining fMRI and MEG data using representational similarity analysis, we found that visual information traversed similar processing stages for all types of depiction, yet with an overall stronger representation for photographs. Together, our results demonstrate broad commonalities in the neural dynamics of object recognition across types of depiction, thus providing clear evidence for shared neural mechanisms underlying recognition of natural object images and abstract drawings.
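    The cross-decoding logic described above can be sketched as follows: train a time-resolved classifier on trials from one depiction type and test it on another, so that above-chance accuracy indicates a category representation shared across depictions. The synthetic data, shapes, and classifier choice are illustrative assumptions, not the authors' exact pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n_trials, n_channels, n_times = 200, 50, 60
        labels = rng.integers(0, 2, size=n_trials)   # two object categories
        # Synthetic sensor patterns with a category signal shared across depictions.
        photos = rng.standard_normal((n_trials, n_channels, n_times)) + labels[:, None, None]
        drawings = rng.standard_normal((n_trials, n_channels, n_times)) + labels[:, None, None]

        accuracy = np.empty(n_times)
        for t in range(n_times):
            clf = LogisticRegression(max_iter=1000)
            clf.fit(photos[:, :, t], labels)                    # train on photographs
            accuracy[t] = clf.score(drawings[:, :, t], labels)  # test on drawings

        print(accuracy.round(2))   # above chance (0.5) implies shared representations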

    The temporal evolution of conceptual object representations revealed through models of behavior, semantics and deep neural networks

    Visual object representations are commonly thought to emerge rapidly, yet it has remained unclear to what extent early brain responses reflect purely low-level visual features of these objects and how strongly those features contribute to later categorical or conceptual representations. Here, we aimed to estimate a lower temporal bound for the emergence of conceptual representations by defining two criteria that characterize such representations: 1) conceptual object representations should generalize across different exemplars of the same object, and 2) these representations should reflect high-level behavioral judgments. To test these criteria, we compared magnetoencephalography (MEG) recordings between two groups of participants (n = 16 per group) exposed to different exemplar images of the same object concepts. Further, we disentangled low-level from high-level MEG responses by estimating the unique and shared contributions of models of behavioral judgments, semantics, and different layers of deep neural networks of visual object processing. We found that 1) both generalization across exemplars and generalization of object-related signals across time increased after 150 ms, peaking around 230 ms, and 2) representations specific to behavioral judgments emerged rapidly, peaking around 160 ms. Collectively, these results suggest a lower bound for the emergence of conceptual object representations around 150 ms following stimulus onset.
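    One way to read the unique-contribution analysis above is as variance partitioning: regress the time-resolved neural dissimilarities on all model RDMs, then on all models minus one, and take the drop in explained variance as that model's unique share. The sketch below illustrates this under assumed names and synthetic data; it is not the authors' implementation.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        n_pairs, n_times = 300, 80    # vectorized RDM entries, MEG time points
        behavior = rng.standard_normal(n_pairs)    # behavioral-judgment model RDM
        semantics = rng.standard_normal(n_pairs)   # semantic model RDM
        dnn = rng.standard_normal(n_pairs)         # deep-network layer RDM
        # Synthetic neural RDM time course containing behavioral and semantic signal.
        neural = 0.5 * behavior + 0.3 * semantics + rng.standard_normal((n_times, n_pairs))

        full_X = np.column_stack([behavior, semantics, dnn])
        reduced_X = np.column_stack([semantics, dnn])   # leave out the behavioral model
        unique_r2 = np.empty(n_times)
        for t in range(n_times):
            r2_full = LinearRegression().fit(full_X, neural[t]).score(full_X, neural[t])
            r2_reduced = LinearRegression().fit(reduced_X, neural[t]).score(reduced_X, neural[t])
            unique_r2[t] = r2_full - r2_reduced   # variance uniquely explained by behavior

        print(unique_r2.round(3))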