2,602 research outputs found

    Representation of faces in perirhinal cortex

    The prevailing view of medial temporal lobe (MTL) functioning holds that its structures are dedicated to long-term declarative memory. Recent evidence challenges this view, suggesting that perirhinal cortex (PrC), which interfaces the MTL with the ventral visual pathway, supports highly integrated object representations that contribute to both recognition memory and perceptual discrimination. Here, I used functional magnetic resonance imaging to examine PrC activity, as well as its broader functional connectivity, during perceptual and mnemonic tasks involving faces, a stimulus class proposed to rely on integrated representations for discrimination. In Chapter 2, I revealed that PrC involvement was related to task demands that emphasized face individuation. Discrimination under these conditions is proposed to benefit from the uniqueness afforded by highly-integrated stimulus representations. Multivariate partial least squares analyses revealed that PrC, the fusiform face area (FFA), and the amygdala were part of a pattern of regions exhibiting preferential activity for tasks emphasizing stimulus individuation. In Chapter 3, I provided evidence of resting-state connectivity between face-selective aspects of PrC, the FFA, and amygdala. These findings point to a privileged functional relationship between these regions, consistent with task-related co-recruitment revealed in Chapter 2. In addition, the strength of resting-state connectivity was related to behavioral performance on a face discrimination task. These results suggest a mechanism by which PrC may participate in the representation of faces. In Chapter 4, I examined PrC connectivity during task contexts. I provided evidence that distinctions between tasks emphasizing recognition memory and perceptual discrimination demands are better reflected in the connectivity of PrC with other regions in the brain, rather than in the presence or absence of PrC activity. Further, this functional connectivity was related to behavioral performance for the memory task. Together, these findings indicate that mnemonic demands are not the sole arbiter of PrC involvement, counter to the prevailing view of MTL functioning. Instead, they highlight the importance of connectivity-based approaches in elucidating the contributions of PrC, and point to a role of PrC in the representation of faces in a manner that can support memory and perception, and that may apply to other object categories more broadly.
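    As a rough illustration of the seed-based resting-state connectivity and brain-behaviour analysis described for Chapter 3, the sketch below correlates hypothetical PrC and FFA time courses per subject and relates the resulting coupling strength to face-discrimination scores; all arrays, sample sizes, and variable names are placeholders rather than data from the thesis.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_subjects, n_timepoints = 25, 240

# Hypothetical region-averaged resting-state time courses, one row per subject.
prc_ts = rng.standard_normal((n_subjects, n_timepoints))   # perirhinal cortex seed
ffa_ts = rng.standard_normal((n_subjects, n_timepoints))   # fusiform face area target
# Hypothetical per-subject face-discrimination accuracy scores.
behaviour = rng.uniform(0.6, 1.0, size=n_subjects)

# Connectivity strength: Fisher-z transformed PrC-FFA time-course correlation.
connectivity = np.array([
    np.arctanh(np.corrcoef(prc_ts[s], ffa_ts[s])[0, 1])
    for s in range(n_subjects)
])

# Brain-behaviour relationship: does stronger PrC-FFA coupling track performance?
r, p = pearsonr(connectivity, behaviour)
print(f"connectivity-behaviour correlation: r={r:.3f}, p={p:.3f}")
```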

    Representational geometry: integrating cognition, computation, and the brain

    The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure.
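    The core of the representational-geometry approach, computing a representational distance matrix (RDM) from condition-wise response patterns and comparing model and brain RDMs, can be sketched as follows. The response matrices are simulated stand-ins, and correlation distance with a Spearman comparison is just one common analysis choice.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_voxels, n_model_units = 20, 200, 512

# Hypothetical condition-by-unit response patterns (e.g., fMRI voxels, model units).
brain_responses = rng.standard_normal((n_conditions, n_voxels))
model_responses = rng.standard_normal((n_conditions, n_model_units))

# Representational distance matrix: pairwise dissimilarity between condition patterns.
# pdist returns the condensed upper triangle; correlation distance is a common metric.
brain_rdm = pdist(brain_responses, metric="correlation")
model_rdm = pdist(model_responses, metric="correlation")

# Compare representational geometries: rank-correlate the two RDMs.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```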

    Modeling biological face recognition with deep convolutional neural networks

    Deep convolutional neural networks (DCNNs) have become the state-of-the-art computational models of biological object recognition. Their remarkable success has helped vision science break new ground, and recent efforts have started to transfer this achievement to research on biological face recognition. In this regard, face detection can be investigated by comparing face-selective biological neurons and brain areas to artificial neurons and model layers. Similarly, face identification can be examined by comparing in vivo and in silico multidimensional "face spaces". In this review, we summarize the first studies that use DCNNs to model biological face recognition. On the basis of a broad spectrum of behavioral and computational evidence, we conclude that DCNNs are useful models that closely resemble the general hierarchical organization of face recognition in the ventral visual pathway and the core face network. In two exemplary spotlights, we emphasize the unique scientific contributions of these models. First, studies on face detection in DCNNs indicate that elementary face selectivity emerges automatically through feedforward processing even in the absence of visual experience. Second, studies on face identification in DCNNs suggest that identity-specific experience and generative mechanisms facilitate this particular challenge. Taken together, as this novel modeling approach enables close control of predisposition (i.e., architecture) and experience (i.e., training data), it may be suited to inform long-standing debates on the substrates of biological face recognition.
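    To make the in silico "face space" idea concrete, the sketch below pulls penultimate-layer activations from a torchvision ResNet and builds a pairwise dissimilarity matrix over a batch of images; the choice of resnet18, the random input batch, and the untrained weights are illustrative assumptions, not details from the reviewed studies.

```python
import torch
import torchvision.models as models
from scipy.spatial.distance import pdist, squareform

# Untrained backbone for illustration; a pretrained model would normally be used
# (e.g., via the torchvision weights argument in recent versions).
model = models.resnet18(weights=None)
model.eval()

activations = {}
def save_activation(name):
    def hook(module, inp, out):
        activations[name] = out.flatten(start_dim=1).detach()
    return hook

# Register a forward hook on the global-average-pool layer (penultimate stage).
model.avgpool.register_forward_hook(save_activation("penultimate"))

# Hypothetical batch of face images (N x 3 x 224 x 224); real use would load and preprocess photos.
faces = torch.randn(10, 3, 224, 224)
with torch.no_grad():
    model(faces)

# "Face space": pairwise cosine dissimilarities between the layer's identity embeddings.
embeddings = activations["penultimate"].numpy()
face_space = squareform(pdist(embeddings, metric="cosine"))
print(face_space.shape)  # (10, 10) dissimilarity matrix
```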

    Spectators’ aesthetic experiences of sound and movement in dance performance

    In this paper we present a study of spectators’ aesthetic experiences of sound and movement in live dance performance. A multidisciplinary team comprising a choreographer, neuroscientists and qualitative researchers investigated the effects of different sound scores on dance spectators. What would be the impact of auditory stimulation on kinesthetic experience and/or aesthetic appreciation of the dance? What would be the effect of removing music altogether, so that spectators watched dance while hearing only the performers’ breathing and footfalls? We investigated audience experience through qualitative research, using post-performance focus groups, while a separately conducted functional brain imaging (fMRI) study measured the synchrony in brain activity across spectators when they watched dance with sound or breathing only. When audiences watched dance accompanied by music the fMRI data revealed evidence of greater intersubject synchronisation in a brain region consistent with complex auditory processing. The audience research found that some spectators derived pleasure from finding convergences between two complex stimuli (dance and music). The removal of music and the resulting audibility of the performers’ breathing had a significant impact on spectators’ aesthetic experience. The fMRI analysis showed increased synchronisation among observers, suggesting greater influence of the body when interpreting the dance stimuli. The audience research found evidence of similar corporeally focused experience. The paper discusses possible connections between the findings of our different approaches, and considers the implications of this study for interdisciplinary research collaborations between arts and sciences.
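    A minimal sketch of the intersubject synchronisation logic referenced above, using hypothetical region-averaged time courses and a leave-one-out correlation scheme; this is one common ISC variant and not necessarily the pipeline used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_spectators, n_timepoints = 12, 300
# Hypothetical region-averaged BOLD time courses, one row per spectator.
timecourses = rng.standard_normal((n_spectators, n_timepoints))

def intersubject_correlation(data):
    """Leave-one-out ISC: correlate each spectator with the mean of all the others."""
    iscs = []
    for s in range(data.shape[0]):
        others_mean = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[s], others_mean)[0, 1])
    return np.array(iscs)

# Higher mean ISC in a condition suggests more strongly shared, stimulus-driven processing.
isc_per_spectator = intersubject_correlation(timecourses)
print(f"mean ISC: {isc_per_spectator.mean():.3f}")
```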

    Toward a social psychophysics of face communication

    As a highly social species, humans are equipped with a powerful tool for social communication—the face, which can elicit multiple social perceptions in others due to the rich and complex variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional research methods. More recently, the emerging field of social psychophysics has developed new methods designed to address this challenge. Here, we introduce and review the foundational methodological developments of social psychophysics, present recent work that has advanced our understanding of the face as a tool for social communication, and discuss the main challenges that lie ahead.

    Representational structure of fMRI/EEG responses to dynamic facial expressions

    Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has looked into the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in expression intensity and category (happy, angry, surprised). We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT) and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG timepoint. Our results show a specific role for IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity.
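    The time-resolved model comparison in the EEG analysis can be sketched as follows: a hypothetical intensity-rating RDM is rank-correlated with channel-pattern RDMs at each timepoint. All data, dimensions, and model choices here are illustrative assumptions rather than the paper's actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_stimuli, n_channels, n_times = 48, 64, 100

# Hypothetical EEG response patterns: stimulus x channel x timepoint.
eeg = rng.standard_normal((n_stimuli, n_channels, n_times))
# Hypothetical behavioural intensity ratings, turned into a model RDM.
intensity = rng.uniform(0.0, 1.0, size=n_stimuli)
model_rdm = pdist(intensity[:, None], metric="euclidean")

# Rank-correlate the model RDM with the EEG pattern RDM at every timepoint.
model_fit = np.array([
    spearmanr(pdist(eeg[:, :, t], metric="correlation"), model_rdm)[0]
    for t in range(n_times)
])
print(model_fit.shape)  # (100,) model-EEG similarity time course
```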