    Task modulation of single-neuron activity in the human amygdala and hippocampus

    The human amygdala and hippocampus are critically involved in many aspects of face perception. However, it remains unclear how task demands or evaluative contexts modulate the processes underlying face perception. In this study, we employed two task instructions while participants viewed the same faces and recorded single-neuron activity from the human amygdala and hippocampus. We comprehensively analyzed task modulation of three key aspects of face processing and found that neurons in the amygdala and hippocampus (1) encoded high-level social traits such as perceived facial trustworthiness and dominance, a response that was modulated by task instructions; (2) encoded low-level facial features and demonstrated region-based feature coding, which was not modulated by task instructions; and (3) encoded fixations on salient face parts such as the eyes and mouth, which was also not modulated by task instructions. Together, our results provide a comprehensive survey of task modulation of the neural processes underlying face perception at the single-neuron level in the human amygdala and hippocampus.
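
    As an illustration of the kind of analysis this abstract describes, the sketch below tests whether a neuron's linear encoding of a social trait (here, perceived trustworthiness) differs between two task instructions, using a permutation test on the slope difference. All data, variable names, and effect sizes are hypothetical stand-ins, not the paper's actual data or code.

```python
# Hedged sketch: does a neuron's trait encoding depend on task instruction?
# Everything here is synthetic; shapes and names are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical single-neuron data: one firing rate per trial, the social trait
# rating of the face shown on that trial, and the task condition (0 or 1).
n_trials = 200
trait_rating = rng.uniform(1, 9, n_trials)      # e.g., trustworthiness, 1-9
task = rng.integers(0, 2, n_trials)             # two task instructions
firing_rate = 5 + 0.8 * trait_rating * task + rng.normal(0, 1.5, n_trials)

# Fit a separate linear encoding model (rate ~ rating) per task condition.
def slopes_by_condition(labels):
    return [stats.linregress(trait_rating[labels == t],
                             firing_rate[labels == t]).slope for t in (0, 1)]

s0, s1 = slopes_by_condition(task)
observed = s1 - s0

# Permutation test: shuffle task labels to build a null distribution of
# slope differences, then compare the observed difference against it.
null = []
for _ in range(2000):
    n0, n1 = slopes_by_condition(rng.permutation(task))
    null.append(n1 - n0)
p = np.mean(np.abs(null) >= abs(observed))
print(f"slope difference across tasks = {observed:.2f}, permutation p = {p:.3f}")
```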

    A human single-neuron dataset for face perception

    The human amygdala and hippocampus have long been associated with face perception. Here, we present a dataset of single-neuron activity in the human amygdala and hippocampus during face perception. We recorded 2082 neurons from the human amygdala and hippocampus while neurosurgical patients with intractable epilepsy performed a one-back task using natural face stimuli, which mimics natural face perception. Specifically, our data include (1) single-neuron activity from the amygdala (996 neurons) and hippocampus (1086 neurons), (2) eye movements (gaze position and pupil size), (3) psychological assessments of the patients, and (4) social trait judgment ratings from a subset of patients and a large sample of participants from the general population. Together, our comprehensive dataset with a large population of neurons can facilitate multifaceted investigations of face perception at the highest spatial and temporal resolution currently available in humans.
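
    As a hint of how single-neuron data of this kind might be used, the sketch below computes a peri-stimulus time histogram (PSTH) of one neuron's spikes aligned to face onsets. The session layout, timings, and variable names are illustrative assumptions, not the dataset's actual format.

```python
# Hedged sketch: PSTH of spike times around stimulus onsets.
# Synthetic data stand in for a real recording session.
import numpy as np

def psth(spike_times, event_times, window=(-0.5, 1.5), bin_size=0.05):
    """Mean firing rate (Hz) around each event, in fixed time bins."""
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in event_times:
        counts += np.histogram(spike_times - t, bins=edges)[0]
    return counts / (len(event_times) * bin_size), edges[:-1]

# Hypothetical session: spike times (s) for one amygdala neuron and the onset
# times of face stimuli in a one-back task.
rng = np.random.default_rng(1)
spikes = np.sort(rng.uniform(0, 600, 3000))   # ~5 Hz over a 600 s session
face_onsets = np.arange(5.0, 595.0, 2.5)      # one face every 2.5 s

rates, bin_starts = psth(spikes, face_onsets)
baseline = rates[bin_starts < 0].mean()        # pre-onset bins
evoked = rates[(bin_starts >= 0.1) & (bin_starts < 0.5)].mean()
print(f"baseline ~ {baseline:.1f} Hz, 100-500 ms post-onset ~ {evoked:.1f} Hz")
```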

    A uniform human multimodal dataset for emotion perception and judgment

    Face perception is a fundamental aspect of human social interaction, yet most research on this topic has focused on single modalities and specific aspects of face perception. Here, we present a comprehensive multimodal dataset for examining facial emotion perception and judgment. This dataset includes EEG data from 97 unique neurotypical participants across 8 experiments, fMRI data from 19 neurotypical participants, single-neuron data from 16 neurosurgical patients (22 sessions), eye tracking data from 24 neurotypical participants, behavioral and eye tracking data from 18 participants with ASD and 15 matched controls, and behavioral data from 3 rare patients with focal bilateral amygdala lesions. Notably, participants in all modalities performed the same task. Overall, this multimodal dataset provides a comprehensive exploration of facial emotion perception, emphasizing the importance of integrating multiple modalities to gain a holistic understanding of this complex cognitive process. The dataset serves as a key missing link between the human neuroimaging and neurophysiology literatures and facilitates the study of neuropsychiatric populations.

    Differences in the link between social trait judgment and socio-emotional experience in neurotypical and autistic individuals

    Neurotypical (NT) individuals and individuals with autism spectrum disorder (ASD) make different judgments of social traits from others' faces; they also exhibit different socio-emotional responses in social interactions. A common hypothesis is that the differences in face perception in ASD compared with NT are related to distinct social behaviors. To test this hypothesis, we combined a face trait judgment task with a novel interpersonal transgression task that induces and measures social emotions and behaviors. ASD and neurotypical participants viewed a large set of naturalistic facial stimuli while judging them on a comprehensive set of social traits (e.g., warm, charismatic, critical). They also completed an interpersonal transgression task in which their responsibility for causing an unpleasant outcome for a social partner was manipulated. The purpose of the latter task was to measure participants' emotional (e.g., guilt) and behavioral (e.g., compensation) responses to interpersonal transgression. We found that, compared with neurotypical participants, ASD participants' self-reported guilt and compensation tendency were less sensitive to our responsibility manipulation. Importantly, ASD participants and neurotypical participants showed distinct associations between self-reported guilt and judgments of criticalness from others' faces. These findings reveal a novel link between the perception of social traits and socio-emotional responses in ASD.

    Feature-based encoding of face identity by single neurons in the human medial temporal lobe

    Neurons in the human medial temporal lobe (MTL) that are selective for the identity of specific people are classically thought to encode identity invariant to visual features. However, it remains largely unknown how visual information from higher visual cortex is translated into a semantic representation of an individual person. Here, we show that some MTL neurons are selective for multiple different face identities on the basis of shared features that form clusters in the representation of a deep neural network trained to recognize faces. Contrary to prevailing views, we find that these neurons represent an individual's face with feature-based encoding, rather than through association with concepts. The response of feature neurons depended on neither face identity nor face familiarity, and the region of feature space to which they were tuned predicted their response to new face stimuli. Our results provide critical evidence bridging the perception-driven representation of facial features in higher visual cortex and the memory-driven representation of semantics in the MTL, which may form the basis for declarative memory.
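
    The sketch below illustrates the feature-based encoding idea with synthetic data: a simulated neuron responds to a cluster of identities that share features in an embedding space, and its response falls off with distance from that preferred region. A real analysis would use embeddings from a face-recognition network (e.g., a penultimate layer); every name and number here is a stand-in.

```python
# Hedged sketch: a "feature neuron" tuned to a region of face-feature space
# rather than to a single identity. Embeddings and responses are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical DNN embeddings for 50 face identities (e.g., penultimate-layer
# activations averaged over images of each identity).
n_ids, dim = 50, 128
embeddings = rng.normal(size=(n_ids, dim))

# Cluster identities in feature space.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embeddings)

# Simulate a neuron that fires more for every identity in one cluster,
# regardless of which identity it is (feature-based, not identity-based).
preferred = 3
rates = 4 + 6 * (clusters == preferred) + rng.normal(0, 1, n_ids)

# Feature-based encoding predicts generalization within the preferred region:
# identities nearer the preferred cluster's centroid should drive the neuron
# more, so distance and rate should correlate negatively.
centroid = embeddings[clusters == preferred].mean(axis=0)
dist = np.linalg.norm(embeddings - centroid, axis=1)
r = np.corrcoef(dist, rates)[0, 1]
print(f"distance-to-preferred-region vs. rate: r = {r:.2f} (expected < 0)")
```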

    SpikeSortingFromBCIdata
