    Intracranial markers of conscious face perception in humans

    Investigations of the neural basis of consciousness have greatly benefited from protocols that involve the presentation of stimuli at perceptual threshold, enabling the assessment of the patterns of brain activity that correlate with conscious perception, independently of any changes in sensory input. However, the comparison between perceived and unperceived trials would be expected to reveal not only the core neural substrate of a particular conscious perception, but also aspects of brain activity that facilitate, hinder or tend to follow conscious perception. We take a step towards resolving these confounds by jointly analysing the neural responses observed during the presentation of faces partially masked by Continuous Flash Suppression and those observed during the unmasked presentation of faces and other images in the same subjects. We employed multidimensional classifiers to decode physical properties of stimuli or perceptual states from spectrotemporal representations of electrocorticographic signals (1071 channels in 5 subjects). Neural activity in certain face-responsive areas located in both the fusiform gyrus and the lateral-temporal/inferior-parietal cortex discriminated seen vs. unseen faces in the masked paradigm and upright faces vs. other categories in the unmasked paradigm. However, only the former discriminated upright vs. inverted faces in the unmasked paradigm. Our results suggest a prominent role for the fusiform gyrus in the configural perception of faces, and possibly of other objects that are processed holistically. More generally, we advocate the comparative analysis of neural recordings obtained during different, but related, experimental protocols as a promising direction towards elucidating the functional specificities of the patterns of neural activation that accompany our conscious experiences.
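
    The decoding approach described in this abstract can be illustrated with a minimal sketch: train a classifier at each time bin on spectrotemporal features pooled across channels. The array shapes, the logistic-regression classifier, and all variable names below are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of time-resolved decoding from spectrotemporal features.
# All array names and shapes are hypothetical, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: trials x channels x frequencies x time bins,
# plus a binary label per trial (1 = face reported seen, 0 = unseen).
n_trials, n_channels, n_freqs, n_times = 200, 64, 8, 50
X = rng.standard_normal((n_trials, n_channels, n_freqs, n_times))
y = rng.integers(0, 2, n_trials)

# Decode separately at each time bin, pooling channels and frequencies
# into one feature vector per trial (one flavour of "multidimensional
# classifier" over a spectrotemporal representation).
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, :, t].reshape(n_trials, -1), y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max())
```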

    Time-varying effective EEG source connectivity: the optimization of model parameters

    Adaptive estimation methods based on the general Kalman filter are powerful tools for investigating brain network dynamics, given the non-stationary nature of neural signals. These methods rely on two parameters, the model order p and the adaptation constant c, which determine the resolution and smoothness of the time-varying multivariate autoregressive estimates. Sub-optimal filtering may introduce systematic biases in the frequency domain and temporal distortions, leading to fallacious interpretations. Thus, the performance of these methods heavily depends on an accurate choice of these two parameters in the filter design. In this work, we sought to define an objective criterion for the optimal choice of these parameters. Since residual- and information-based criteria are not guaranteed to reach an absolute minimum, we propose to study the partial derivatives of these functions to guide the choice of p and c. To validate the performance of our method, we used a dataset of human visual evoked potentials recorded during face perception, for which the generation and propagation of information in the brain are well understood, and a set of simulated data for which the ground truth is available.
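
    The parameter-scan idea can be sketched as follows: fit an adaptive AR model for each candidate (p, c) pair, evaluate a residual criterion, and inspect its partial derivatives. The normalized-LMS update below is a simple stand-in for the general Kalman filter, and all signals and names are simulated placeholders, not the authors' implementation.

```python
# Minimal sketch of scanning model order p and adaptation constant c
# for an adaptive AR estimator, then inspecting partial derivatives of
# a residual-based criterion.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical non-stationary signal: AR(2) with a slowly drifting coefficient.
n = 2000
x = np.zeros(n)
for t in range(2, n):
    a1 = 1.5 + 0.2 * np.sin(2 * np.pi * t / n)   # time-varying coefficient
    x[t] = a1 * x[t - 1] - 0.8 * x[t - 2] + rng.standard_normal()

def residual_mse(x, p, c):
    """Mean squared one-step prediction error of an adaptive AR(p) fit."""
    w = np.zeros(p)
    err = []
    for t in range(p, len(x)):
        past = x[t - p:t][::-1]                        # most recent sample first
        e = x[t] - w @ past
        w = w + c * e * past / (past @ past + 1e-12)   # normalized-LMS step
        err.append(e * e)
    return np.mean(err)

orders = np.arange(1, 7)                  # candidate model orders p
consts = np.logspace(-3, -0.5, 12)        # candidate adaptation constants c
crit = np.array([[residual_mse(x, p, c) for c in consts] for p in orders])

# Residual criteria need not reach an absolute minimum; the partial
# derivatives of the criterion surface can still flag where further
# changes in p or c stop paying off.
d_dp, d_dc = np.gradient(crit, orders, consts)
print("criterion surface shape:", crit.shape)
```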

    Decoding the Temporal Representation of Facial Expression in Face-selective Regions

    The human ability to discern facial expressions in a timely manner typically relies on rapid neural computations in distributed face-selective regions. To study the time course of this process in regions of interest, we used magnetoencephalography (MEG) to measure neural responses while participants viewed facial expressions depicting seven types of emotion (happiness, sadness, anger, disgust, fear, surprise, and neutral). Time-resolved decoding of neural responses in face-selective sources within the inferior parietal cortex (IP-faces), lateral occipital cortex (LO-faces), fusiform gyrus (FG-faces), and posterior superior temporal sulcus (pSTS-faces) revealed that facial expressions were successfully classified starting from ∼100 to 150 ms after stimulus onset. Interestingly, the LO-faces and IP-faces showed greater accuracy than the FG-faces and pSTS-faces. To examine the nature of the information processed in these face-selective regions, we fed the facial expression stimuli into a convolutional neural network (CNN) and performed similarity analyses against the human neural responses. The results showed that neural responses in the LO-faces and IP-faces, starting ∼100 ms after the stimuli, were more strongly correlated with deep representations of emotional categories than with image-level information from the input images. Additionally, we observed a relationship between behavioral performance and the neural responses in the LO-faces and IP-faces, but not in the FG-faces and pSTS-faces. Together, these results provide a comprehensive picture of the time course and nature of the information involved in facial expression discrimination across multiple face-selective regions, advancing our understanding of how the human brain processes facial expressions.
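
    A minimal sketch of the similarity logic: compute a neural representational dissimilarity matrix (RDM) at each time point and rank-correlate it with an RDM built from CNN features and an RDM built from raw pixels. The placeholder data, feature dimensions, and correlation-distance metric below are assumptions, not the study's exact pipeline.

```python
# Minimal sketch of time-resolved representational similarity analysis:
# does the neural response track deep CNN features or pixel-level input?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

n_stimuli, n_sensors, n_times = 70, 100, 60   # e.g. 7 emotions x 10 exemplars
neural = rng.standard_normal((n_stimuli, n_sensors, n_times))
cnn_features = rng.standard_normal((n_stimuli, 512))   # deep-layer activations
pixels = rng.standard_normal((n_stimuli, 4096))        # flattened input images

# Model RDMs: pairwise correlation distances between stimuli.
rdm_cnn = pdist(cnn_features, metric="correlation")
rdm_pix = pdist(pixels, metric="correlation")

# At each time point: neural RDM across stimuli, then rank-correlate it
# with the model RDMs to see which representation tracks the response.
r_cnn, r_pix = [], []
for t in range(n_times):
    rdm_neural = pdist(neural[:, :, t], metric="correlation")
    r_cnn.append(spearmanr(rdm_neural, rdm_cnn)[0])
    r_pix.append(spearmanr(rdm_neural, rdm_pix)[0])
print("mean CNN vs pixel correlation:", np.mean(r_cnn), np.mean(r_pix))
```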

    Signal Detection Analyses of the Relation of Prospective and Retrospective Metacognitive Judgments

    Tip-of-the-tongue states (TOT) and feeling-of-knowing judgments (FOK) are metacognitive experiences about the possibility of future retrieval of information when recall fails. Studies show that experiencing a TOT or a high FOK increases the possibility of correct retrieval of the missing information, which demonstrates metacognitive sensitivity. However, evidence for the metacognitive sensitivity of TOT and FOK derives mainly from measures that conflate metacognitive sensitivity with metacognitive bias. Moreover, no study has evaluated the influence of TOT and FOK judgments on the unbiased metacognitive sensitivity of other metacognitive experiences and judgments, in this case, confidence judgments. In this study, I used general recognition theory (GRT) to provide a bias-free assessment of the metacognitive sensitivity of TOT and FOK and to evaluate the influence of TOT and FOK on the metacognitive sensitivity of confidence judgments. In two experiments, I asked participants to perform a memory recall task. If recall failed, participants provided metacognitive judgments of TOT and FOK, memory recognition responses, and metacognitive judgments of confidence in those recognition responses. After collecting the behavioral data, I fit two different GRT models to the data to assess the metacognitive sensitivity of TOT and FOK. Using the estimated parameters of the models, I constructed two sensitivity vs. metacognition (SvM) curves, which represent sensitivity in the recognition task as a function of the strength of the metacognitive experience: an SvM curve for TOT and an SvM curve for FOK. In addition, to evaluate the influence of TOT and FOK on the metacognitive sensitivity of confidence judgments, I fit two further GRT models and constructed two additional SvM curves, which represent the metacognitive sensitivity of confidence as a function of the strength of the TOT and FOK judgments. The GRT-based analyses showed that experiencing a TOT or a high FOK is associated with an increase in sensitivity in the memory recognition task and an increase in the metacognitive sensitivity of confidence judgments. These results are the first bias-free indication of the metacognitive sensitivity of TOT and FOK judgments and the first report of the influence of TOT and FOK on the metacognitive sensitivity of confidence judgments.
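
    The SvM-curve logic can be sketched with a unidimensional signal-detection stand-in for the full GRT models: estimate recognition sensitivity (d') separately at each level of a metacognitive rating. The simulated data, rating scale, and fixed criterion below are illustrative assumptions, not the study's models or results.

```python
# Minimal sketch of a sensitivity-vs-metacognition (SvM) curve using a
# unidimensional signal-detection stand-in for the full GRT analysis:
# recognition d' is computed separately at each metacognitive level.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

n = 4000
is_old = rng.integers(0, 2, n)                # 1 = studied item, 0 = new
# Simulate evidence whose old/new separation grows with the (here
# independently drawn) metacognitive rating, e.g. a FOK level 0..3.
meta = rng.integers(0, 4, n)
evidence = rng.standard_normal(n) + is_old * (0.5 + 0.4 * meta)
say_old = evidence > 0.5                      # fixed recognition criterion

def dprime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate), with rates clipped to
    avoid infinite z-scores."""
    h = np.clip(hit_rate, 0.01, 0.99)
    f = np.clip(fa_rate, 0.01, 0.99)
    return norm.ppf(h) - norm.ppf(f)

# SvM curve: recognition sensitivity as a function of metacognitive level.
svm_curve = []
for level in range(4):
    sel = meta == level
    hit = np.mean(say_old[sel & (is_old == 1)])
    fa = np.mean(say_old[sel & (is_old == 0)])
    svm_curve.append(dprime(hit, fa))
print("d' by metacognitive level:", np.round(svm_curve, 2))
```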

    In the face of consciousness: how emotion, orientation, and gaze modulate face perception

    Human faces convey essential information for social behaviour, such as information about others’ mental states and intentions. Crucially, many studies have claimed that several facial features, such as configural facial information, emotional expressions, and gaze direction, modulate how faces gain access to perceptual awareness. However, the procedures employed in these studies suffer from multiple methodological issues and limitations. In a series of experiments, I tested whether configural facial features, emotional expressions, and gaze direction modulate how faces gain access to awareness. To achieve this, I used stringent procedures that allow measurement of perceptual sensitivity and decision criterion for the location and identity of faces. I used these measures to assess how long faces take to reach awareness as they overcome Continuous Flash Suppression, an interocular suppression technique that can render images invisible for several seconds. Using classical and Bayesian analyses, I found that configural face processing (which occurs for upright, but not inverted, faces) promotes faces’ access to awareness. Similarly, faces making eye contact gain access to awareness faster than faces looking away. Contrary to past claims, however, I found that faces expressing negative emotions (anger or fear) do not enter awareness faster than faces with neutral expressions. In another series of experiments, I measured the minimal exposure durations required for configural facial processing, emotion processing, metacognition, and conscious access. To this end, I used a newly developed LCD tachistoscope that can present images with sub-millisecond precision, and examined both behavioural (psychophysical) and neural (electroencephalographic) markers of processing. I found that configural face processing promotes faces’ access to awareness, in that upright faces require shorter exposure durations than inverted faces to be seen; crucially, only around four milliseconds of exposure were required to find this advantage. Fearful expressions, however, did not gain access to awareness faster than neutral expressions. Evidence from the neural markers extended this picture by showing that the exposure duration required for configural facial processing is the same as that required for faces to reach conscious access. Finally, around six milliseconds of exposure were required for emotion processing. Together, these findings shed light on the factors that affect faces’ access to awareness: configural facial information and gaze direction can modulate faces’ access to perceptual awareness, and this modulation is due to perceptual sensitivity rather than decision criterion. Furthermore, the perceptual processing of faces follows a hierarchical pattern: configural information precedes and facilitates access to awareness, whereas emotion processing follows awareness.
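
    The sensitivity/criterion distinction drawn in this abstract rests on standard signal-detection formulas, sketched below with made-up response counts; the log-linear correction is one common convention, not necessarily the thesis's exact analysis.

```python
# Minimal sketch of separating perceptual sensitivity (d') from decision
# criterion (c) in a detection task. Hit/false-alarm counts are made-up
# placeholders for, e.g., upright vs. inverted faces under suppression.
from scipy.stats import norm

def sdt_measures(hits, misses, fas, crs):
    """Return (d', criterion c) from response counts, with a log-linear
    correction (add 0.5 to counts) to avoid infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1.0)        # corrected false-alarm rate
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

print("upright :", sdt_measures(hits=80, misses=20, fas=15, crs=85))
print("inverted:", sdt_measures(hits=65, misses=35, fas=15, crs=85))
```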