
    Face-selective responses in combined EEG/MEG recordings with fast periodic visual stimulation (FPVS).

    Fast periodic visual stimulation (FPVS) allows the recording of objective brain responses of human face categorization (i.e., generalizable face-selective responses) with a high signal-to-noise ratio. This approach has been successfully employed in a number of scalp electroencephalography (EEG) studies, but has not yet been used with magnetoencephalography (MEG), let alone with combined MEG/EEG recordings and distributed source estimation. Here, we presented various natural images of faces periodically (1.2 Hz) among natural images of objects (base frequency 6 Hz) whilst recording simultaneous EEG and MEG in 15 participants. Both measurement modalities showed face-selective responses at 1.2 Hz and its harmonics across participants, with high and comparable signal-to-noise ratios (SNR) in about 3 min of stimulation. The correlation of face categorization responses between EEG and the two MEG sensor types was lower than that between the two MEG sensor types, indicating that EEG and MEG provide partly independent information about the sources of face-selective responses. Face-selective EEG responses were right-lateralized, as reported previously, and were numerically but non-significantly right-lateralized in the MEG data. Distributed source estimation based on combined EEG/MEG signals confirmed a more bilateral face-selective response in visual brain regions located anteriorly to the common response to all stimuli at 6 Hz and its harmonics. Conventional sensor- and source-space analyses of evoked responses in the time domain further corroborated this result. Our results demonstrate that FPVS in combination with simultaneously recorded EEG and MEG may serve as an efficient localizer paradigm for human face categorization.
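In FPVS analyses of this kind, the tagged response is typically quantified as a spectral signal-to-noise ratio: the amplitude at the stimulation frequency divided by the mean amplitude of neighbouring frequency bins. The sketch below illustrates that computation on synthetic data; it is an illustrative assumption, not the authors' pipeline, and the function name `fpvs_snr` and its parameters are hypothetical:

```python
import numpy as np

def fpvs_snr(eeg, sfreq, target_freq, n_neighbors=10, gap=1):
    """SNR at target_freq: spectral amplitude at the target bin divided by
    the mean amplitude of n_neighbors bins on each side, skipping `gap`
    bins immediately adjacent to the target."""
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg)) / n                 # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    i = int(np.argmin(np.abs(freqs - target_freq)))    # closest bin
    lo = np.arange(i - gap - n_neighbors, i - gap)
    hi = np.arange(i + gap + 1, i + gap + 1 + n_neighbors)
    noise = amp[np.concatenate([lo, hi])].mean()
    return amp[i] / noise

# Example: a 1.2 Hz "face" response embedded in noise, ~3 min at 250 Hz
sfreq, dur = 250.0, 180.0
t = np.arange(0, dur, 1.0 / sfreq)
rng = np.random.default_rng(0)
signal = 2.0 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 1, t.size)
print(fpvs_snr(signal, sfreq, 1.2))  # well above 1 at the tagged frequency
```

With ~3 min of data the frequency resolution (1/180 Hz) places 1.2 Hz on an exact bin, which is why long stimulation sequences yield such high SNR.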

    Frequency-based neural discrimination in fast periodic visual stimulation

    Humans capitalize on statistical cues to discriminate fundamental units of information within complex streams of sensory input. We sought neural evidence for this phenomenon by combining fast periodic visual stimulation (FPVS) and EEG recordings. Skilled readers were exposed to sequences of linguistic items with decreasing familiarity, presented at a fast rate and periodically interleaved with oddballs. Crucially, each sequence comprised stimuli of the same category, and the only distinction between base and oddball items was the frequency of occurrence of individual tokens within a stream. Frequency-domain analyses revealed robust neural responses at the oddball presentation rate in all conditions, reflecting the discrimination between two locally emerging groups of items informed purely by token frequency. The results provide evidence for a fundamental frequency-tuned mechanism that operates under high temporal constraints and could underpin category bootstrapping. Concurrently, they showcase the potential of FPVS for providing a direct neural measure of implicit statistical learning.
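The base/oddball stream design used in these FPVS studies can be sketched as follows: base items appear at every position, and every nth position carries an oddball, so oddballs recur at base rate / n (e.g., 6 Hz / 5 = 1.2 Hz). The helper `make_fpvs_sequence` and all item names are hypothetical, not from the study:

```python
import random

def make_fpvs_sequence(base_items, oddball_items, n_stim, oddball_every=5):
    """Build a stimulus list: each oddball_every-th position is drawn from
    oddball_items, all others from base_items. At a 6 Hz presentation rate
    with oddball_every=5, oddballs recur at 6/5 = 1.2 Hz."""
    seq = []
    for i in range(n_stim):
        pool = oddball_items if (i + 1) % oddball_every == 0 else base_items
        seq.append(random.choice(pool))
    return seq

seq = make_fpvs_sequence(["obj1", "obj2", "obj3"], ["faceA", "faceB"], 30)
# every 5th stimulus (indices 4, 9, 14, ...) is drawn from the oddball pool
```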

    Rapid extraction of emotion regularities from complex scenes in the human brain

    Adaptive behavior requires the rapid extraction of behaviorally relevant information from the environment, with particular emphasis on emotional cues. However, the speed of emotional feature extraction from complex visual environments is largely undetermined. Here we use objective electrophysiological recordings in combination with frequency tagging to demonstrate that the extraction of emotional information from neutral, pleasant, or unpleasant naturalistic scenes can be completed at a presentation speed of 167 ms per scene (i.e., 6 Hz) under high perceptual load. Emotional compared to neutral pictures evoked enhanced electrophysiological responses with distinct topographical activation patterns originating from different neural sources. Cortical facilitation in early visual cortex was also more pronounced for scenes with pleasant compared to unpleasant or neutral content, suggesting a positivity offset mechanism that dominates under conditions of rapid scene processing. These results significantly advance our knowledge of complex scene processing by demonstrating rapid integrative content identification, particularly for emotional cues relevant to adaptive behavior in complex environments.

    Present and past selves: a steady-state visual evoked potentials approach to self-face processing

    The self-face has a prioritized status in the processing of incoming visual inputs. As the self-face changes over the lifespan, this stimulus seems well-suited for investigating the self across time. Here, steady-state visual evoked potentials (SSVEP; oscillatory brain responses at the frequency of a periodic stimulation) were used to investigate this topic. Different types of faces (present self, past self, close other's, unknown, scrambled) flickered four times per second in two types of stimulation ('identical', with the same image of a given type of face; 'different', with different images of the same type of face). Each of the 10 stimulation sessions lasted 90 seconds and was repeated three times. EEG data were recorded and analyzed in 20 participants. In general, faces evoked higher SSVEPs than scrambled faces. The impact of identical and different stimulation was similar for faces and scrambled faces: SSVEPs to different stimuli (faces, scrambled faces) were enhanced in comparison to identical ones. Present self-faces evoked higher SSVEP responses than past self-faces in the 'different' stimulation condition only. Thus, our results show that the physical aspects of the present and past selves are differentiated at the neural level in the absence of overt behavior.

    Spatial and temporal (non)binding of audiovisual rhythms in sensorimotor synchronisation

    All data are held in a public repository, available at the OSF database (URL: https://osf.io/2jr48/?view_only=17e3f6f57651418c980832e00d818072).

    Human movement synchronisation with moving objects strongly relies on visual input. However, auditory information also plays an important role, since real environments are intrinsically multimodal. We used electroencephalography (EEG) frequency tagging to investigate the selective neural processing and integration of visual and auditory information during motor tracking, and tested the effects of spatial and temporal congruency between audiovisual modalities. EEG was recorded while participants tracked with their index finger a red dot flickering at fV = 15 Hz and oscillating horizontally on a screen. The simultaneous auditory stimulus was modulated in pitch (rate fA = 32 Hz) and lateralised between left and right audio channels to induce the perception of a periodic displacement of the sound source. Audiovisual congruency was manipulated in terms of space in Experiment 1 (no motion, same direction, or opposite direction) and timing in Experiment 2 (no delay, medium delay, or large delay). In both experiments, significant EEG responses were elicited at the fV and fA tagging frequencies. It was also hypothesised that intermodulation products, at frequencies fV ± fA, corresponding to the nonlinear integration of visual and auditory stimuli would be elicited due to audiovisual integration, especially in congruent conditions. However, these components were not observed. Moreover, synchronisation and EEG results were not influenced by the congruency manipulations, which invites further exploration of the conditions that may modulate audiovisual processing and the motor tracking of moving objects.

    We thank Ashleigh Clibborn and Ayah Hammoud for their assistance with data collection. This work was supported by a grant from the Australian Research Council (DP170104322, DP220103047). OML is supported by the Portuguese Foundation for Science and Technology and the Portuguese Ministry of Science, Technology and Higher Education, through national funds, within the scope of the Transitory Disposition of Decree No. 57/2016 of 29 August, amended by Law No. 57/2017 of 19 July (Ref.: SFRH/BPD/72710/2010).
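The intermodulation logic tested here (responses at fV ± fA as a signature of nonlinear audiovisual integration) generalizes to components of the form |n·fV + m·fA| for nonzero integers n, m. A small sketch for enumerating candidate intermodulation frequencies; the function name and parameters are hypothetical, not part of the study's analysis:

```python
def intermodulation_freqs(f1, f2, max_order=3, fmax=60.0):
    """Candidate intermodulation frequencies |n*f1 + m*f2| for nonzero
    integers n, m with |n| + |m| <= max_order, limited to (0, fmax].
    Pure harmonics (n or m equal to 0) are excluded."""
    ims = set()
    for n in range(-max_order, max_order + 1):
        for m in range(-max_order, max_order + 1):
            if n == 0 or m == 0:
                continue
            if abs(n) + abs(m) > max_order:
                continue
            f = abs(n * f1 + m * f2)
            if 0 < f <= fmax:
                ims.add(round(f, 3))
    return sorted(ims)

print(intermodulation_freqs(15.0, 32.0))  # includes 17.0 (fA - fV) and 47.0 (fA + fV)
```

For the tagging frequencies above, the first-order components fA - fV = 17 Hz and fA + fV = 47 Hz are the ones the study predicted but did not observe.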

    Human face and gaze perception is highly context specific and involves bottom-up and top-down neural processing

    This review summarizes human perception and processing of face and gaze signals. Face and gaze signals are important means of non-verbal social communication. The review highlights that: (1) some evidence suggests that the perception and processing of facial information starts in the prenatal period; (2) the perception and processing of face identity, expression, and gaze direction is highly context specific, the effect of race and culture being a case in point. Culture affects, by means of experiential shaping and social categorization, the way in which information on face and gaze is collected and perceived; (3) face and gaze processing occurs in the so-called 'social brain'. Accumulating evidence suggests that the processing of facial identity, facial emotional expression, and gaze involves two parallel and interacting pathways: a fast and crude subcortical route and a slower cortical pathway. The flow of information is bi-directional and includes bottom-up and top-down processing. The cortical networks particularly include the fusiform gyrus, superior temporal sulcus (STS), intraparietal sulcus, temporoparietal junction, and medial prefrontal cortex.

    Automatic, early color-specific neural responses to object color knowledge

    Some familiar objects are associated with specific colors, e.g., rubber ducks with yellow. Whether and at what stage neural responses to these color associations occur remain open questions. We recorded frequency-tagged electroencephalogram (EEG) responses to periodic presentations of yellow-associated objects, shown among sequences of non-periodic blue-, red-, and green-associated objects. Both color and grayscale versions of the objects elicited yellow-specific responses, indicating an automatic activation of color knowledge from object shape. Follow-up experiments replicated these effects with green-specific responses and demonstrated modulated responses for incongruent color/object associations. Importantly, the onset of color-specific responses was as early for grayscale as for actually colored stimuli (before 100 ms), the latter additionally eliciting a conventional later response (approximately 140-230 ms) to the actual stimulus color. This suggests that the neural representation of familiar objects includes both diagnostic shape and color properties, such that shape can elicit associated color-specific responses before actual color-specific responses occur.

    Neural tracking and integration of 'self' and 'other' in improvised interpersonal coordination

    Humans coordinate their movements with one another in a range of everyday activities and skill domains. Optimal joint performance requires the continuous anticipation of and adaptation to each other's movements, especially when actions are spontaneous rather than pre-planned. Here we employ dual-EEG and frequency-tagging techniques to investigate how the neural tracking of self- and other-generated movements supports interpersonal coordination during improvised motion. LEDs flickering at 5.7 and 7.7 Hz were attached to participants’ index fingers in 28 dyads as they produced novel patterns of synchronous horizontal forearm movements. EEG responses at these frequencies revealed enhanced neural tracking of self-generated movement when leading and of other-generated movements when following. A marker of self-other integration at 13.4 Hz (inter-modulation frequency of 5.7 and 7.7 Hz) peaked when no leader was designated, and mutual adaptation and movement synchrony were maximal. Furthermore, the amplitude of EEG responses reflected differences in the capacity of dyads to synchronize their movements, offering a neurophysiologically grounded perspective for understanding perceptual-motor mechanisms underlying joint action.

    Decoding traces of memory during offline continuous electrical brain activity (EEG)

    Continuous electroencephalogram (EEG) recordings provide an excellent opportunity to track memory traces in rhythmic brain activity and to study the underlying neural signatures of memory processes. A promising approach for doing so is multivariate pattern classification (MVPC). These methods lend themselves very well to decoding the information that resides within whole distributed spatiotemporal patterns of activity, making it possible to detect traces of memory during sleep or wakefulness and thereby reveal valuable insights about memory function in these brain states. However, several methodological problems arise when decoding memory traces from brain activity in paradigm-free (offline) periods. Continuous EEG is prone to elevated levels of noise and distortion and has much higher dimensionality than single-trial EEG, because of the longer recording time and the lack of prior information about which time points are informative for classification. In this case, detecting traces of memory involves searching the whole spatiotemporal feature space to find where memory representations reside. Such high-dimensional data, especially when the signal-to-noise ratio and sample size are low, pose problems for the classification and interpretation of MVPC results. To address these problems, this thesis aims: 1) to develop a classification algorithm that enables decoding of continuous EEG to detect memory traces in paradigm-free periods; 2) to find EEG correlates of material-specific memory representations during offline periods of sleep and wakefulness; and 3) to provide a systematic method to interpret and validate the specificity of MVPC results. In chapter 2, we used our MVPC method to detect the 'when' and 'where' of sleep-dependent reprocessing of memory traces in humans. Although replay of neuronal activity during sleep has been shown in animal experiments, its dynamics and underlying mechanisms are still poorly understood in humans.
    We applied MVPC to human sleep EEG to test whether the brain reprocesses previously learned information during sleep, and examined the dynamics, neural signatures, and relevance of different sleep stages to this process. Here, we developed a two-step classification algorithm that incorporates channel-based feature weighting as well as a tailored preprocessing scheme optimized to decode continuous EEG data for between-subject classification. With this method, we demonstrate that the specific content of previous learning episodes is reprocessed during post-learning sleep. We find that memory reprocessing peaks during two distinct periods in the night and that both Rapid Eye Movement (REM) and non-REM sleep are involved in this process. To detect traces of short-term memory representations, in chapter 3 we employed MVPC to test whether electrical brain activity during short-term memory maintenance satisfies the necessary conditions for mnemonic representations, i.e., coding for memory content as well as retrieval success. We found that the content maintained in memory during the delay period, and whether it is subsequently recalled, can be decoded mainly from temporal, parietal, and frontal areas. Importantly, the only overlap between electrodes coding for retrieval success and those coding for memory content was found in parietal electrodes, indicating that a dedicated short-term memory representation resides in parietal cortex. Finally, chapter 4 aims to provide a systematic approach to validating the specificity of MVPC results. We investigate the consequences of the high sensitivity of MVPC to stimulus-related differences, which may confound the estimation of class differences when decoding cognitive concepts. We propose a method, which we call the concept-response curve, to determine how much decoding performance is specific to higher-order category processing versus lower-order stimulus processing.
    We show that this method can be used to quantify the relative contributions of concept- and stimulus-related components and to investigate the spatiotemporal dynamics of conceptual and perceptual processing.
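As a rough illustration of the MVPC idea (not the thesis's two-step algorithm), the sketch below runs a nearest-centroid classifier with leave-one-out cross-validation on synthetic "EEG feature" data; all names and parameters are hypothetical:

```python
import numpy as np

def loocv_nearest_centroid(X, y):
    """Leave-one-out cross-validated accuracy of a nearest-centroid
    classifier -- a minimal stand-in for multivariate pattern
    classification (MVPC) over spatiotemporal EEG features."""
    X, y = np.asarray(X, float), np.asarray(y)
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i            # hold out sample i
        Xtr, ytr = X[mask], y[mask]
        cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        correct += pred == y[i]
    return correct / len(y)

# Synthetic data: two classes of 64-dimensional "features" with shifted means
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 64)), rng.normal(0.8, 1, (40, 64))])
y = np.array([0] * 40 + [1] * 40)
print(loocv_nearest_centroid(X, y))
```

Leaving each sample out of centroid estimation before classifying it is what guards against the overfitting risk that the thesis describes for high-dimensional, low-SNR data.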

    Neural responses in a fast periodic visual stimulation paradigm reveal domain-general visual discrimination deficits in developmental prosopagnosia

    We investigated selective impairments of visual identity discrimination in developmental prosopagnosia (DP), using a fast periodic identity-oddball stimulation paradigm with electroencephalography (EEG). In Experiment 1, neural responses to unfamiliar face identity changes were strongly attenuated in individuals with DP as compared to control participants, to the same extent for upright and inverted faces. This reduction of face identity discrimination responses, which was confirmed in Experiment 2, provides direct evidence for deficits in the visual processing of unfamiliar facial identity in DP. Importantly, Experiment 2 demonstrated that individuals with DP showed attenuated neural responses to identity oddballs not only with face images, but also with non-face images (cars). This result strongly suggests that rapid identity-related visual processing impairments in DP are not restricted to faces, but also affect familiar classes of non-face stimuli; visual discrimination deficits in DP do not appear to be face-specific. To account for these findings, we propose a new account of DP as a domain-general deficit in rapid visual discrimination.