
    Spatial and temporal (non)binding of audiovisual rhythms in sensorimotor synchronisation

    All data are held in a public repository, available at the OSF database (URL: https://osf.io/2jr48/?view_only=17e3f6f57651418c980832e00d818072).

    Human movement synchronisation with moving objects strongly relies on visual input. However, auditory information also plays an important role, since real environments are intrinsically multimodal. We used electroencephalography (EEG) frequency tagging to investigate the selective neural processing and integration of visual and auditory information during motor tracking, and tested the effects of spatial and temporal congruency between audiovisual modalities. EEG was recorded while participants tracked with their index finger a red dot flickering at fV = 15 Hz and oscillating horizontally on a screen. The simultaneous auditory stimulus was modulated in pitch (rate fA = 32 Hz) and lateralised between the left and right audio channels to induce the perception of a periodic displacement of the sound source. Audiovisual congruency was manipulated in terms of space in Experiment 1 (no motion, same direction, or opposite direction) and timing in Experiment 2 (no delay, medium delay, or large delay). In both experiments, significant EEG responses were elicited at the fV and fA tagging frequencies. We also hypothesised that intermodulation products at the frequencies fV ± fA, reflecting the nonlinear integration of the visual and auditory stimuli, would be elicited due to audiovisual integration, especially in the congruent conditions. However, these components were not observed. Moreover, synchronisation and EEG results were not influenced by the congruency manipulations, which invites further exploration of the conditions that may modulate audiovisual processing and the motor tracking of moving objects.

    We thank Ashleigh Clibborn and Ayah Hammoud for their assistance with data collection. This work was supported by a grant from the Australian Research Council (DP170104322, DP220103047). OML is supported by the Portuguese Foundation for Science and Technology and the Portuguese Ministry of Science, Technology and Higher Education, through national funds, within the scope of the Transitory Disposition of Decree No. 57/2016, of 29 August, amended by Law No. 57/2017 of 19 July (Ref.: SFRH/BPD/72710/2010).
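    As a rough illustration of the frequency-tagging logic above, the sketch below (Python with NumPy; the sampling rate, epoch length, and synthetic signal are assumptions for demonstration, not the study's analysis code) computes the amplitude spectrum of a simulated EEG epoch and reads out the bins at the tagging frequencies fV = 15 Hz and fA = 32 Hz and at the intermodulation frequencies fA ± fV (17 Hz and 47 Hz).

```python
import numpy as np

# Illustrative sketch (not the authors' pipeline): locate the tagged and
# intermodulation frequencies described in the abstract in an amplitude spectrum.
fs = 1000.0              # sampling rate in Hz (assumed)
f_v, f_a = 15.0, 32.0    # visual and auditory tagging frequencies from the abstract
im_freqs = [f_a - f_v, f_a + f_v]   # intermodulation products fA ± fV -> 17 Hz, 47 Hz

# Synthetic 10 s "EEG" epoch containing both tagged responses plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * f_v * t) + 0.5 * np.sin(2 * np.pi * f_a * t) + rng.standard_normal(t.size)

# With a 10 s window the frequency resolution is 0.1 Hz, so every frequency
# of interest falls exactly on an FFT bin.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) / t.size

for f in [f_v, f_a] + im_freqs:
    bin_idx = np.argmin(np.abs(freqs - f))
    print(f"{f:5.1f} Hz -> amplitude {amp[bin_idx]:.4f}")
```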

    Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information

    During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1440 Hz refresh rate). Integration ease was manipulated by auditory factors (clear/degraded speech) and visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (fvisual − fauditory = 7 Hz), specifically when integration was easiest (i.e., when speech was clear and accompanied by a congruent gesture). This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in the left inferior frontal gyrus and left temporal regions, areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof of principle for the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
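    The sketch below illustrates, with a toy signal model (the multiplicative interaction and all timing parameters are assumptions, not the authors' model), why a peak at the 7 Hz difference frequency is a signature of nonlinear integration: a purely linear sum of the 61 Hz and 68 Hz tagged signals contains no energy at 7 Hz, whereas a multiplicative (nonlinear) interaction does.

```python
import numpy as np

# Why a 7 Hz intermodulation peak signals nonlinear integration (illustrative only;
# frequencies taken from the abstract, signal model assumed).
fs, dur = 1440.0, 4.0            # 1440 Hz matches the projector refresh rate mentioned
t = np.arange(0, dur, 1 / fs)
f_aud, f_vis = 61.0, 68.0        # auditory and visual tagging frequencies

aud = np.sin(2 * np.pi * f_aud * t)
vis = np.sin(2 * np.pi * f_vis * t)

linear = aud + vis               # linear combination: no new frequencies appear
nonlinear = aud * vis            # multiplicative interaction: energy at 68-61=7 Hz and 68+61=129 Hz

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, sig in [("linear", linear), ("nonlinear", nonlinear)]:
    amp = np.abs(np.fft.rfft(sig)) / t.size
    idx = np.argmin(np.abs(freqs - 7.0))
    print(f"{name:9s}: amplitude at 7 Hz = {amp[idx]:.3f}")
```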

    Measuring nonlinear signal combination using EEG

    Relatively little is known about the processes, both linear and nonlinear, by which signals are combined beyond V1. By presenting two stimulus components simultaneously, flickering at different temporal frequencies (frequency tagging), while measuring steady-state visual evoked potentials, we can assess responses to the individual components, including direct measurements of their suppression of each other, and various nonlinear responses to their combination found at intermodulation frequencies. The result is a rich dataset of frequencies at which responses can be found. We presented pairs of sinusoidal gratings at different temporal frequencies, forming plaid patterns that were "coherent" (looking like a checkerboard) or "noncoherent" (looking like a pair of transparently overlaid gratings), and found clear intermodulation responses to compound stimuli, indicating nonlinear summation. This might have been attributed to cross-orientation suppression, except that the pattern of intermodulation responses differed for coherent and noncoherent patterns, whereas the effects of suppression (measured at the component frequencies) did not. A two-stage model of nonlinear summation involving conjunction detection with a logical AND gate described the data well, capturing the difference between coherent and noncoherent plaids over a wide array of possible response frequencies. Multistimulus frequency-tagged EEG in combination with computational modeling may be a very valuable tool in studying the conjunction of these signals. In the current study, the results suggest a second-order mechanism that responds selectively to coherent plaid patterns.
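    To make the AND-gate idea concrete, the sketch below (illustrative frequencies and a soft-minimum nonlinearity chosen for demonstration; this is not the fitted two-stage model from the study) compares a linear sum of two frequency-tagged components with a conjunction-like combination, and shows that only the latter produces responses at the intermodulation frequencies.

```python
import numpy as np

# Minimal sketch of how a conjunction-like "AND" nonlinearity generates
# intermodulation components (assumed signal model, not the authors' model code).
fs, dur = 1000.0, 8.0
t = np.arange(0, dur, 1 / fs)
f1, f2 = 5.0, 7.0                      # two tagging frequencies (illustrative values)

c1 = 0.5 * (1 + np.sin(2 * np.pi * f1 * t))   # non-negative "contrast responses"
c2 = 0.5 * (1 + np.sin(2 * np.pi * f2 * t))

linear_sum = c1 + c2                   # linear combination of the two components
and_gate = np.minimum(c1, c2)          # soft logical AND: responds only when both inputs are active

freqs = np.fft.rfftfreq(t.size, 1 / fs)
im = [f2 - f1, f1 + f2]                # first-order intermodulation frequencies: 2 Hz and 12 Hz
for name, resp in [("linear sum", linear_sum), ("AND gate", and_gate)]:
    amp = np.abs(np.fft.rfft(resp)) / t.size
    peaks = [amp[np.argmin(np.abs(freqs - f))] for f in im]
    print(f"{name:10s}: IM amplitudes at {im} Hz = {np.round(peaks, 4)}")
```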

    Steady state evoked potential (SSEP) responses in the primary and secondary somatosensory cortices of anesthetized cats: Nonlinearity characterized by harmonic and intermodulation frequencies

    When presented with an oscillatory sensory input at a particular frequency, F [Hz], neural systems respond at the corresponding frequency, f [Hz], and its multiples. When the input includes two frequencies (F1 and F2) and they are nonlinearly integrated in the system, responses emerge at intermodulation frequencies (i.e., n1*f1 + n2*f2 [Hz], where n1 and n2 are nonzero integers). Utilizing these properties, the steady state evoked potential (SSEP) paradigm allows us to characterize linear and nonlinear neural computation performed in cortical neurocircuitry. Here, we analyzed the steady state evoked local field potentials (LFPs) recorded from the primary (S1) and secondary (S2) somatosensory cortex of anesthetized cats (maintained with alfaxalone) while we presented slow (F1 = 23 Hz) and fast (F2 = 200 Hz) somatosensory vibration to the contralateral paw pads and digits. Over 9 experimental sessions, we recorded LFPs from N = 1620 and N = 1008 bipolar-referenced sites in S1 and S2 using electrode arrays. Power spectral analyses revealed strong responses at 1) the fundamental frequencies (f1, f2), 2) their harmonics, 3) the intermodulation frequencies, and 4) broadband frequencies (50-150 Hz). To compare the computational architecture in S1 and S2, we employed simple computational modeling. Our modeling results indicate that nonlinear computation is required to explain the SSEPs in S2 more than in S1. Combined with our current analysis of LFPs, our paradigm offers a rare opportunity to constrain the computational architecture of the hierarchical organization of S1 and S2 and to reveal how a large-scale SSEP can emerge from local neural population activities.
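    The sketch below (plain Python; the maximum order of 3 is an arbitrary choice for illustration) enumerates the harmonic and intermodulation frequencies n1*f1 + n2*f2 predicted for the two vibration frequencies used here, F1 = 23 Hz and F2 = 200 Hz, i.e. the frequency set one would inspect in the LFP power spectra.

```python
# Enumerate predicted harmonic and intermodulation frequencies for F1 = 23 Hz, F2 = 200 Hz,
# up to a chosen combination order (the order limit is an assumption for illustration).
f1, f2 = 23.0, 200.0
max_order = 3          # keep combinations with |n1| + |n2| <= max_order

responses = set()
for n1 in range(-max_order, max_order + 1):
    for n2 in range(-max_order, max_order + 1):
        if (n1 == 0 and n2 == 0) or abs(n1) + abs(n2) > max_order:
            continue
        f = abs(n1 * f1 + n2 * f2)     # negative combinations map onto positive frequencies
        if f > 0:
            kind = "harmonic" if n1 == 0 or n2 == 0 else "intermodulation"
            responses.add((f, kind))

for f, kind in sorted(responses):
    print(f"{f:7.1f} Hz  ({kind})")
```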

    The SSVEP tracks attention, not consciousness, during perceptual filling-in

    Research on the neural basis of conscious perception has almost exclusively shown that becoming aware of a stimulus leads to increased neural responses. By designing a novel form of perceptual filling-in (PFI) overlaid with a dynamic texture display, we frequency-tagged multiple disappearing targets as well as their surroundings. We show that in a PFI paradigm, the disappearance of a stimulus and its subjective invisibility are associated with increases in neural activity, as measured with steady-state visually evoked potentials (SSVEPs) in electroencephalography (EEG). We also find that this increase correlates with alpha-band activity, a well-established neural measure of attention. These findings cast doubt on the direct relationship previously reported between the strength of neural activity and conscious perception, at least when measured with current tools, such as the SSVEP. Instead, we conclude that SSVEP strength more closely measures changes in attention.

    EEG Frequency Tagging Reveals the Integration of Form and Motion Cues into the Perception of Group Movement

    The human brain has dedicated mechanisms for processing other people's movements. Previous research has revealed how these mechanisms contribute to perceiving the movements of individuals but has left open how we perceive groups of people moving together. Across three experiments, we test whether movement perception depends on the spatiotemporal relationships among the movements of multiple agents. In Experiment 1, we combine EEG frequency tagging with apparent human motion and show that posture and movement perception can be dissociated at harmonically related frequencies of stimulus presentation. We then show that movement but not posture processing is enhanced when observing multiple agents move in synchrony. Movement processing was strongest for fluently moving synchronous groups (Experiment 2) and was perturbed by inversion (Experiment 3). Our findings suggest that processing group movement relies on binding body postures into movements and individual movements into groups. Enhanced perceptual processing of movement synchrony may form the basis for higher-order social phenomena such as group alignment and its social consequences.
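    A minimal sketch of how posture and movement responses can separate onto harmonically related frequencies (the 6 Hz image rate and the toy response signals below are illustrative assumptions, not parameters from the study): a response that follows every image onset peaks at the image rate, whereas a response that also tracks the two-posture movement cycle adds a peak at half that rate.

```python
import numpy as np

# Posture vs movement responses at harmonically related frequencies (illustrative only).
fs, dur = 600.0, 10.0
t = np.arange(0, dur, 1 / fs)
f_image = 6.0                  # assumed rate at which individual postures are presented
f_cycle = f_image / 2          # a two-posture movement cycle then repeats at 3 Hz

posture_resp = np.cos(2 * np.pi * f_image * t)                         # follows every image onset
movement_resp = posture_resp + 0.5 * np.cos(2 * np.pi * f_cycle * t)   # also tracks the movement cycle

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, sig in [("posture-only", posture_resp), ("posture+movement", movement_resp)]:
    amp = np.abs(np.fft.rfft(sig)) / t.size
    a_cycle = amp[np.argmin(np.abs(freqs - f_cycle))]
    a_image = amp[np.argmin(np.abs(freqs - f_image))]
    print(f"{name:17s}: {f_cycle:.0f} Hz amplitude = {a_cycle:.3f}, {f_image:.0f} Hz amplitude = {a_image:.3f}")
```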

    Different rules for binocular combination of luminance flicker in cortical and subcortical pathways

    How does the human brain combine information across the eyes? It has been known for many years that cortical normalization mechanisms implement ‘ocularity invariance’: equalizing neural responses to spatial patterns presented either monocularly or binocularly. Here, we used a novel combination of electrophysiology, psychophysics, pupillometry, and computational modeling to ask whether this invariance also holds for flickering luminance stimuli with no spatial contrast. We find dramatic violations of ocularity invariance for these stimuli, both in the cortex and in the subcortical pathways that govern pupil diameter. Specifically, we find substantial binocular facilitation in both pathways, with the effect being strongest in the cortex. Near-linear binocular additivity (instead of ocularity invariance) was also found using a perceptual luminance matching task. Ocularity invariance is, therefore, not a ubiquitous feature of visual processing, and the brain appears to repurpose a generic normalization algorithm for different visual functions by adjusting the amount of interocular suppression.
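    As a sketch of the kind of ‘generic normalization algorithm’ referred to above (the functional form, exponents, and weights below are assumptions for illustration, not the fitted model from the study), the snippet varies the amount of interocular suppression w: with strong suppression, monocular and binocular responses are nearly equal (ocularity invariance), while weak suppression yields binocular facilitation of the sort reported here for luminance flicker.

```python
# Toy binocular normalization model in which interocular suppression (w) is a free
# parameter; form and parameter values are assumptions for demonstration only.
def binocular_response(left, right, w=1.0, m=1.3, s=1.0):
    """Summed normalized responses to left- and right-eye inputs; w scales interocular suppression."""
    resp_left = left ** m / (s + left + w * right)
    resp_right = right ** m / (s + right + w * left)
    return resp_left + resp_right

stim = 10.0
for w in (1.0, 0.2):
    mono = binocular_response(stim, 0.0, w=w)    # one eye stimulated
    bino = binocular_response(stim, stim, w=w)   # both eyes stimulated
    # Ratio near 1 -> ocularity invariance; ratio well above 1 -> binocular facilitation.
    print(f"w = {w:.1f}: binocular/monocular response ratio = {bino / mono:.2f}")
```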