
    Audio-visual synchrony and feature-selective attention co-amplify early visual processing

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
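
    As a rough illustration of how such steady-state responses might be quantified in the spectral domain, the sketch below averages epoched EEG over trials and reads out the amplitude at a tagged frequency via an FFT. The function names, sampling rate, and trial structure are illustrative assumptions, not details taken from the study; for clean frequency bins, the epoch length should span an integer number of modulation cycles.

```python
import numpy as np

def ssr_amplitude(epochs, sfreq, freq):
    """Amplitude of the steady-state response at a tagged frequency.

    epochs : array, shape (n_trials, n_channels, n_samples)
    sfreq  : sampling rate in Hz (assumed, e.g. 500.0)
    freq   : tagging frequency of interest (e.g. 3.14 or 3.63 Hz)
    """
    # Average over trials first so that only the time-locked
    # (stimulus-driven) part of the signal survives.
    evoked = epochs.mean(axis=0)                 # (n_channels, n_samples)
    n = evoked.shape[-1]
    spectrum = np.abs(np.fft.rfft(evoked, axis=-1)) / n * 2
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    bin_idx = np.argmin(np.abs(freqs - freq))    # nearest FFT bin
    return spectrum[:, bin_idx]                  # one value per channel

# Hypothetical condition contrast (names are placeholders):
# amp_att = ssr_amplitude(epochs_attended, 500.0, 3.14)
# amp_una = ssr_amplitude(epochs_unattended, 500.0, 3.14)
# gain = amp_att / amp_una
```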

    EEG-representational geometries and psychometric distortions in approximate numerical judgment

    When judging the average value of sample stimuli (e.g., numbers), people tend to either over- or underweight extreme sample values, depending on task context. In a context of overweighting, recent work has shown that extreme sample values were also overrepresented in neural signals, in terms of an anti-compressed geometry of number samples in multivariate electroencephalography (EEG) patterns. Here, we asked whether neural representational geometries may also reflect a relative underweighting of extreme values (i.e., compression), which has been observed behaviorally in a great variety of tasks. We used a simple experimental manipulation (instructions to average a single stream of samples or to compare dual streams) to induce compression or anti-compression in behavior when participants judged rapid number sequences. Model-based representational similarity analysis (RSA) replicated the previous finding of neural anti-compression in the dual-stream task but failed to provide evidence for neural compression in the single-stream task, despite the evidence for compression in behavior. Instead, the results indicated enhanced neural processing of extreme values in either task, regardless of whether extremes were over- or underweighted in subsequent behavioral choice. We further observed more general differences in the neural representation of the sample information between the two tasks. Together, our results indicate a mismatch between sample-level EEG geometries and behavior, which raises new questions about the origin of common psychometric distortions, such as diminishing sensitivity for larger values.
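
    A minimal sketch of the model-based RSA logic described above: candidate representational geometries for the number samples are expressed as model RDMs under a power-law transform (exponent below 1 for compression, above 1 for anti-compression) and correlated with a neural RDM. The specific transform, the exponents, and the source of the neural distances are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.stats import spearmanr

numbers = np.arange(1, 10)  # sample values 1..9 (assumed range)

def model_rdm(k):
    """Pairwise distances after a power-law transform v -> v**k.
    k < 1 compresses the upper range; k > 1 anti-compresses it."""
    v = numbers.astype(float) ** k
    return np.abs(v[:, None] - v[None, :])

compressed = model_rdm(0.5)
anticompressed = model_rdm(2.0)

def rsa_fit(neural_rdm, model):
    """Rank-correlate a 9 x 9 neural RDM (hypothetical, e.g. built from
    cross-validated distances between number-specific EEG patterns)
    with a model RDM, using the unique off-diagonal pairs only."""
    iu = np.triu_indices(len(numbers), k=1)
    rho, _ = spearmanr(neural_rdm[iu], model[iu])
    return rho
```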

    Audio-visual synchrony and spatial attention enhance processing of dynamic visual stimulation independently and in parallel: A frequency-tagging study

    The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. The Gabor patches further “pulsed” (i.e., showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed us to track the neural processing of the simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when the respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e., pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration.
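
    The audio-visual synchrony manipulation rests on frequency-modulating a pure tone at one stimulus's pulse rate. A minimal sketch of how such a tone could be synthesized is given below; the carrier frequency, modulation depth, and duration are placeholder values, not the parameters used in the study.

```python
import numpy as np

def fm_tone(carrier=440.0, mod_rate=3.14, mod_depth=40.0,
            duration=3.0, sfreq=44100):
    """Pure tone whose frequency is modulated at the pulse rate of one
    visual stimulus (all parameter values are illustrative).

    Instantaneous frequency: carrier + mod_depth * cos(2*pi*mod_rate*t);
    the phase is its time integral.
    """
    t = np.arange(int(duration * sfreq)) / sfreq
    phase = (2 * np.pi * carrier * t
             + (mod_depth / mod_rate) * np.sin(2 * np.pi * mod_rate * t))
    return np.sin(phase)
```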

    Abstract neural representations of language during sentence comprehension: Evidence from MEG and Behaviour


    Task relevance modulates the behavioural and neural effects of sensory predictions

    The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants' brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling.
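
    To make the model-based analysis concrete, here is a minimal sketch of how trial-wise predictions and prediction errors could be derived from an ideal observer. A simple beta-Bernoulli learner serves as a stand-in, since the abstract does not specify the paper's actual model; every detail below is an illustrative assumption.

```python
import numpy as np

def ideal_observer(outcomes, a0=1.0, b0=1.0):
    """Trial-wise predictions and prediction errors under a simple
    beta-Bernoulli ideal observer (a stand-in for the paper's model).

    outcomes : binary array, 1 if the target appeared as cued.
    Returns (predictions, prediction_errors), one value per trial.
    """
    a, b = a0, b0
    predictions, pes = [], []
    for o in outcomes:
        p = a / (a + b)              # prediction before the outcome
        predictions.append(p)
        pes.append(abs(o - p))       # unsigned prediction error
        a, b = a + o, b + (1 - o)    # Bayesian update of the beta prior
    return np.array(predictions), np.array(pes)

# These model time series could then serve as regressors for
# band-limited MEG power (beta/alpha for predictions, gamma for PEs).
```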

    Predictability is necessary for closed-loop visual feedback delay adaptation

    Rohde M, van Dam L, Ernst MO. Predictability is necessary for closed-loop visual feedback delay adaptation. Journal of Vision. 2014;14(3):4.
    When visual feedback is delayed during visuomotor tasks, as in some sluggish computer games, humans can modulate their behavior to compensate for the delay. However, opinions on the nature of this compensation diverge. Some studies suggest that humans adapt to feedback delays with lasting changes in motor behavior (aftereffects) and a recalibration of time perception. Other studies have shown little or no evidence for such semipermanent recalibration in the temporal domain. We hypothesize that predictability of the reference signal (the target to be tracked) is necessary for semipermanent delay adaptation. To test this hypothesis, we trained participants with a 200 ms visual feedback delay in a visually guided manual tracking task, varying the predictability of the reference signal between conditions but keeping reference motion and feedback delay constant. In Experiment 1, we focused on motor behavior. Only training in the predictable condition brought about all of the adaptive changes and aftereffects expected from delay adaptation. In Experiment 2, we used a synchronization task to investigate perceived simultaneity (perceptuomotor learning). Supporting the hypothesis, participants recalibrated subjective visuomotor simultaneity only when trained in the predictable condition. Such a shift in perceived simultaneity was also observed in Experiment 3, using an interval estimation task. These results show that delay adaptation in motor control can modulate the perceived temporal alignment of vision and kinesthetically sensed movement. The coadaptation of motor prediction and target prediction (reference extrapolation) seems necessary for such genuine delay adaptation. This offers an explanation for divergent results in the literature.
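
    The key manipulation, varying predictability while holding reference motion and delay constant, can be illustrated with a sketch like the one below: a single sinusoid serves as a predictable reference, and a phase-randomized sum of incommensurate sinusoids as an unpredictable one. The frequencies, duration, and 60 Hz frame rate are illustrative assumptions; only the 200 ms delay comes from the study.

```python
import numpy as np

def reference_signal(duration=30.0, sfreq=60.0, predictable=True, seed=0):
    """Target trajectory for a manual tracking task (illustrative values).

    Predictable: a single sinusoid the participant can extrapolate.
    Unpredictable: a sum of incommensurate sinusoids with random phases,
    similar in overall motion but hard to anticipate.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration * sfreq)) / sfreq
    if predictable:
        return np.sin(2 * np.pi * 0.3 * t)
    freqs = np.array([0.17, 0.29, 0.41, 0.53])
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    x = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
    return x / np.abs(x).max()            # normalize to the same range

# A 200 ms feedback delay at a 60 Hz frame rate is 12 frames, e.g.:
# delayed = np.roll(cursor_position, int(0.2 * 60))  # wrap-around ignored
```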

    Emotion and imitation in early infant-parent interaction: a longitudinal and cross-cultural study

    Following a brief introduction to the diverse views on the motives for imitation, a review of the literature is presented covering the following topics: early theories and observations concerning the origin and development of human imitation in infancy; recent theoretical models that have emerged from experimental studies of infant imitation and from naturalistic studies of imitation in infant-mother communication; and traditional and recent theoretical and empirical approaches to imitative phenomena in infant-father interaction. This review leads to the following conclusions:
    a) The failure of attempts to confirm certain ideas, hypotheses and suggestions built into the theories and strategies of earlier studies does not detract from their great contribution, which set the foundations upon which recent research is carried forward.
    b) Despite the different theoretical frameworks and the lack of a consensus as to the best method for investigating early imitative phenomena in experimental settings, neonatal imitation is now accepted as a fact.
    c) Imitative phenomena found in empirical studies focusing on infant-father interaction, as well as the relevant theoretical interpretations, are characterised by a contradiction: theory predicts bidirectional regulations, but studies employ an empirical approach that favours the view that regulation is only on the parental side.
    In this investigation, observations were made of thirty infants, fifteen from Greece and fifteen from Scotland. All were seen every 15 days interacting with their mothers and with their fathers at home, from the 8th to the 24th week of life. A total of 540 home recordings were made. Units of interaction that contained imitative episodes were subjected to microanalysis with the aid of specialised software, in a multimedia system that provides the capability for detection, recording, timing and signal analysis of the variables under consideration to an accuracy of 1/25th of a second.
    The main findings may be summarised as follows:
    a) Imitation was evident as early as the 8th week, irrespective of the country, the parent or the infant's sex.
    b) Cultural differences, reflecting the predominance of non-vocal and vocal imitative expressive behaviour in the two countries, were found.
    c) The developmental course of early imitative expressive behaviours was typically non-linear.
    d) Turn-taking imitative exchanges predominated over co-actions.
    e) Parents were found to imitate their infants more than vice versa.
    f) Regulation of emotion, either in the sense of emotional matching or of emotional attunement, proved to be the underlying motivating principle for both parental and infant imitations.
    The implications of these findings for understanding the universal intersubjective nature of early imitation in infant-father and infant-mother interactions are discussed.

    Selective attention and speech processing in the cortex

    In noisy and complex environments, human listeners must segregate the mixture of sound sources arriving at their ears and selectively attend to a single source, thereby solving a computationally difficult problem known as the cocktail party problem. However, the neural mechanisms underlying these computations are still largely a mystery. Oscillatory synchronization of neuronal activity between cortical areas is thought to play a crucial role in facilitating information transmission between spatially separated populations of neurons, enabling the formation of functional networks. In this thesis, we seek to analyze and model the functional neuronal networks underlying attention to speech stimuli, and find that the Frontal Eye Fields play a central 'hub' role in the auditory spatial attention network in a cocktail party experiment. We use magnetoencephalography (MEG) to measure neural signals with high temporal precision while sampling from the whole cortex. However, several methodological issues arise when undertaking functional connectivity analysis with MEG data. Specifically, volume conduction of electrical and magnetic fields in the brain complicates interpretation of results. We compare several approaches through simulations and analyze the trade-offs among various measures of neural phase-locking in the presence of volume conduction. We use these insights to study functional networks in a cocktail party experiment. We then construct a linear dynamical system model of neural responses to ongoing speech. Using this model, we are able to correctly predict which of two speakers is being attended by a listener. We then apply this model to data from a task where people attended to stories with synchronous and scrambled videos of the speakers' faces, to explore how the presence of visual information modifies the underlying neuronal mechanisms of speech perception. This model allows us to probe neural processes as subjects listen to long stimuli, without the need for a trial-based experimental design. We model the neural activity with latent states, and model the neural noise spectrum and functional connectivity with multivariate autoregressive dynamics, along with impulse responses for external stimulus processing. We also develop a new regularized Expectation-Maximization (EM) algorithm to fit this model to electroencephalography (EEG) data.
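
    A minimal sketch of the kind of linear dynamical system described above: latent states evolve with autoregressive dynamics, a stimulus regressor (e.g., a speech envelope) drives the states through an input term, and sensors observe a noisy linear mixture. A Kalman filter pass, the core of the E-step in an EM fit, is included. Dimensions, parameters, and the noise model are all illustrative, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent states, sensors, time samples.
n_x, n_y, T = 4, 8, 1000
A = 0.95 * np.eye(n_x)            # latent (autoregressive) dynamics
B = rng.normal(size=(n_x, 1))     # input mapping for the stimulus
C = rng.normal(size=(n_y, n_x))   # latent states -> sensor channels
Q = 0.1 * np.eye(n_x)             # state noise covariance
R = 0.5 * np.eye(n_y)             # sensor noise covariance

s = rng.normal(size=(T, 1))       # stand-in for a speech envelope
x = np.zeros(n_x)
Y = np.empty((T, n_y))
for t in range(T):                # simulate the generative model
    x = A @ x + B @ s[t] + rng.multivariate_normal(np.zeros(n_x), Q)
    Y[t] = C @ x + rng.multivariate_normal(np.zeros(n_y), R)

# Kalman filter pass: the E-step of an EM fit would run this (plus a
# smoother) to infer latent states given Y and the current parameters.
x_hat, P = np.zeros(n_x), np.eye(n_x)
for t in range(T):
    x_pred = A @ x_hat + B @ s[t]                  # predict state
    P_pred = A @ P @ A.T + Q
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)  # gain
    x_hat = x_pred + K @ (Y[t] - C @ x_pred)       # update via innovation
    P = (np.eye(n_x) - K @ C) @ P_pred
```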