Hearing faces: how the infant brain matches the face it sees with the speech it hears
Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In a first experiment, we determined that auditory–visual integration occurs during the early stages of perception, as it does in adults. The mismatch response was similar in timing and in topography whether the preceding vowels were presented visually or aurally. In the second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domains. We observed a mismatch response for both types of change at similar latencies. Their topographies were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and the right hemisphere, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore the complexity and structure of the human cortical organization that sustains communication from the first weeks of life onward.
Error Signals from the Brain: 7th Mismatch Negativity Conference
The 7th Mismatch Negativity Conference presents the state of the art in methods, theory, and application (basic and clinical research) of the MMN and related error signals of the brain. Moreover, there will be two pre-conference workshops: one on the design of MMN studies and the analysis and interpretation of MMN data, and one on the visual MMN (with 20 presentations). There will be more than 40 presentations on hot topics in MMN research grouped into thirteen symposia, and about 130 poster presentations. Keynote lectures by Kimmo Alho, Angela D. Friederici, and Israel Nelken will round off the program by covering topics related to and beyond the MMN.
Do audio-visual motion cues promote segregation of auditory streams?
An audio-visual experiment using moving sound sources was designed to investigate whether the analysis of auditory scenes is modulated by the synchronous presentation of visual information. Listeners were presented with an alternating sequence of two pure tones delivered by two separate sound sources. In different conditions, the two sound sources were either stationary or moving on random trajectories around the listener. Both the sounds and the movement trajectories were derived from recordings in which two humans were moving with loudspeakers attached to their heads. Visualized movement trajectories, modeled by a computer animation, were presented together with the sounds. In the main experiment, behavioral reports on sound organization were collected from young healthy volunteers. The proportion and stability of the different sound organizations were compared between the conditions in which the visualized trajectories matched the movement of the sound sources and those in which the two were independent of each other. The results corroborate earlier findings that separation of sound sources in space promotes segregation. However, no additional effect of auditory movement per se on the perceptual organization of sounds was obtained. Surprisingly, the presentation of movement-congruent visual cues did not strengthen the effects of spatial separation on segregating auditory streams. Our findings are consistent with the view that bistability in the auditory modality can occur independently of other modalities.
The nature of novel word representations: computer mouse tracking shows evidence of immediate lexical engagement effects in adults
Simplistically, words are the mental bundling of a form and a referent. However, words also dynamically interact with one another in the cognitive system, and have other so-called ‘lexical properties’. For example, the word ‘dog’ will cue recognition of ‘dock’ by shared phonology, and ‘cat’ by shared semantics. Researchers have suggested that such lexical engagement between words emerges slowly, and with sleep. However, newer research suggests that this is not the case. Herein, seven experiments investigate this claim.

Fast mapping (FM), a developmental word learning procedure, has been reported to promote lexical engagement before sleep in adults. Experiment 1 altered the task parameters and failed to replicate this finding. Experiment 2 attempted a methodological replication – again, no effect was found. It is concluded that the effect reported is not easily replicable.

Other findings of pre-sleep lexical engagement were then considered using a novel methodology – computer mouse tracking. Experiments 3 and 4 developed optimal mouse tracking procedures and protocols for studying lexical engagement. Experiment 5 then applied this methodology to novel word learning, and found clear evidence of immediate lexical engagement. Experiment 6 provided evidence that participants were binding the word form to the referent in these pre-sleep lexical representations. Experiment 7 sought to strengthen this finding, but was postponed due to the COVID-19 pandemic.

The results are discussed in the context of the distributed cohort model of speech perception, a complementary learning systems account of word learning, and differing abstractionist and episodic accounts of the lexicon. It is concluded that the results may be most clearly explained by an episodic lexicon, although there is a need to develop hybrid models, factoring in consolidation and abstraction for the efficient storage of representations in the long term.
Time Distortions in Mind
Time Distortions in Mind brings together current research on temporal processing in clinical populations to elucidate the interdependence between perturbations in timing and disturbances in the mind and brain. It serves the student and the scientist, and as a stepping-stone for further research. Readership: an excellent reference for the student and the scientist interested in aspects of temporal processing and abnormal psychology.
Fixing fluency: Neurocognitive assessment of a dysfluent reading intervention
The ability to read is essential to attain society’s literacy demands. Unfortunately, a significant percentage of the population experiences major difficulties in mastering reading and spelling skills. Individuals diagnosed with developmental dyslexia are at severe risk for adverse academic, economic, and psychosocial consequences, thus requiring clinical intervention. To date, there is no effective remediation for the lack of reading fluency, which remains the most persistent symptom of dyslexia. This thesis aims at identifying factors involved in the failure to develop a functional reading network, as well as factors of treatment success in addressing the notorious ‘fluency barrier’ in dyslexia. The present work combines a theoretical framework of dyslexia based on the multisensory integration deficit with recent advances in our knowledge of the brain networks specialized for reading. This thesis uses a longitudinal design including both behavioral and neurophysiological measures in dyslexic children in the 3rd grade of school. Between measurements, we provide an intervention aimed at improving reading fluency by training automation of letter-speech sound mappings. The studies presented in this thesis contribute to our understanding of dyslexics’ deficits and their remediation.
Investigating the role of Bayesian inference in duration perception
The brain generates predictions about the world based on our prior experiences. Such phenomena have been formally quantified through the framework of Bayesian perceptual inference. The popularity of the Bayesian framework as a theory of perception has increased greatly over the years, but there are still many questions that need to be addressed before we can ascertain whether perception can be classified as truly Bayesian. In this thesis, I investigate whether time perception follows the principles of Bayesian models of perception. The main questions I focused on are how the variability of prior expectations and individual differences in the ability to perceive durations accurately influence temporal estimation. Bayesian models suggest that the magnitude of biases towards the prior would increase if the variance of the prior decreases, but to date, this prediction has not been adequately investigated. Similarly, the theory also suggests that sensory precision, that is, an observer’s ability to detect small changes in stimulus magnitude, should also affect perceptual biases, with greater sensory precision resulting in a weaker bias towards the prior. In addition, I was also interested in investigating what brain processes give rise to the perceptual biases that observers experience in magnitude estimation tasks. To do this, across different experiments, I used EEG to investigate whether the brain tracks observers’ subjective experience of duration, and eye-tracking to investigate the previously proposed role of dopamine in biasing duration estimation. Finally, I also investigated to what extent prior expectations and time perception, in general, are influenced by conscious awareness. Overall, the experiments presented in this thesis aim to further our understanding of how the brain constructs our perception of time and whether Bayesian frameworks constitute a useful tool for understanding perception in general.
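The two Bayesian predictions mentioned in the abstract above (a narrower prior pulls estimates more strongly toward it; higher sensory precision weakens that pull) follow directly from precision weighting in the Gaussian case. A minimal illustrative sketch, with made-up numbers that are not taken from the thesis:

```python
def posterior_mean(x, sigma_s, mu_p, sigma_p):
    """Posterior mean for a Gaussian likelihood N(x, sigma_s^2)
    combined with a Gaussian prior N(mu_p, sigma_p^2):
    a precision-weighted average of sensory estimate and prior."""
    w_s = 1.0 / sigma_s ** 2   # sensory precision
    w_p = 1.0 / sigma_p ** 2   # prior precision
    return (w_s * x + w_p * mu_p) / (w_s + w_p)

# Hypothetical duration trial: 900 ms stimulus, prior centered on 600 ms.
x, mu_p = 900.0, 600.0

wide    = posterior_mean(x, sigma_s=100.0, mu_p=mu_p, sigma_p=200.0)
narrow  = posterior_mean(x, sigma_s=100.0, mu_p=mu_p, sigma_p=50.0)
precise = posterior_mean(x, sigma_s=30.0,  mu_p=mu_p, sigma_p=50.0)

# A narrower prior (smaller sigma_p) biases the estimate more toward mu_p;
# greater sensory precision (smaller sigma_s) keeps the estimate nearer x.
print(wide, narrow, precise)
```

With these numbers, the narrow-prior estimate lands closer to the prior mean than the wide-prior estimate, while the high-precision estimate stays closer to the true stimulus than the low-precision one, matching the qualitative predictions tested in the thesis.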