58 research outputs found

    Speech Cues Contribute to Audiovisual Spatial Integration

    Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena, such as McGurk interference, persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between the ventral ‘what’ and dorsal ‘where’ pathways.

    Spatially uninformative sounds increase sensitivity for visual motion change

    It has recently been shown that spatially uninformative sounds can cause a visual stimulus to pop out from an array of similar distractor stimuli when the sound is presented in temporal proximity to a feature change in that visual stimulus. Until now, this effect has predominantly been demonstrated using stationary stimuli. Here, we extended these results by showing that auditory stimuli can also improve the sensitivity of visual motion-change detection. To accomplish this, we presented moving visual stimuli (small dots) on a computer screen. At a random moment during a trial, one of these stimuli could abruptly move in an orthogonal direction. Participants’ task was to indicate with a button press whether such an abrupt motion change had occurred. If a sound (a short 1,000 Hz tone pip) co-occurred with the abrupt motion change, participants detected the change more frequently than when no sound was present. Using measures derived from signal detection theory, we demonstrate that this effect on accuracy was due to increased sensitivity rather than to a change in response bias.
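    The sensitivity-versus-bias distinction mentioned above comes from standard signal detection theory. The short Python sketch below shows how d' (sensitivity) and the criterion c (response bias) are typically derived from hit and false-alarm rates in a yes/no change-detection task; the counts are hypothetical and are not taken from the study.

        # Signal-detection measures for a yes/no change-detection task.
        # All counts are hypothetical, for illustration only.
        from scipy.stats import norm

        def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
            # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)               # sensitivity
            criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))    # response bias
            return d_prime, criterion

        # More hits and fewer false alarms raise d' while the criterion stays
        # roughly constant: increased sensitivity, not a shift in response strategy.
        print(dprime_and_criterion(70, 30, 20, 80))   # e.g. sound-absent trials
        print(dprime_and_criterion(82, 18, 12, 88))   # e.g. sound-present trials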

    The COGs (context, object, and goals) in multisensory processing

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes that influence sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. Because the two topics have traditionally been studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and cognitive levels. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and the categorical attributes of naturalistic objects (e.g., tools, animals). In this review, we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.

    Emotional Voice and Emotional Body Postures Influence Each Other Independently of Visual Awareness

    Multisensory integration may occur independently of visual attention, as previously shown with compound face-voice stimuli. In two experiments, we investigated whether the perception of whole-body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment, participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset asynchrony between target and mask varied from −50 to +133 ms. The results show that the congruency between the emotion in the voice and the bodily expression influences audiovisual perception independently of the visibility of the stimuli. In the second experiment, participants categorized emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside of, and independently of, visual awareness.

    Cross-Modal Prediction in Speech Perception

    Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this gain from audiovisual integration is on-line prediction. In this study, we address whether preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information transfer across sensory modalities. In the experiments presented here, a speech fragment (the context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment on each trial. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single-modality context and the subsequent audiovisual target fragment could be continuous in one modality only, in both (the context in one modality continues into both modalities in the target fragment), or in neither modality (i.e., discontinuous). The results showed quicker audiovisual matching responses when the context was continuous with the target within either the visual or the auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous with the auditory channel in the target, whereas auditory-to-visual cross-modal continuity provided no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.

    Audio-Visual Speech Timing Sensitivity Is Enhanced in Cluttered Conditions

    Events encoded in separate sensory modalities, such as audition and vision, can seem synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech with that for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and consequently to seem synchronous. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified may be important in daily life, for example when we attempt to follow a conversation in a crowded room.

    Retrospective cohort study: Risk of gastrointestinal cancer in a symptomatic cohort after a complete colonoscopy: Role of faecal immunochemical test

    BACKGROUND: The faecal immunochemical test (FIT) has been recommended for assessing symptomatic patients for colorectal cancer (CRC) detection. Nevertheless, some conditions could theoretically favour blood originating in proximal areas of the gastrointestinal tract passing through the colon unmetabolized, so a positive FIT result could be related to other gastrointestinal cancers (GIC). AIM: To assess the risk of GIC detection and related death in FIT-positive symptomatic patients (threshold 10 µg Hb/g faeces) without CRC. METHODS: Post hoc cohort analysis performed within two prospective diagnostic test studies evaluating the accuracy of different FIT analytical systems for detecting CRC and significant colonic lesions. Ambulatory patients with gastrointestinal symptoms, referred consecutively for colonoscopy from primary and secondary healthcare, completed a quantitative FIT before undergoing a complete colonoscopy. Patients without CRC were divided into two groups (positive and negative FIT) using a threshold of 10 µg Hb/g faeces, and follow-up data were retrieved from the electronic medical records of the public hospitals involved in the research. We determined the cumulative risk of GIC, CRC and upper GIC. Hazard ratios (HR) were calculated, adjusted for age, sex and the presence of a significant colonic lesion. RESULTS: We included 2709 patients without CRC and with a complete baseline colonoscopy, 730 (26.9%) of whom had a FIT ≥ 10 µg Hb/g. During a mean follow-up of 45.5 ± 20.0 months, a GIC was detected in 57 (2.1%) patients: an upper GIC in 35 (1.3%) and a CRC in 14 (0.5%). Thirty-six patients (1.3%) died of GIC: 22 (0.8%) of an upper GIC and 9 (0.3%) of CRC. FIT-positive subjects showed a higher CRC risk (HR 3.8, 95%CI: 1.2-11.9), with no differences in GIC (HR 1.5, 95%CI: 0.8-2.7) or upper GIC risk (HR 1.0, 95%CI: 0.5-2.2). Patients with a positive FIT had an increased risk only of CRC-related death (HR 10.8, 95%CI: 2.1-57.1) and GIC-related death (HR 2.2, 95%CI: 1.1-4.3), with no difference in upper GIC-related death (HR 1.4, 95%CI: 0.6-3.3). An upper GIC was detected in 22 (0.8%) patients during the first year; two variables were independently associated with this finding: anaemia (OR 5.6, 95%CI: 2.2-13.9) and age ≥ 70 years (OR 2.7, 95%CI: 1.1-7.0). CONCLUSION: Symptomatic patients without CRC have a moderately increased risk of upper GIC, regardless of the FIT result. Patients with a positive FIT have an increased risk of post-colonoscopy CRC.
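    The adjusted hazard ratios quoted above come from a time-to-event analysis. The Python sketch below shows the general form of a Cox proportional hazards model adjusting a FIT-positivity effect for age, sex and a significant colonic lesion; the data are simulated, and the lifelines library, column names and effect sizes are assumptions made for illustration, not details taken from the study.

        # Illustrative Cox proportional hazards model of the kind summarised by
        # the abstract's adjusted HRs. All data here are simulated.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "age": rng.integers(40, 85, n),
            "male": rng.integers(0, 2, n),
            "significant_lesion": rng.integers(0, 2, n),
            "fit_positive": rng.integers(0, 2, n),   # hypothetical FIT >= 10 ug Hb/g flag
        })
        # Simulate follow-up with a higher event hazard for FIT-positive patients,
        # censoring everyone still event-free at 60 months.
        latent_time = rng.exponential(120, n) / (1 + 2 * df["fit_positive"])
        df["followup_months"] = np.minimum(latent_time, 60)
        df["crc_event"] = (latent_time < 60).astype(int)

        cph = CoxPHFitter()
        cph.fit(df, duration_col="followup_months", event_col="crc_event")
        # The exp(coef) column is the adjusted hazard ratio for each covariate.
        print(cph.summary)

    In this sketch, the exp(coef) value for fit_positive plays the role of the reported HR for post-colonoscopy CRC, although here it simply recovers the simulated effect.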

    X chromosome inactivation does not necessarily determine the severity of the phenotype in Rett syndrome patients

    Rett syndrome (RTT) is a severe neurological disorder usually caused by mutations in the MECP2 gene. Since the MECP2 gene is located on the X chromosome, X chromosome inactivation (XCI) could play a role in the wide phenotypic variation among RTT patients; however, classical methylation-based protocols to evaluate XCI could not determine whether the preferentially inactivated X chromosome carried the mutant or the wild-type allele. We therefore developed an allele-specific methylation-based assay to evaluate methylation at the loci of several recurrent MECP2 mutations. We analyzed the XCI patterns in the blood of 174 RTT patients but did not find a clear correlation between XCI and the clinical presentation. We also compared XCI in blood and brain cortex samples from two patients and found differences between the XCI patterns in these tissues. However, because RTT is mainly a neurological disease, establishing a correlation between XCI in blood and the clinical presentation of the patients is difficult. Furthermore, we analyzed MECP2 transcript levels and found that they differed from the levels expected on the basis of XCI. Many factors other than XCI could affect the RTT phenotype, and in combination these could influence the clinical presentation of RTT patients to a greater extent than slight variations in the XCI pattern.

    Semantic congruency and the Colavita visual dominance effect.

    No full text
    Participants presented with auditory, visual, or bimodal audiovisual stimuli in a speeded discrimination task fail to respond to the auditory component of bimodal targets significantly more often than to the visual component, a phenomenon known as the Colavita visual dominance effect. Given that spatial and temporal factors have recently been shown to modulate the Colavita effect, the aim of the present study was to investigate whether semantic congruency also modulates it. In the three experiments reported here, participants were presented with a version of the Colavita task in which the stimulus congruency between the auditory and visual components of the bimodal targets was manipulated: the auditory and visual stimuli could refer to the same or to different objects (Experiments 1 and 2) or audiovisual speech events (Experiment 3). Surprisingly, semantic/stimulus congruency had no effect on the magnitude of the Colavita effect in any of the experiments, although it did affect certain other aspects of participants' performance. This finding contrasts with the results of other recent studies showing that semantic/stimulus congruency can affect certain multisensory interactions.

    Assessing the role of attention in the audiovisual integration of speech

    No full text
    Currently, one of the most controversial topics in the study of multisensory integration in humans (and in its implementation in the development of new technologies for human communication systems) concerns the question of whether or not attention is needed during (or can modulate) the integration of sensory signals presented in different sensory modalities. Here, we review the evidence on this question, focusing specifically on the integration of auditory and visual information during the perception of speech. Contrary to the mainstream view that has prevailed for the last 30 years or so, recent studies have started to reveal that attentional resources are, in fact, recruited during audiovisual multisensory integration, at least under certain conditions. Finally, considering all of the available evidence, we discuss the extent to which audiovisual speech perception should be considered a 'special' case of audiovisual, and more generally of multisensory, integration.