6 research outputs found

    Top-down and bottom-up modulation of language related areas – An fMRI Study

    BACKGROUND: A major problem for cognitive neuroscience is to describe the interaction between stimulus-driven and task-driven neural modulation. We used fMRI to investigate this interaction in the human brain. Ten male subjects performed a passive listening task and a semantic categorization task in a factorial design. In both tasks, words were presented auditorily at three different rates. RESULTS: We found: (i) as word presentation rate increased, hemodynamic responses increased bilaterally in the superior temporal gyrus, including Heschl's gyrus (HG), the planum temporale (PT), and the planum polare (PP); (ii) compared to passive listening, semantic categorization produced increased bilateral activations in the ventral inferior frontal gyrus (IFG) and middle frontal gyrus (MFG); (iii) hemodynamic responses in the left dorsal IFG increased linearly with increasing word presentation rate only during the semantic categorization task; (iv) in the semantic task, hemodynamic responses in the insula decreased bilaterally with increasing word presentation rate; and (v) in parts of HG, the hemodynamic response increased more strongly with increasing word presentation rate during passive listening. CONCLUSION: The observed "rate effect" in primary and secondary auditory cortex accords with previous findings and suggests that these areas are driven by low-level stimulus attributes. The bilateral effect of semantic categorization also accords with previous studies and underscores the role of these areas in semantic operations. The interaction between semantic categorization and word presentation rate in the left IFG indicates that this area has linguistic functions not present in the right IFG. Finally, we speculate that the interaction between semantic categorization and word presentation rate in HG and the insula might reflect an inhibition of the transfer of unnecessary information from temporal to frontal regions of the brain.
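
The "rate effect" in finding (iii) amounts to a linear trend of the hemodynamic response over presentation rate that is present in one task but not the other. A minimal sketch of how such a slope comparison works, using made-up illustrative numbers (NOT the study's data), assuming per-condition BOLD estimates are already available:

```python
import numpy as np

# Hypothetical per-condition mean BOLD estimates (arbitrary units) at
# three word presentation rates; the values are illustrative, not data
# from the study.
rates = np.array([0.5, 1.0, 2.0])          # words per second (assumed)
bold_semantic = np.array([0.8, 1.5, 2.9])  # e.g. left dorsal IFG, semantic task
bold_passive = np.array([0.9, 1.0, 1.1])   # same region, passive listening

def linear_slope(x, y):
    """Least-squares slope of y on x (the linear 'rate effect')."""
    x_c = x - x.mean()
    return float(np.dot(x_c, y - y.mean()) / np.dot(x_c, x_c))

# A rate-by-task interaction shows up as a steeper slope in one task
# than in the other for the same region.
slope_sem = linear_slope(rates, bold_semantic)
slope_pas = linear_slope(rates, bold_passive)
print(f"slope (semantic): {slope_sem:.2f}")
print(f"slope (passive):  {slope_pas:.2f}")
```

In an actual fMRI analysis this contrast would be fitted within a general linear model with a parametric rate regressor per task; the sketch only shows the slope logic behind the interaction claim.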

    Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments

    Predicting motion is essential for many everyday activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection vs. extrapolation) and stimulus features (visual vs. audiovisual, and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white-noise sound (congruent or incongruent with the visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory, and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion, and participants judged when the ball would reach a specified position at one of two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using a stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and from concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation in the detection task. In contrast, for the more complex extrapolation task, group-mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited at either the near or the far extrapolation distance but were impaired at the other. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate audiovisual motion prediction and need to be considered in future research.
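
The split-half cross-validation mentioned above checks whether an individual participant's auditory benefit is stable rather than noise: estimate the effect on one half of the trials and test whether its direction replicates on the other half. A minimal sketch of that logic, using simulated timing errors (NOT the study's data) and a simplified same-sign criterion that is an assumption of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical absolute timing errors (seconds): rows = participants,
# columns = trials. Purely simulated, not the study's data.
n_participants, n_trials = 12, 40
errors_av = rng.normal(0.30, 0.05, (n_participants, n_trials))  # audiovisual
errors_v = rng.normal(0.30, 0.05, (n_participants, n_trials))   # visual only

def split_half_consistency(av, v):
    """Estimate each participant's auditory benefit on odd trials and
    check whether its sign replicates on even trials.

    Returns the fraction of participants whose benefit direction
    (visual error minus audiovisual error) agrees across both halves.
    """
    benefit_odd = v[:, 1::2].mean(axis=1) - av[:, 1::2].mean(axis=1)
    benefit_even = v[:, ::2].mean(axis=1) - av[:, ::2].mean(axis=1)
    # Same sign in both halves -> the individual effect is stable.
    return float(np.mean(np.sign(benefit_odd) == np.sign(benefit_even)))

consistency = split_half_consistency(errors_av, errors_v)
print(f"split-half consistency: {consistency:.2f}")
```

With the simulated null data above, consistency hovers near chance; stable interindividual differences of the kind the abstract reports would push it toward 1.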

    Data_Sheet_1.pdf


    Possible anatomical pathways for short-latency multisensory integration processes in primary sensory cortices
