    Effect of Audiovisual Training on Monaural Spatial Hearing in Horizontal Plane

    The article aims to test the hypothesis that audiovisual integration can improve spatial hearing in monaural conditions, when interaural difference cues are not available. We trained one group of subjects on an audiovisual task, where a flash was presented in parallel with the sound, and another group on an auditory task, where only sound from different spatial locations was presented. To check whether the observed audiovisual effect was similar to feedback, a third group was trained using a visual feedback paradigm. Training sessions were administered once per day for 5 days. The performance level in each group was compared for auditory-only stimulation on the first and the last day of practice. Improvement after audiovisual training was several times greater than after auditory practice. The group trained with visual feedback showed a different training effect, with smaller improvement than the audiovisual group. We conclude that cross-modal facilitation is highly important for improving spatial hearing in monaural conditions and may be applied to the rehabilitation of patients with unilateral deafness and after unilateral cochlear implantation.

    Multisensory gain within and across hemispaces in simple and choice reaction time paradigms

    Recent results on the nature and limits of multisensory enhancement are inconsistent when stimuli are presented across spatial regions. We presented visual, tactile, and visuotactile stimuli to participants in two speeded-response tasks. Each unisensory stimulus was presented to either the left or the right hemispace, and multisensory stimuli were presented either aligned (e.g., visual right/tactile right) or misaligned (e.g., visual right/tactile left). The first task was a simple reaction time (SRT) paradigm in which participants responded to all stimuli irrespective of spatial position. Results showed that multisensory gain and coactivation were the same for spatially aligned and misaligned visuotactile stimulation. In the second task, a choice reaction time (CRT) paradigm in which participants responded to right-sided stimuli only, misaligned stimuli yielded slower reaction times. No difference in multisensory gain was found between the SRT and CRT tasks for aligned stimulation. Overall, the results suggest that when spatial information is task-irrelevant, multisensory integration of spatially aligned and misaligned stimuli is equivalent; manipulating task requirements, however, can alter this effect.
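
    The multisensory gain mentioned in this abstract is commonly quantified as the relative speedup of the multisensory condition over the fastest unisensory condition. A minimal sketch of that computation in Python, using hypothetical reaction times; the values and function name are illustrative, not data or code from the study:

        # Multisensory response enhancement: percentage speedup of the
        # multisensory condition relative to the fastest unisensory condition.
        # The RT values below are hypothetical, not data from the study.
        def multisensory_gain(rt_visual, rt_tactile, rt_visuotactile):
            fastest_unisensory = min(rt_visual, rt_tactile)
            return 100.0 * (fastest_unisensory - rt_visuotactile) / fastest_unisensory

        # Example: 310 ms visual, 295 ms tactile, 268 ms visuotactile
        print(multisensory_gain(310.0, 295.0, 268.0))  # ~9.2% gain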

    Impact of the spatial congruence of redundant targets on within-modal and cross-modal integration

    Although the topic of sensory integration has attracted increasing interest, the differing behavioral outcomes of combining unisensory versus multisensory inputs have surprisingly been only scarcely investigated. In the present experiment, observers were required to respond as fast as possible to (1) lateralized visual or tactile targets presented alone, (2) double stimulation within the same modality, or (3) double stimulation across modalities. Each combination was delivered either within the same hemispace (spatially aligned) or in different hemispaces (spatially misaligned). Results show that the redundancy gains (RG) obtained in the cross-modal conditions were far greater than those obtained from combinations of two visual or two tactile targets. Consistently, we observed that the reaction time distributions of cross-modal targets, but not those of within-modal targets, surpass the predicted reaction time distribution based on the summed probability distributions of each constituent stimulus presented alone. Moreover, we found that the spatial alignment of the targets did not influence the RG obtained in the cross-modal conditions, whereas within-modal stimuli produced a greater RG when the targets were delivered in separate hemispaces. These results suggest that within-modal and cross-modal integration are distinguishable not only by the amount of facilitation they produce, but also by the spatial configuration under which this facilitation occurs. Our study strongly supports the notion that estimates of the same event that are more independent produce enhanced integrative gains.
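
    The distributional comparison described in this abstract is the standard race model test (Miller, 1982): if a race between independent channels sufficed, the cumulative RT distribution for redundant targets could not exceed the sum of the unisensory cumulative distributions, F_vt(t) <= F_v(t) + F_t(t). A minimal sketch of that check, assuming arrays of single-trial RTs per condition; the arrays and function names are illustrative, not the authors' exact analysis pipeline:

        import numpy as np

        def ecdf(rts, t):
            # Empirical cumulative distribution: P(RT <= t) at each time point t.
            rts = np.asarray(rts, dtype=float)
            return np.mean(rts[:, None] <= t, axis=0)

        def race_model_violations(rt_v, rt_t, rt_vt, n_points=50):
            # Return the time points where the redundant-target CDF exceeds
            # Miller's bound, i.e. F_vt(t) > min(1, F_v(t) + F_t(t)).
            all_rts = np.concatenate([rt_v, rt_t, rt_vt])
            ts = np.linspace(all_rts.min(), all_rts.max(), n_points)
            bound = np.minimum(1.0, ecdf(rt_v, ts) + ecdf(rt_t, ts))
            return ts[ecdf(rt_vt, ts) > bound]

    Any time points returned indicate facilitation beyond what a race between independent unisensory channels can produce, i.e. evidence of coactivation.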

    Segregated audio–tactile events destabilize the bimanual coordination of distinct rhythms

    We examined to what extent the CNS can efficiently bind the perception of non-coincident multimodal events to coordinated movements. To do so, we selected a bimanual coordination task with left–right asymmetry, namely achieving 3:2 polyrhythmic movements. We asked participants to synchronize left- and right-finger movements to events presented to the left and to the right side, respectively. In two segregated conditions, sound was presented on one side at one frequency while touch was presented on the other side at the other frequency; thus, the left and right rhythms were each paced via a distinct sensory modality. In the three control conditions, the stimuli on both sides were presented via the same sensory modality: sound, touch, or coincident sound and touch. Our aim was to contrast two opposing hypotheses: sensory-segregated pacing (1) stabilizes polyrhythmic coordination because it favors the distinction between the fast and the slow rhythm, versus (2) destabilizes polyrhythmic coordination because it introduces a very strong asymmetry. We performed a parametric study in which the ability to maintain the polyrhythmic coordination was explored over a broad range of pacing rates. We found that switches from the polyrhythmic coordination to an isofrequency pattern took place only in the sensory-segregated conditions, at the highest frequencies. Moreover, transitions were preceded by an increase in the variability of the synchronization of movement to stimuli. We therefore propose that the destabilization originating from the asymmetry between sensory modalities overrides the assumed segregation effect. We discuss the possible neuronal underpinnings of this failure to bind movement to segregated sound and touch.
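
    For concreteness, the 3:2 pacing used here amounts to two event trains sharing a common cycle: within each cycle, one side receives three evenly spaced events while the other receives two. A minimal sketch generating such onset times; the 1.2 s cycle duration is an illustrative parameter, not a pacing rate taken from the study:

        # Generate onset times (in seconds) for a 3:2 polyrhythm: the fast side
        # gets 3 events per cycle, the slow side 2. The cycle length is illustrative.
        def polyrhythm_onsets(cycle_s=1.2, n_cycles=10):
            fast = [c * cycle_s + i * cycle_s / 3 for c in range(n_cycles) for i in range(3)]
            slow = [c * cycle_s + i * cycle_s / 2 for c in range(n_cycles) for i in range(2)]
            return fast, slow

        fast_side, slow_side = polyrhythm_onsets()
        # e.g. pace one finger with the fast train and the other with the slow one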