
    Feeling the Beat: Bouncing Synchronization to Vibrotactile Music in Hearing and Early Deaf People

    The ability to dance relies on the ability to synchronize movements to a perceived musical beat. Typically, beat synchronization is studied with auditory stimuli. However, in many typical social dancing situations, music can also be perceived as vibrations when objects that generate sounds also generate vibrations. This vibrotactile musical perception is of particular relevance for deaf people, who rely on non-auditory sensory information for dancing. In the present study, we investigated beat synchronization to vibrotactile electronic dance music in hearing and deaf people. We tested seven deaf and 14 hearing individuals on their ability to bounce in time with the tempo of vibrotactile stimuli (no sound) delivered through a vibrating platform. The corresponding auditory stimuli (no vibrations) were used in an additional condition in the hearing group. We collected movement data using a camera-based motion capture system and subjected it to a phase-locking analysis to assess synchronization quality. The vast majority of participants were able to precisely time their bounces to the vibrations, with no difference in performance between the two groups. In addition, we found higher performance for the auditory condition compared to the vibrotactile condition in the hearing group. Our results thus show that accurate tactile-motor synchronization in a dance-like context occurs regardless of auditory experience, though auditory-motor synchronization is of superior quality.
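    The abstract does not specify the phase-locking metric; one common choice is the mean resultant vector length of bounce phases relative to the stimulus beat. A minimal sketch under that assumption (the function name, the bounce onset times, and the example values are illustrative, not the authors' pipeline):

```python
import numpy as np

def phase_locking(bounce_times, beat_period):
    """Mean resultant vector length of bounce phases relative to the beat.

    bounce_times : bounce onset times in seconds (assumed extracted from motion capture)
    beat_period  : inter-beat interval of the stimulus in seconds
    Returns R in [0, 1]; 1 = perfect phase locking, 0 = no consistent phase.
    """
    # Map each bounce onset to a phase angle within the beat cycle.
    phases = 2 * np.pi * (np.asarray(bounce_times) % beat_period) / beat_period
    # Circular concentration of those phases (length of the mean unit vector).
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Toy example: bounces at a 120-BPM beat (0.5 s period) with small timing jitter.
rng = np.random.default_rng(0)
bounces = np.arange(0, 30, 0.5) + rng.normal(0, 0.02, 60)
print(phase_locking(bounces, 0.5))  # close to 1 for tight synchronization
```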

    Spatial processing in the auditory cortex for stream segregation and localization


    Activity in human auditory cortex represents spatial separation between concurrent sounds

    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT: Often, when we think of auditory spatial information, we think of where sounds are coming from; that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
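    The abstract names a linear support vector machine classifier over activity in Heschl's gyrus and the planum temporale but not the implementation; a minimal decoding sketch, assuming trial-wise ROI patterns and leave-one-run-out cross-validation (the scikit-learn pipeline, variable names, and toy data are illustrative, not the study's code):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def decode_roi(X, y, runs):
    """Linear SVM decoding of condition labels from ROI activity patterns.

    X    : trials x voxels activity patterns from one ROI (e.g., planum temporale)
    y    : condition label per trial (e.g., spatial-separation condition)
    runs : scanner run per trial, used for leave-one-run-out cross-validation
    Returns mean cross-validated classification accuracy.
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=runs)
    return scores.mean()

# Toy example with random data: accuracy should sit near chance (~0.5).
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 200))        # 80 trials x 200 voxels
y = np.tile([0, 1], 40)               # two conditions
runs = np.repeat(np.arange(8), 10)    # 8 runs of 10 trials each
print(decode_roi(X, y, runs))
```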

    The Right Hemisphere Planum Temporale Supports Enhanced Visual Motion Detection Ability in Deaf People: Evidence from Cortical Thickness

    After sensory loss, the deprived cortex can reorganize to process information from the remaining modalities, a phenomenon known as cross-modal reorganization. In blind people this cross-modal processing supports compensatory behavioural enhancements in the nondeprived modalities. Deaf people also show some compensatory visual enhancements, but a direct relationship between these abilities and cross-modally reorganized auditory cortex has only been established in an animal model, the congenitally deaf cat, and not in humans. Using T1-weighted magnetic resonance imaging, we measured cortical thickness in the planum temporale, Heschl’s gyrus and sulcus, the middle temporal area MT+, and the calcarine sulcus in early-deaf persons. We tested for a correlation between this measure and visual motion detection thresholds, a visual function for which deaf people show enhancements compared to hearing people. We found that the cortical thickness of a region in the right hemisphere planum temporale, typically an auditory region, was greater in deaf individuals with better visual motion detection thresholds. This same region has previously been implicated in functional imaging studies as important for functional reorganization. The structure-behaviour correlation observed here demonstrates this area’s involvement in compensatory vision and indicates an anatomical correlate, increased cortical thickness, of cross-modal plasticity.
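    The structure-behaviour relationship described is a correlation between regional cortical thickness and visual motion detection thresholds; since lower thresholds mean better detection, the reported effect corresponds to a negative correlation. A minimal sketch, assuming a Pearson correlation (the abstract does not state the test) and using made-up values, not the study's data:

```python
import numpy as np
from scipy import stats

def structure_behaviour_correlation(thickness, thresholds):
    """Correlate ROI cortical thickness (mm) with motion detection thresholds.

    Lower thresholds indicate better detection, so the effect described in the
    abstract would appear as a negative correlation coefficient.
    """
    r, p = stats.pearsonr(thickness, thresholds)
    return r, p

# Illustrative (made-up) values only.
thickness = np.array([2.9, 3.1, 3.3, 3.0, 3.4, 3.2, 3.5])     # right PT thickness
thresholds = np.array([0.24, 0.21, 0.16, 0.22, 0.14, 0.18, 0.13])
print(structure_behaviour_correlation(thickness, thresholds))   # negative r expected
```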

    Acuity of spatial stream segregation along the horizontal azimuth with non-individualized head-related transfer functions

    Auditory spatial cues help the nervous system segregate features of a soundwave into distinct streams. To study this process, we must account for changes in auditory spatial acuity along the horizontal azimuth, and some evidence suggests that this relationship differs for concurrent versus consecutive sounds. Here, we developed a paradigm to measure the change in spatial stream segregation along the horizontal azimuth and validate the effectiveness of non-individualized head-related transfer functions, the most easily accessed form of auditory spatialization, in this procedure. We tested 18 normal-hearing adults using anthropometrically matched non-individualized head-related transfer functions. We used a spatial stream segregation task in which participants identified the rhythm of a target stream presented with a spatially separated masker. Spatial separation varied according to an adaptive staircase procedure, and thresholds were calculated both near the midline and in the far left periphery. This work will assist neuroscientists and others in the design of stimuli for eliciting brain activity related to spatial stream segregation.
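    The abstract specifies an adaptive staircase procedure but not its rule; a minimal sketch of one common variant, a 2-down-1-up staircase over spatial separation in degrees, where `respond` stands in for a single trial of the rhythm-identification task and every parameter value is an assumption:

```python
import numpy as np

def run_staircase(respond, start_sep=30.0, step=4.0, min_step=1.0, n_reversals=8):
    """2-down-1-up adaptive staircase over spatial separation (degrees).

    respond(sep) -> True if the target rhythm was identified correctly at
    separation `sep`. Converges near the 70.7%-correct point; the threshold
    is taken as the mean separation over the final reversals.
    """
    sep, correct_in_a_row, direction, reversals = start_sep, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(sep):
            correct_in_a_row += 1
            if correct_in_a_row == 2:          # two correct -> make it harder (smaller separation)
                correct_in_a_row = 0
                if direction == +1:            # direction change -> record a reversal
                    reversals.append(sep)
                    step = max(step / 2, min_step)
                direction = -1
                sep = max(sep - step, 0.0)
        else:                                  # one error -> make it easier (larger separation)
            correct_in_a_row = 0
            if direction == -1:
                reversals.append(sep)
                step = max(step / 2, min_step)
            direction = +1
            sep += step
    return float(np.mean(reversals[-6:]))

# Toy observer: succeeds whenever the separation exceeds a "true" 8-degree threshold.
print(run_staircase(lambda s: s > 8.0))  # estimate lands close to 8
```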

    Visual motion detection thresholds of the hearing and deaf groups.

    Group averages are shown with horizontal bars. Deaf people showed significantly lower motion detection thresholds than hearing people. The effect remained statistically significant when two possible outliers in the upper range of the hearing group were excluded.
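    The legend does not state which test was used; a minimal sketch of the group comparison and the described robustness check, assuming a nonparametric two-sample test and made-up threshold values:

```python
import numpy as np
from scipy import stats

def compare_groups(deaf, hearing):
    """Test whether deaf thresholds are lower (better) than hearing thresholds."""
    return stats.mannwhitneyu(deaf, hearing, alternative="less")

# Illustrative (made-up) threshold values only.
deaf = np.array([0.12, 0.15, 0.14, 0.13, 0.16, 0.12, 0.15])
hearing = np.array([0.18, 0.20, 0.17, 0.22, 0.19, 0.35, 0.33,
                    0.21, 0.18, 0.20, 0.19, 0.22, 0.18, 0.21])
print(compare_groups(deaf, hearing))
# Robustness check: drop the two largest hearing values and re-test.
print(compare_groups(deaf, np.sort(hearing)[:-2]))
```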