    Compression of Auditory Space during Forward Self-Motion

    Background: Spatial inputs from the auditory periphery change with movements of the head or whole body relative to a sound source. Nevertheless, humans perceive a stable auditory environment and react appropriately to sound sources, suggesting that these inputs are reinterpreted in the brain and integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Methodology/Principal Findings: Participants were passively transported forward or backward at constant accelerations in a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during self-motion from one of the loudspeakers at the moment the listener’s physical coronal plane reached the location of one of the speakers (the null point). In Experiments 1 and 2, participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with acceleration. Experiment 3 investigated the structure of auditory space along the traveling direction during forward self-motion: sounds were presented at various distances from the null point, and participants indicated the perceived sound location by pointing with a rod. All sounds actually located in the traveling direction were perceived as biased towards the null point. Conclusions/Significance: These results suggest a distortion of auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory shifts of auditory receptive field locations driven by afferent signals from the vestibular system.
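
    A minimal sketch of how the subjective coronal plane could be estimated from the forward/backward judgments in Experiments 1 and 2: fit a cumulative Gaussian psychometric function to the proportion of "forward" responses and read off its midpoint, the point of subjective equality (PSE). The speaker offsets and response proportions below are invented for illustration; the paper's actual fitting procedure is not specified here.

```python
# Hypothetical sketch: estimating the subjective coronal plane (PSE) from
# forward/backward judgments. All data values are fabricated placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Sound offsets relative to the physical coronal plane (cm; + = ahead).
offsets = np.array([-30, -20, -10, 0, 10, 20, 30], dtype=float)
# Proportion of "forward" responses at each offset (invented example data).
p_forward = np.array([0.02, 0.10, 0.25, 0.40, 0.70, 0.90, 0.98])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: P("forward") as a function of sound offset."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, offsets, p_forward, p0=(0.0, 10.0))
# A positive PSE means the subjectively aligned sound lies ahead of the
# null point, i.e., auditory space is displaced in the travel direction.
print(f"PSE = {pse:.1f} cm, slope sigma = {sigma:.1f} cm")
```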

    Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal-processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of five 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with each other and with the ERPs to the unisensory visual control stimuli, separately for when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies, an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
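
    For illustration, a minimal sketch of the additive-model logic often used to isolate unisensory components from multisensory ERPs (the study's exact signal-processing pipeline is not detailed in the abstract): assuming ERP(AV) ≈ ERP(A) + ERP(V), the visual contribution can be estimated as ERP(AV) − ERP(A) within each SOA subrange. Array shapes follow the text; the data are random placeholders.

```python
# Sketch of extracting the visual component of multisensory ERPs under the
# additive assumption ERP(AV) = ERP(A) + ERP(V), per SOA subrange.
import numpy as np

rng = np.random.default_rng(0)
n_soa_bins, n_channels, n_samples = 5, 64, 500   # five 50-ms SOA subranges

erp_av = rng.normal(size=(n_soa_bins, n_channels, n_samples))  # AV trials
erp_a = rng.normal(size=(n_soa_bins, n_channels, n_samples))   # A-only trials

# Estimated visual ERP for each SOA subrange.
erp_v_extracted = erp_av - erp_a

# These extracted waveforms can then be compared across SOA bins and with
# the unisensory visual control ERPs, separately per attention condition.
print(erp_v_extracted.shape)  # (5, 64, 500)
```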

    Auditory spatial representations of the world are compressed in blind humans

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those of normally sighted controls. Blind participants overestimated the distance to nearby sound sources and underestimated the distance to remote ones, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information about the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
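
    A hedged sketch of the compressive power-function fit described above, judged = k * actual**a, where an exponent a < 1 captures the reported compression (near sources overestimated, far sources underestimated). The distances and judgments below are fabricated for illustration.

```python
# Fitting a compressive power function to distance judgments. An exponent
# a < 1 means near distances are overestimated and far ones underestimated.
import numpy as np
from scipy.optimize import curve_fit

actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # virtual source distance (m)
judged = np.array([1.6, 2.5, 3.9, 6.0, 9.5])    # invented mean judgments (m)

def power_law(d, k, a):
    return k * d ** a

(k, a), _ = curve_fit(power_law, actual, judged, p0=(1.0, 1.0))
print(f"judged ~= {k:.2f} * actual**{a:.2f}")  # a < 1 => compressive mapping
```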

    Audio-Visual Speech Timing Sensitivity Is Enhanced in Cluttered Conditions

    Events encoded in separate sensory modalities, such as audition and vision, can seem synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech with that for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, the two events tend to group perceptually and, as a consequence, to seem synchronous. We revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.
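
    As an illustration of how timing precision might be quantified in such a task, the sketch below fits a Gaussian tuning curve to the proportion of "synchronous" judgments across audiovisual offsets; a narrower fitted width corresponds to greater precision. The procedure and all numbers are assumptions, not the study's actual analysis.

```python
# Illustrative precision estimate: fit a Gaussian to the rate of
# "synchronous" responses as a function of audiovisual offset (SOA).
import numpy as np
from scipy.optimize import curve_fit

soa_ms = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sync = np.array([0.05, 0.25, 0.70, 0.95, 0.75, 0.30, 0.08])  # invented

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

(amp, mu, sigma), _ = curve_fit(gaussian, soa_ms, p_sync, p0=(1.0, 0.0, 100.0))
# A smaller sigma indicates sharper timing sensitivity, as reported for
# cluttered (multi-stream) displays relative to isolated AV pairs.
print(f"peak at {mu:.0f} ms, width sigma = {sigma:.0f} ms")
```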

    Multilab EcoFAB study shows highly reproducible physiology and depletion of soil metabolites by a model grass

    There is a dynamic reciprocity between plants and their environment: soil physicochemical properties influence plant morphology and metabolism, and root morphology and exudates shape the environment surrounding roots. Here, we investigate the reproducibility of plant trait changes in response to three growth environments. We utilized fabricated ecosystem (EcoFAB) devices to grow the model grass Brachypodium distachyon in three distinct media across four laboratories: phosphate-sufficient and phosphate-deficient mineral media allowed assessment of the effects of phosphate starvation, and a complex, sterile soil extract represented a more natural environment with as-yet uncharacterized effects on plant growth and metabolism. Tissue weight and phosphate content, total root length, and root tissue and exudate metabolic profiles were consistent across laboratories and distinct between experimental treatments. Plants grown in soil extract were morphologically and metabolically distinct, with root hairs four times longer than under the other growth conditions. Further, the plants depleted half of the metabolites investigated from the soil extract. To interact with their environment, plants not only adapt their morphology and release complex metabolite mixtures, but also selectively deplete a range of soil-derived metabolites. The EcoFABs used here yielded high interlaboratory reproducibility, demonstrating their value for standardized investigations of plant traits.
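
    A hypothetical sketch of the depletion calculation: comparing metabolite abundances in soil extract incubated with plants against a plant-free control and flagging metabolites that fall below some threshold. The metabolite names, peak areas, and the 50% cutoff are placeholders, not values from the study.

```python
# Toy depletion calculation: fraction by which each metabolite's abundance
# drops in planted soil extract relative to a plant-free control.
import numpy as np

metabolites = ["trehalose", "valine", "uridine", "citrate"]  # placeholders
control = np.array([100.0, 80.0, 60.0, 40.0])    # peak area, no plant
with_plant = np.array([20.0, 70.0, 10.0, 35.0])  # peak area, with plant

depletion = 1.0 - with_plant / control
depleted = depletion >= 0.5   # e.g., call "depleted" at >= 50% reduction
for name, frac, flag in zip(metabolites, depletion, depleted):
    print(f"{name}: {frac:.0%} depleted{' *' if flag else ''}")
```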

    The Nature of Working Memory for Braille

    Blind individuals have been shown on multiple occasions to compensate for their loss of sight by developing exceptional abilities in their remaining senses. While most research has focused on perceptual abilities per se in the auditory and tactile modalities, recent work has also investigated higher-order processes involving memory and language functions. Here we examined tactile working memory for Braille in two groups of visually challenged individuals: completely blind subjects (CBS) and blind subjects with residual vision (BRV). In a first experiment, both groups were given a Braille tactile memory span task with and without articulatory suppression, while the BRV group and a sighted group also performed a visual version of the task. Under articulatory suppression, the Braille tactile working memory (BrWM) of CBS individuals was as efficient as sighted individuals' visual working memory in the same condition. Moreover, the results suggest that BrWM may be more robust in CBS than in BRV subjects, pointing to the potential role of visual experience in shaping tactile working memory. A second experiment, designed to assess the nature (spatial vs. verbal) of this working memory, was then carried out with two new CBS and BRV groups, who performed the Braille task concurrently with either a mental arithmetic task or a mental block-displacement task. Memory disruption was greatest during the concurrent mental displacement of blocks, indicating that the Braille tactile subsystem of working memory is likely spatial in nature in CBS. The results also point to the multimodal nature of working memory and show how experience can shape the development of its subcomponents.

    Efficient Visual Search from Synchronized Auditory Signals Requires Transient Audiovisual Events

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence of audiovisual integration in visual search, even when using synchronized audiovisual events. An important question is therefore what information is critical for observing audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
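
    To make the stimulus manipulation concrete, here is a small sketch generating sinusoidal (gradual) versus square-wave (abrupt) modulation envelopes; the same envelope can drive the visual target's luminance and the paired auditory amplitude so the two stay synchronized. The modulation rate and duration are illustrative, not the study's parameters.

```python
# Sine vs. square modulation envelopes: only the square wave contains the
# abrupt transients that, per the findings, support efficient search.
import numpy as np

fs = 1000                       # envelope sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)   # 2-s stimulus (illustrative duration)
f_mod = 1.5                     # modulation rate (Hz, illustrative)

sine_env = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))    # gradual changes
square_env = (np.sin(2 * np.pi * f_mod * t) > 0) * 1.0  # abrupt on/off steps

# The same envelope modulates both the visual target's luminance and the
# amplitude of the paired auditory signal, keeping them synchronized.
print(sine_env[:5], square_env[:5])
```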

    Population genomics of Drosophila suzukii reveal longitudinal population structure and signals of migrations in and out of the continental United States

    Drosophila suzukii, or spotted-wing drosophila, is now an established pest in many parts of the world, causing significant damage to numerous fruit crop industries. Native to East Asia, D. suzukii first infested the United States (U.S.) a decade ago and now occupies a wide range of climates. To better understand the invasion ecology of this pest, knowledge of past migration events, population structure, and genetic diversity is needed. In this study, we sequenced whole genomes of 237 individual flies collected across the continental U.S., as well as at several sites in Europe, Brazil, and Asia, to identify and analyze hundreds of thousands of genetic markers. We observed strong population structure between Western and Eastern U.S. populations, but no evidence of population structure between different latitudes within the continental U.S., suggesting that no broad-scale adaptation to differences in winter climate is occurring. We detected admixture from Hawaii into the Western U.S. and from the Eastern U.S. into Europe, in agreement with previously identified introduction routes inferred from microsatellite analysis. We also detected potential signals of admixture from the Western U.S. back to Asia, which could have important implications for shipping and quarantine policies for exported agriculture. We anticipate that this large genomic dataset will spur future research into the genomic adaptations underlying D. suzukii pest activity and the development of novel control methods for this agricultural pest.
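
    As an illustrative sketch of one standard way to visualize population structure from such data, the code below runs a PCA on a samples-by-markers genotype matrix; the random genotypes stand in for the study's real variant calls, and the paper's actual pipeline is not specified here.

```python
# PCA on a genotype matrix (samples x SNPs, coded 0/1/2 alternate-allele
# counts) to reveal population structure. Genotypes here are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n_flies, n_snps = 237, 10_000
genotypes = rng.integers(0, 3, size=(n_flies, n_snps)).astype(float)

# Center each SNP before PCA, the usual convention for genotype data.
genotypes -= genotypes.mean(axis=0)
pcs = PCA(n_components=2).fit_transform(genotypes)

# Plotting PC1 vs. PC2 colored by sampling region would expose clusters
# such as the Western/Eastern U.S. split reported in the study.
print(pcs.shape)  # (237, 2)
```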

    Effect of Audiovisual Training on Monaural Spatial Hearing in Horizontal Plane

    This article tests the hypothesis that audiovisual integration can improve spatial hearing in monaural conditions, when interaural difference cues are not available. We trained one group of subjects on an audiovisual task, in which a flash was presented in parallel with the sound, and another group on an auditory task, in which only sounds from different spatial locations were presented. To check whether the observed audiovisual effect was simply a form of feedback, a third group was trained using a visual-feedback paradigm. Training sessions were administered once per day for 5 days. Performance in each group was compared for auditory-only stimulation on the first and last days of practice. Improvement after audiovisual training was several times greater than after auditory-only practice. The group trained with visual feedback showed a different training effect, with smaller improvement than the audiovisual group. We conclude that cross-modal facilitation is highly important for improving spatial hearing in monaural conditions and may be applied to the rehabilitation of patients with unilateral deafness and after unilateral cochlear implantation.
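
    A minimal sketch, with invented numbers, of the performance comparison described above: mean absolute localization error on auditory-only trials on the first versus last training day, with improvement expressed as the reduction in error.

```python
# Toy improvement measure: reduction in mean absolute localization error
# between the first and last training days. All values are fabricated.
import numpy as np

day1_errors = np.array([38.0, 45.0, 30.0, 50.0, 41.0])  # degrees, day 1
day5_errors = np.array([15.0, 18.0, 12.0, 20.0, 16.0])  # degrees, day 5

improvement = day1_errors.mean() - day5_errors.mean()
relative = improvement / day1_errors.mean()
print(f"error reduced by {improvement:.1f} deg ({relative:.0%})")
```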