
    Roaring lions and chirruping lemurs: How the brain encodes sound objects in space.

    The dual-stream model of auditory processing postulates separate processing streams for sound meaning and for sound location. The present review draws on evidence from human behavioral and activation studies, as well as from lesion studies, to argue for a position-linked representation of sound objects that is distinct both from the position-independent representation within the ventral/What stream and from the explicit sound localization processing within the dorsal/Where stream.

    Optical Imaging of Interaural Time Difference Representation in Rat Auditory Cortex

    We used in vivo voltage-sensitive dye optical imaging to examine the cortical representation of interaural time difference (ITD), which is believed to be involved in sound source localization. We found that acoustic stimuli with dissimilar ITDs activate distinct localized domains in the auditory cortex. The main loci of the activation pattern shift by up to 1 mm during the first 40 ms of the response period. We propose that some of the neurons in each pool are sensitive to a specific ITD and are involved in conveying information about sound source location based on that ITD. This assumption fits well with the Jeffress model, in which a neural network computes the ITD to determine the direction of the sound source. Such computation forms the basis of the cortex's ability to detect the azimuth of a sound source.
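    The Jeffress-style read-out mentioned above can be sketched as a bank of coincidence detectors, one per candidate internal delay; the detector whose delay compensates the ITD fires most strongly. This is a minimal illustrative sketch, not the authors' model; the function name and the toy noise stimulus are assumptions.

    ```python
    import random

    def jeffress_itd_estimate(left, right, max_lag):
        """Scan candidate internal delays (a discrete Jeffress delay line);
        the lag with the highest coincidence score between the two ear
        signals is the estimated ITD, in samples."""
        best_lag, best_score = 0, float("-inf")
        for lag in range(-max_lag, max_lag + 1):
            score = 0.0
            for i, sample in enumerate(left):
                j = i + lag
                if 0 <= j < len(right):
                    score += sample * right[j]  # coincidence of delayed inputs
            if score > best_score:
                best_lag, best_score = lag, score
        return best_lag

    # Toy stimulus: broadband noise arriving 3 samples later at the right ear.
    random.seed(0)
    sig = [random.uniform(-1.0, 1.0) for _ in range(200)]
    true_itd = 3
    left = sig
    right = [0.0] * true_itd + sig[:-true_itd]
    estimated = jeffress_itd_estimate(left, right, max_lag=10)  # recovers 3
    ```

    Broadband noise is used deliberately: a pure tone would make the cross-correlation peak ambiguous across periods, whereas noise gives a single unambiguous coincidence maximum.
    
    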

    Inhibiting the inhibition

    The precedence effect describes the phenomenon whereby echoes are spatially fused to the location of an initial sound by selective suppression of the directional information of lagging sounds (echo suppression). Echo suppression is a prerequisite for faithful sound localization in natural environments but can break down depending on the behavioral context. To date, the neural mechanisms that suppress echo directional information without suppressing the perception of echoes themselves are not understood. We performed in vivo recordings from neurons of the dorsal nucleus of the lateral lemniscus (DNLL), a GABAergic brainstem nucleus that targets the auditory midbrain, in Mongolian gerbils, and show that these DNLL neurons exhibit inhibition that persists tens of milliseconds beyond stimulus offset, so-called persistent inhibition (PI). Using in vitro recordings, we demonstrate that PI stems from GABAergic projections from the opposite DNLL and is attributable to intrinsic features of this GABAergic innervation. Implementation of these physiological findings in a neuronal model of the auditory brainstem demonstrates that, on a circuit level, PI enhances the responsiveness of auditory midbrain cells to lagging sounds. Moreover, the model revealed that such response enhancement is a sufficient cue for an ideal observer to identify echoes and to exhibit echo suppression, which agrees closely with the percepts of human subjects.
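    The circuit logic described above can be caricatured in a few lines: a lead sound puts the DNLL under persistent inhibition from its opposite counterpart, so a lag arriving within the PI window reaches the midbrain without DNLL inhibition and evokes an enhanced response, which an ideal observer can read out as "this was an echo". This is a toy sketch, not the authors' biophysical model; the gain values, the 20 ms PI window, and the decision threshold are all assumptions.

    ```python
    def lag_enhancement(delay_ms, pi_duration_ms=20.0):
        """If the lag arrives while the DNLL is still under persistent
        inhibition, its inhibitory drive to the midbrain is absent and
        the midbrain lag response is enhanced relative to baseline."""
        dnll_silenced = delay_ms <= pi_duration_ms
        baseline = 1.0   # midbrain response with intact DNLL inhibition
        enhanced = 1.5   # response with DNLL inhibition removed (assumed gain)
        return enhanced if dnll_silenced else baseline

    def is_echo(delay_ms, threshold=1.2, pi_duration_ms=20.0):
        """Ideal-observer read-out: flag the lagging sound as an echo
        when the midbrain lag response exceeds a criterion."""
        return lag_enhancement(delay_ms, pi_duration_ms) > threshold
    ```

    With these assumed numbers, a lag at 10 ms is flagged as an echo while a lag at 40 ms, outside the PI window, is treated as an independent sound.
    
    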

    Spatial processing is frequency-specific in auditory cortex but not in the midbrain

    The cochlea behaves like a bank of band-pass filters, segregating information into different frequency channels. Some aspects of perception reflect processing within individual channels, but others involve the integration of information across them. One instance of this is sound localization, which improves with increasing bandwidth. The processing of binaural cues for sound location has been extensively studied. However, while the advantage conferred by bandwidth is clear, we currently know little about how this additional information is combined to form our percept of space. We investigated the ability of cells in the auditory system of guinea pigs to compare interaural level differences (ILDs), a key localization cue, between tones of disparate frequencies in each ear. Cells in auditory cortex believed to be integral to ILD processing (excitatory from one ear, inhibitory from the other: EI cells) separately compare ILDs over restricted frequency ranges, inconsistent with their monaural tuning. In contrast, cortical EE cells (excitatory from both ears) showed no evidence of frequency-specific processing. Both cell types are explained by a model in which ILDs are computed within separate frequency channels and subsequently combined in a single cortical cell. Interestingly, ILD processing in all inferior colliculus cell types (EE and EI) is largely consistent with processing within single matched frequency channels from each ear. Our data suggest a clear constraint on the way that localization cues are integrated: cortical ILD tuning to broadband sounds is a composite of separate frequency-specific binaurally sensitive channels. This frequency-specific processing appears after the midbrain.
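    The proposed model can be sketched numerically: ILDs are computed within matched frequency channels, and a cortical cell then sums the per-channel excitatory-inhibitory comparisons rather than comparing broadband levels. This is a minimal sketch of that idea, not the authors' fitted model; the sigmoidal nonlinearity and its slope are assumptions.

    ```python
    import math

    def ilds_per_channel(left_levels_db, right_levels_db):
        """ILD computed separately within each matched frequency channel
        (left minus right level, in dB)."""
        return [l - r for l, r in zip(left_levels_db, right_levels_db)]

    def cortical_ei_response(left_levels_db, right_levels_db, slope=0.5):
        """Model EI cell: each channel's ILD drives a sigmoidal
        excitatory-inhibitory comparison, and the cortical cell sums the
        per-channel outputs (a composite of frequency-specific channels)."""
        def sigmoid(x):
            return 1.0 / (1.0 + math.exp(-slope * x))
        return sum(sigmoid(ild)
                   for ild in ilds_per_channel(left_levels_db, right_levels_db))
    ```

    For example, two channels with levels (70, 60) dB on the left and (50, 65) dB on the right yield per-channel ILDs of +20 and -5 dB, which the model cell combines only after each channel's own binaural comparison.
    
    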

    Estimasi Arah Sumber Suara Berbasis Gaussian Mixture Model (Sound Source Direction Estimation Based on a Gaussian Mixture Model)

    Estimating the direction of a sound source is an important topic for robotics, sensor systems, and security applications. Variations in experimental conditions determine the accuracy of such estimates. In this study, variations in temperature and reverberation time were analyzed for their effect on the accuracy of sound source direction estimation. Binaural signals were used, with white noise and human-speech-like (HSL) noise as interfering signals, over a range of azimuth angles. Estimation was performed with a Gaussian Mixture Model (GMM) for both horizontal-plane and combined horizontal-vertical-plane configurations. The experiments show that azimuth angles close to the listener deliver the sound signal faster than distant angles, and that long-duration signals (2000 ms) yield higher estimation accuracy than shorter durations (100, 500, and 1000 ms). In addition, estimation accuracy is higher for sounds corrupted by white noise than by HSL noise, and performance is higher for the horizontal-vertical planes than for the horizontal plane alone. The estimation reaches 98.6% accuracy for the horizontal plane and 100% for the horizontal-vertical planes.
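    The GMM classification scheme above amounts to fitting one model per azimuth class over binaural features and picking the class with the highest likelihood for a test feature. The sketch below simplifies each class's GMM to a single Gaussian component over a scalar ITD feature; the feature values and azimuth labels are made up for illustration, and a full GMM would mix several such components per class.

    ```python
    import math

    def gaussian_logpdf(x, mu, var):
        """Log density of a 1-D Gaussian."""
        return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

    def fit_class_models(features_by_azimuth):
        """Fit one Gaussian per azimuth class over a scalar binaural
        feature (e.g. ITD in ms); a stand-in for a per-class GMM."""
        models = {}
        for az, feats in features_by_azimuth.items():
            mu = sum(feats) / len(feats)
            var = sum((f - mu) ** 2 for f in feats) / len(feats) or 1e-6
            models[az] = (mu, var)
        return models

    def estimate_azimuth(models, feature):
        """Pick the azimuth class whose model gives the highest likelihood."""
        return max(models, key=lambda az: gaussian_logpdf(feature, *models[az]))

    # Hypothetical training ITDs (ms) for three azimuths:
    training = {-30: [-0.31, -0.29, -0.30],
                  0: [0.01, -0.01, 0.00],
                 30: [0.29, 0.31, 0.30]}
    models = fit_class_models(training)
    ```

    A test feature near one cluster's mean is then assigned to that azimuth, e.g. `estimate_azimuth(models, 0.29)` picks 30 degrees.
    
    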

    Attention Modulates ERP Indices of the Precedence Effect

    When presented with two identical sounds from different locations separated by a short onset asynchrony, listeners report hearing a single source at the location of the lead sound, a phenomenon called the precedence effect (Wallach et al., 1949; Haas, 1951). When the onset asynchrony is above echo threshold, listeners report hearing the lead and lag sounds as separate sources with distinct locations. Event-related potential (ERP) studies have shown that perception of separate sound sources is accompanied by an object-related negativity (ORN) 100-250 ms after onset and a late posterior positivity (LP) 300-500 ms after onset (Sanders et al., 2008; Sanders et al., 2011). The current study tested whether these ERP effects are modulated by attention. Clicks were presented in lead/lag pairs at and around listeners' echo thresholds while, in separate blocks, listeners 1) attended to the sounds and reported whether they heard the lag sound as a separate source, or 2) performed a difficult 2-back visual task. Replicating previous results, when attention was directed to the sounds, an ORN and LP were observed for click pairs 1 ms above compared to 1 ms below echo threshold. In contrast, when attention was directed away from the sounds to the visual task, neither the ORN nor the LP was evident. Instead, click pairs 1 ms above echo threshold elicited an anterior positivity 250-450 ms after onset. In addition, an effect resembling an ORN was found when comparing ERPs elicited by unattended click pairs with SOAs below the attended echo threshold. These results indicate that attention modulates early perceptual processes in the precedence effect and may be critical for auditory object formation under these conditions.
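    The ERP comparison described above is, analytically, a difference wave between condition averages, with component amplitude measured as the mean over a latency window (100-250 ms for the ORN). A minimal sketch of that analysis step, with made-up toy waveforms and sampling rate:

    ```python
    def difference_wave(cond_a, cond_b):
        """Point-by-point difference of two condition-average ERPs; an
        object-related negativity (ORN) appears as a negative deflection
        in the difference wave 100-250 ms after sound onset."""
        return [a - b for a, b in zip(cond_a, cond_b)]

    def mean_amplitude(erp, fs, t_start, t_end):
        """Mean amplitude over a latency window (seconds) for a waveform
        sampled at fs Hz starting at stimulus onset."""
        i0, i1 = int(t_start * fs), int(t_end * fs)
        window = erp[i0:i1]
        return sum(window) / len(window)

    # Toy condition averages at fs = 100 Hz (one sample per 10 ms), 0-500 ms:
    fs = 100
    above = [0.0] * 10 + [-2.0] * 15 + [0.0] * 25   # negativity 100-250 ms
    below = [0.0] * 50
    orn = mean_amplitude(difference_wave(above, below), fs, 0.100, 0.250)
    ```

    A negative `orn` value in the above-minus-below difference would correspond to the ORN reported when listeners attend to the sounds.
    
    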