
    Effects of audibility and multichannel wide dynamic range compression on consonant recognition for listeners with severe hearing loss

    Objective—This study examined the effects of multichannel wide dynamic range compression (WDRC) amplification and stimulus audibility on consonant recognition and error patterns.
    Design—Listeners had either severe or mild-to-moderate sensorineural hearing loss. Each listener was monaurally fit with a wearable hearing aid using typical clinical procedures, frequency-gain parameters, and a hybrid of clinically prescribed compression ratios for DSL (Scollie et al., 2005) and NAL-NL (Dillon, 1999). Consonant-vowel nonsense syllables were presented in soundfield at multiple input levels (50, 65, and 80 dB SPL). Test conditions were four-channel fast-acting WDRC amplification and a control compression limiting (CL) amplification condition. Listeners identified the stimulus heard from choices presented on an on-screen display. A between-subjects repeated-measures design was used to evaluate consonant recognition and consonant confusion patterns.
    Results—Fast-acting WDRC provided a considerable audibility advantage at 50 dB SPL, especially for listeners with severe hearing loss. Listeners with mild-to-moderate hearing loss received less audibility improvement from fast-acting WDRC amplification for conversational and high-level speech than did listeners with severe hearing loss. Analysis of WDRC benefit scores revealed that listeners had slightly lower scores with fast-acting WDRC amplification (relative to CL) when WDRC provided minimal improvement in audibility. This negative effect was greater for listeners with mild-to-moderate hearing loss than for their counterparts with severe hearing loss.
    Conclusions—All listeners, but particularly the severe-loss group, benefited from fast-acting WDRC amplification for low-level speech. For conversational and higher speech levels (i.e., when WDRC does not confer a significant audibility advantage), fast-acting WDRC amplification appears to slightly degrade performance. Listeners’ consonant confusion patterns suggest that this negative effect may be partly due to fast-acting WDRC-induced distortions that alter specific consonant features. In support of this view, audibility accounted for a greater percentage of the variance in listeners’ performance with CL amplification than with fast-acting WDRC amplification.
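    The level dependence described above can be sketched as a static WDRC input-output rule: full gain for soft speech, progressively less gain above a compression threshold. The gain, threshold, and ratio values below are illustrative placeholders, not the DSL/NAL-prescribed parameters used in the study:

```python
import numpy as np

def wdrc_gain_db(level_db, gain_db=20.0, threshold_db=45.0, ratio=3.0):
    """Static WDRC gain rule (sketch): linear gain below the compression
    threshold; above it, each dB of input yields only 1/ratio dB of output
    growth. All parameter values are hypothetical."""
    level_db = np.asarray(level_db, dtype=float)
    excess = np.maximum(level_db - threshold_db, 0.0)
    return gain_db - excess * (1.0 - 1.0 / ratio)

# Soft speech receives full gain; loud speech receives progressively less,
# compressing a wide input range into a narrower output range.
for spl in (50, 65, 80):
    print(f"{spl} dB SPL in -> {spl + float(wdrc_gain_db(spl)):.1f} dB SPL out")
```

    With these illustrative settings, the 30 dB span of test levels (50 to 80 dB SPL) maps onto roughly a 10 dB output span, which is why the audibility advantage of WDRC is concentrated at the 50 dB SPL inputs.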

    Evidence for cue-independent spatial representation in the human auditory cortex during active listening

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad, contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks in which targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to the lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multi-voxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
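    The logic of cross-cue classification can be illustrated with a toy multi-voxel analysis on synthetic data in which a shared spatial component underlies responses to both cues; every name and number here is a hypothetical sketch, not the study's actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40

# Hypothetical shared spatial patterns, one per lateralization. A
# cue-independent representation means both ITD and ILD trials express
# these same patterns (plus trial-to-trial noise).
spatial = {"left": rng.normal(size=n_voxels), "right": rng.normal(size=n_voxels)}

def simulate_trials(side):
    return spatial[side] + rng.normal(scale=1.0, size=(n_trials, n_voxels))

itd_trials = {side: simulate_trials(side) for side in ("left", "right")}  # training cue
ild_trials = {side: simulate_trials(side) for side in ("left", "right")}  # test cue

# Nearest-centroid cross-cue classification: centroids estimated from ITD
# trials, evaluated on ILD trials. Above-chance accuracy is only possible
# if the two cues share a common spatial representation.
centroids = {side: itd_trials[side].mean(axis=0) for side in itd_trials}

def classify(pattern):
    return min(centroids, key=lambda side: np.linalg.norm(pattern - centroids[side]))

correct = sum(classify(x) == side for side in ild_trials for x in ild_trials[side])
total = sum(len(v) for v in ild_trials.values())
print(f"cross-cue classification accuracy: {correct / total:.2f}")
```

    If the simulated cues carried no shared component, accuracy would hover near chance (0.5); generalization across cues is what licenses the inference of an integrated spatial code.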

    Location Coding by Opponent Neural Populations in the Auditory Cortex

    Although the auditory cortex plays a necessary role in sound localization, physiological investigations in the cortex reveal inhomogeneous sampling of auditory space that is difficult to reconcile with localization behavior under the assumption of local spatial coding. Most neurons respond maximally to sounds located far to the left or right side, with few neurons tuned to the frontal midline. Paradoxically, psychophysical studies show optimal spatial acuity across the frontal midline. In this paper, we revisit the problem of inhomogeneous spatial sampling in three fields of cat auditory cortex. In each field, we confirm that neural responses tend to be greatest for lateral positions but show the greatest modulation for near-midline source locations. Moreover, identification of source locations based on cortical responses shows sharp discrimination of left from right but relatively inaccurate discrimination of locations within each half of space. Motivated by these findings, we explore an opponent-process theory in which sound-source locations are represented by differences in the activity of two broadly tuned channels formed by contra- and ipsilaterally preferring neurons. Finally, we demonstrate a simple model, based on spike-count differences across cortical populations, that provides bias-free, level-invariant localization—and thus also a solution to the “binding problem” of associating spatial information with other nonspatial attributes of sounds.
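    The opponent two-channel readout can be sketched as follows; the sigmoid tuning curves and their parameters are illustrative assumptions, not fits to the reported cat data:

```python
import numpy as np

def channel_rate(azimuth_deg, preferred_sign, level_gain=1.0):
    """Broadly tuned hemifield channel (illustrative): the response rises as
    the source moves toward the channel's preferred side, scaled by a common
    level-dependent gain."""
    return level_gain / (1.0 + np.exp(-preferred_sign * azimuth_deg / 20.0))

def opponent_code(azimuth_deg, level_gain=1.0):
    right = channel_rate(azimuth_deg, +1.0, level_gain)
    left = channel_rate(azimuth_deg, -1.0, level_gain)
    # Normalized activity difference: the common level gain cancels, so the
    # code is level-invariant, and its slope is steepest at the midline,
    # consistent with fine frontal acuity despite lateral-preferring neurons.
    return (right - left) / (right + left)

for gain in (1.0, 2.0):  # doubling overall level leaves the code unchanged
    print(gain, [round(opponent_code(az, gain), 3) for az in (-60, 0, 60)])
```

    Each individual channel is broadly tuned and level-dependent; only the opponent comparison yields an unbiased, level-invariant location estimate.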

    Distributed coding of sound locations in the auditory cortex

    Although the auditory cortex plays an important role in sound localization, that role is not well understood. In this paper, we examine the nature of spatial representation within the auditory cortex, focusing on three questions. First, are sound-source locations encoded by individual sharply tuned neurons or by activity distributed across larger neuronal populations? Second, do temporal features of neural responses carry information about sound-source location? Third, are any fields of the auditory cortex specialized for spatial processing? We present a brief review of recent work relevant to these questions along with the results of our investigations of spatial sensitivity in cat auditory cortex. Together, they strongly suggest that space is represented in a distributed manner, that response timing (notably first-spike latency) is a critical information-bearing feature of cortical responses, and that neurons in various cortical fields differ in both their degree of spatial sensitivity and their manner of spatial coding. The posterior auditory field (PAF), in particular, is well suited for the distributed coding of space and encodes sound-source locations partly by modulations of response latency. Studies of neurons recorded simultaneously from PAF and/or A1 reveal that spatial information can be decoded from the relative spike times of pairs of neurons – particularly when responses are compared between the two fields – thus partially compensating for the absence of an absolute reference to stimulus onset.
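    The pairwise relative-timing idea can be sketched with two hypothetical neurons whose first-spike latencies shift together with an unknown stimulus onset; the latency functions below are invented for illustration, not measured PAF or A1 responses:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_spike_latencies(azimuth_deg, onset_ms):
    """Hypothetical pair: neuron A's latency varies with azimuth (PAF-like),
    neuron B's is roughly constant (A1-like). Both inherit the same unknown
    stimulus-onset time."""
    lat_a = onset_ms + 20.0 + 0.05 * azimuth_deg
    lat_b = onset_ms + 15.0
    return lat_a, lat_b

# The decoder never sees the onset time: the latency *difference* cancels it,
# leaving a quantity that depends only on source azimuth.
for az in (-60, 0, 60):
    onset = rng.uniform(0.0, 100.0)  # unknown to the decoder
    lat_a, lat_b = first_spike_latencies(az, onset)
    print(f"azimuth {az:+d} deg: relative latency = {lat_a - lat_b:.1f} ms")
```

    This is the sense in which comparing spike times across a pair of neurons substitutes for an absolute onset reference: any shared temporal offset drops out of the difference.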

    Functional Properties of Human Auditory Cortical Fields

    While auditory cortex in non-human primates has been subdivided into multiple functionally specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and non-attended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to non-attended sounds. Three centrally located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model, while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.

    FORUM: Remote testing for psychological and physiological acoustics

    Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments, at the cost of reduced control over environmental conditions, less precise calibration, and inconsistency in attentional state and/or response behaviors stemming from less-controlled settings and unintuitive experimental tasks. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with the goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online as a set of Wiki pages and are summarized in this report. The report outlines the state of the art of remote testing in auditory-related research as of August 2021, based on the Wiki and a literature search of papers published in this area since 2020, and provides three case studies to demonstrate feasibility in practice.