
    Individualized Frequency Importance Functions for Listeners with Sensorineural Hearing Loss

    The Speech Intelligibility Index includes a series of frequency importance functions for calculating the estimated intelligibility of speech under various conditions. Until recently, techniques to derive frequency importance required averaging data over a group of listeners, thus hindering the ability to observe individual differences due to factors such as hearing loss. In the current study, the “random combination strategy” [Bosen and Chatterjee (2016). J. Acoust. Soc. Am. 140, 3718–3727] was used to derive frequency importance functions for individual hearing-impaired listeners, and normal-hearing participants for comparison. Functions were measured by filtering sentences to contain only random subsets of frequency bands on each trial, and regressing speech recognition against the presence or absence of bands across trials. Results show that the contribution of each band to speech recognition was inversely proportional to audiometric threshold in that frequency region, likely due to reduced audibility, even though stimuli were shaped to compensate for each individual's hearing loss. The results presented in this paper demonstrate that this method is sensitive to factors that alter the shape of frequency importance functions within individuals with hearing loss, which could be used to characterize the impact of audibility or other factors related to suprathreshold deficits or hearing aid processing strategies.
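
    To make the band-importance analysis concrete, the sketch below illustrates the regression step the abstract describes: trial-level recognition outcomes are regressed against which frequency bands were present on each trial, and the normalized coefficients serve as importance estimates. The simulated data, band count, and choice of logistic regression are illustrative assumptions, not the authors' analysis code.

        # Minimal sketch of a "random combination strategy" style analysis (assumed details).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_trials, n_bands = 400, 5

        # Each row indicates which frequency bands were present (1) or filtered out (0) on a trial.
        band_present = rng.integers(0, 2, size=(n_trials, n_bands))

        # Simulated trial-level recognition outcome (1 = correct), weighted toward mid-frequency bands.
        true_importance = np.array([0.8, 1.2, 1.5, 0.9, 0.4])
        p_correct = 1 / (1 + np.exp(-(band_present @ true_importance - 2.0)))
        correct = rng.binomial(1, p_correct)

        # Regress recognition against band presence; normalized coefficients estimate
        # the relative importance of each band for this (simulated) listener.
        model = LogisticRegression().fit(band_present, correct)
        importance = model.coef_.ravel()
        importance = importance / importance.sum()
        print(dict(enumerate(np.round(importance, 2))))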

    FORUM: Remote testing for psychological and physiological acoustics

    Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments, at the cost of reduced control of environmental conditions, less precise calibration, and inconsistency in attentional state and/or response behaviors from relatively smaller sample sizes and unintuitive experimental tasks. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with the goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online in the form of a set of Wiki pages and summarized in this report. This report outlines the state of the art of remote testing in auditory-related research as of August 2021, based on the Wiki and a literature search of papers published in this area since 2020, and provides three case studies to demonstrate feasibility in practice.

    Example pre- and post-disparity auditory localization.

    Left panel: Localization responses from one subject before and after exposure to leftward (visual −8° azimuth relative to auditory) AV disparity. Blue and red circles represent localization response error as a function of target location in the pre-disparity and post-disparity blocks, respectively, and lines of the same color represent a linear fit to the data from each block. Right panel: Shift in the same subject from exposure to rightward (visual +8° azimuth relative to auditory) disparity.

    Multiple time scales of the ventriloquism aftereffect

    The ventriloquism aftereffect (VAE) refers to a shift in auditory spatial perception following exposure to a spatial disparity between auditory and visual stimuli. The VAE has previously been measured on two distinct time scales. Hundreds or thousands of exposures to an audio-visual spatial disparity produce an enduring VAE that persists after exposure ceases. Exposure to a single audio-visual spatial disparity produces an immediate VAE that decays over seconds. To determine whether these phenomena are two extremes of a continuum or represent distinct processes, we conducted an experiment with normal-hearing listeners that measured VAE in response to a repeated, constant audio-visual disparity sequence, both immediately after exposure to each audio-visual disparity and after the end of the sequence. In each experimental session, subjects were exposed to sequences of auditory and visual targets that were constantly offset by +8° or −8° in azimuth from one another, then localized auditory targets presented in isolation following each sequence. Eye position was controlled throughout the experiment to avoid the effects of gaze on auditory localization. In contrast to other studies that did not control eye position, we found both a large shift in auditory perception that decayed rapidly after each AV disparity exposure and a gradual shift in auditory perception that grew over time and persisted after exposure to the AV disparity ceased. We modeled the temporal and spatial properties of the measured auditory shifts using grey-box nonlinear system identification, and found that two models could explain the data equally well. In the power model, the temporal decay of the ventriloquism aftereffect was modeled with a power-law relationship, which produces an initial rapid drop in auditory shift followed by a long tail that accumulates with repeated exposure to audio-visual disparity. In the double exponential model, two separate processes were required to explain the data: one that accumulated and decayed exponentially, and another that slowly integrated over time. Both models fit the data best when the spatial spread of the ventriloquism aftereffect was limited to a window around the location of the audio-visual disparity. We directly compare the predictions made by each model and suggest additional measurements that could help distinguish which model best describes the mechanisms underlying the VAE.
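
    The sketch below contrasts the two model families the abstract describes: a power-law decay of the shift contributed by each disparity exposure versus a fast exponential process paired with a slow integrator. The parameter values, time step, and discrete-time update rules are illustrative assumptions, not the fitted state-space models from the paper.

        # Sketch of power-law vs. double-exponential accumulation of the VAE (assumed parameters).
        import numpy as np

        n_exposures, dt = 50, 5.0      # 50 repetitions of the AV disparity, 5 s apart
        disparity = 8.0                # degrees of audio-visual offset
        t = np.arange(1, n_exposures + 1) * dt

        # Power model: each exposure contributes a shift that decays as (elapsed time)^-alpha.
        alpha, gain_p = 0.7, 0.15
        def power_shift(query_time):
            elapsed = query_time - t[t < query_time]
            return gain_p * disparity * np.sum(elapsed ** -alpha)

        # Double exponential model: a fast state that accumulates and decays exponentially,
        # plus a slow state that integrates a small fraction of each disparity.
        tau_fast, gain_fast, gain_slow = 10.0, 0.2, 0.01
        fast, slow, shifts = 0.0, 0.0, []
        for _ in range(n_exposures):
            fast = fast * np.exp(-dt / tau_fast) + gain_fast * disparity
            slow = slow + gain_slow * disparity
            shifts.append(fast + slow)

        print("power model shift 1 s after last exposure:", round(power_shift(t[-1] + 1.0), 2))
        print("double exponential shift after last exposure:", round(shifts[-1], 2))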

    Model best fit parameters.

    Values indicate population medians and were used to produce the model simulations in Fig 5B.

    Evolution of auditory shift with repeated exposure to fixed AV disparity, averaged across subjects.

    Blue and red points represent average relative error for pre-disparity and post-disparity, respectively. Black and gray points represent average relative error while localizing auditory targets during the exposure block, with black indicating localization of auditory targets presented 1–3 seconds after and 0° away from the auditory component of the AV disparity (“same location” targets), and gray indicating localization of targets presented more than 8 seconds after and at a different location than the auditory component of the AV disparity (“different location” targets). Blue and red lines indicate mean average relative error in the pre- and post-disparity blocks (with auditory shift defined relative to the pre-disparity localization responses for each session), gray lines show linear best fits to responses in the exposure block, and shaded regions represent ±1 standard deviation across subjects for each sample of the corresponding color.

    Model of the ventriloquism aftereffect.

    A: Block diagram summarizing the state-space model, along with the double exponential and power models of the ventriloquism aftereffect. B: Example model responses to AV disparity trains. The top panels show the double exponential model, and the bottom panels show the power model. The left panels show how auditory shift builds and decays in response to a single AV disparity train (each dot represents a single repetition of the AV disparity). When auditory targets are presented following an AV disparity train, auditory shift influences the encoded location of the target (points and trend lines, colored to match Fig 3). Additionally, visual capture (not measured in the present experiment) occurs within the first AV disparity and can be held in memory indefinitely, as shown in [16]. The right panels show how both models can replicate the trends in auditory shift produced by repeated exposure to a fixed AV disparity observed in Fig 3. In both panels, median best-fit model parameters across participants were used to produce the trends shown, to allow for visual comparison of model predictions. The effect of changing spatial location is not shown here, to emphasize the changes in temporal relationship across the two models, so the difference in auditory shift across trial types appears smaller than in Fig 3.

    Model fit comparison.

    Median and sum differences in Akaike's corrected information criterion (AICc) between each model and the best-fitting model (power with spatial window) are reported, sorted by median difference.
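
    For readers unfamiliar with the comparison metric, the sketch below shows how AICc values and differences from the best-fitting model in a table like this are typically computed. The log-likelihoods, parameter counts, and sample size are made-up placeholders, not values from the study.

        # Sketch of an AICc model comparison (placeholder numbers, not fitted values).
        def aicc(log_likelihood, k, n):
            # Akaike's information criterion with small-sample correction.
            aic = 2 * k - 2 * log_likelihood
            return aic + (2 * k * (k + 1)) / (n - k - 1)

        n_samples = 200
        candidates = {
            "power with spatial window": (-310.0, 4),
            "double exponential with spatial window": (-311.5, 5),
            "power, no window": (-325.0, 3),
        }
        scores = {name: aicc(ll, k, n_samples) for name, (ll, k) in candidates.items()}
        best = min(scores.values())
        for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
            print(f"{name}: dAICc = {score - best:.1f}")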

    Pre- and post-disparity comparison of auditory bias and gain.

    Left panel: Difference in bias between the pre-disparity and post-disparity blocks. Data from leftward and rightward disparity sessions are represented with leftward- and rightward-pointing arrows, respectively. The two hollow symbols represent one subject who was an apparent outlier and was excluded from the line fit. Data from leftward disparity sessions were mirrored through the origin to overlay the rightward data. The dashed gray line represents where responses would fall if subjects completely compensated for the encoded disparity, the solid gray line indicates where responses would fall if no change in bias was observed, and the solid dark line is a linear best fit. Right panel: Difference in spatial gain between the pre-disparity and post-disparity blocks.
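
    As an illustration of how the bias and spatial gain plotted here can be extracted, the sketch below fits a line to localization error versus target azimuth for each block and compares intercepts (bias) and slopes (gain). The response data are simulated placeholders, not the study's measurements.

        # Sketch of bias/gain estimation from pre- and post-disparity localization errors (simulated data).
        import numpy as np

        targets = np.array([-24., -16., -8., 0., 8., 16., 24.])   # auditory target azimuths (deg)
        pre_error = 0.05 * targets + np.random.default_rng(1).normal(0, 0.5, targets.size)
        post_error = pre_error + 2.0                               # e.g., a 2 deg rightward shift

        # Slope = spatial gain of the error pattern, intercept = bias, for each block.
        gain_pre, bias_pre = np.polyfit(targets, pre_error, 1)
        gain_post, bias_post = np.polyfit(targets, post_error, 1)
        print("change in bias (deg):", round(bias_post - bias_pre, 2))
        print("change in gain:", round(gain_post - gain_pre, 3))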