11,329 research outputs found

    Efficacy in noise of the Starkey Surflink Mobile 2 technology in directional versus omnidirectional microphone mode with experienced adult hearing aid users

    The Starkey SurfLink Mobile 2 is a remote microphone accessory. Starkey claims that by placing the SurfLink’s internal microphone in the directional microphone setting, the participant will hear better in noise than in the omnidirectional setting. This study aims to test this claim about the device.

    Comparison of a Head-Mounted Display and a Curved Screen in a Multi-Talker Audiovisual Listening Task

    Virtual audiovisual technology has matured and its use in research is widely considered. However, the technology has yet to be established for speech- and audio-related perception research. This study examined the effects of different audiovisual conditions on head yaw and gaze direction when listening to multi-talker conversations. Two immersive displays were tested, a curved screen (CS) and a head-mounted display (HMD), combined with three visual conditions (audio-only, virtual characters, and video recordings). Three groups of participants were tested: seventeen young normal-hearing, eleven older normal-hearing, and ten older hearing-impaired with hearing aids. Results showed that, when visual cues were not present, the participants tended to look ahead. When visual information was available, they looked at the target speaker. Significant differences between displays and visual conditions were found, indicating that different audiovisual setups might lead to slightly different head yaw and gaze direction. No significant differences were found between groups. An open interview showed that the CS was preferred over the HMD and that the video recordings were the favorite visual condition. Comment: 9 pages, 4 figures

    Effects of simultaneous speech and sign on infants’ attention to spoken language

    Get PDF
    Objectives: To examine the hypothesis that infants receiving a degraded auditory signal have more difficulty segmenting words from fluent speech if familiarized with the words presented in both speech and sign compared to familiarization with the words presented in speech only. Study Design: Experiment utilizing an infant-controlled visual preference procedure. Methods: Twenty 8.5-month-old normal-hearing infants completed testing. Infants were familiarized with repetitions of words in either the speech + sign (n = 10) or the speech only (n = 10) condition. Results: Infants were then presented with four six-sentence passages using an infant-controlled visual preference procedure. Every sentence in two of the passages contained the words presented in the familiarization phase, whereas none of the sentences in the other two passages contained familiar words. Infants exposed to the speech + sign condition looked at familiar word passages for 15.3 seconds and at nonfamiliar word passages for 15.6 seconds, t (9) = -0.130, p = .45. Infants exposed to the speech only condition looked at familiar word passages for 20.9 seconds and at nonfamiliar word passages for 15.9 seconds. This difference was statistically significant, t (9) = 2.076, p = .03. Conclusions: Infants' ability to segment words from degraded speech is negatively affected when these words are initially presented in simultaneous speech and sign. The current study suggests that a decreased ability to segment words from fluent speech may contribute towards the poorer performance of pediatric cochlear implant recipients in total communication settings on a wide range of spoken language outcome measures.