
    The impact of spectrally asynchronous delay on the intelligibility of conversational speech

    Conversationally spoken speech is replete with rapidly changing and complex acoustic cues that listeners must hear, process, and map to meaning. For many hearing-impaired listeners, a hearing aid is necessary to make these spectral and temporal cues audible. For listeners with mild-to-moderate high-frequency sensorineural hearing loss, open-fit digital signal processing (DSP) hearing aids are the most common amplification option. Open-fit DSP hearing aids introduce a spectrally asynchronous delay into the acoustic signal: audible low-frequency information passes to the eardrum unimpeded, while the amplified high-frequency sound delivered by the aid arrives with a delayed onset relative to the natural pathway of sound. These spectrally asynchronous delays may disrupt the natural acoustic pattern of speech. The primary goal of this study was to measure the effect of spectrally asynchronous delay on the intelligibility of conversational speech for normal-hearing and hearing-impaired listeners. A group of normal-hearing listeners (n = 25) and a group of listeners with mild-to-moderate high-frequency sensorineural hearing loss (n = 25) participated in this study. The acoustic stimuli were 200 conversationally spoken recordings of the low-predictability sentences from the Revised Speech Perception in Noise test (R-SPIN). These 200 sentences were modified to control for audibility for the hearing-impaired group, and so that the acoustic energy above 2 kHz was delayed by 0 ms (control), 4 ms, 8 ms, or 32 ms relative to the low-frequency energy. The data were analyzed to determine the effect of each of the four delay conditions on the intelligibility of the final key word of each sentence. Normal-hearing listeners were minimally affected by the asynchronous delay, whereas the hearing-impaired listeners were deleteriously affected by increasing amounts of spectrally asynchronous delay.
Although the hearing-impaired listeners performed well overall in their perception of conversationally spoken speech in quiet, the intelligibility of conversationally spoken sentences decreased significantly when the delay was equal to or greater than 4 ms. Hearing aid manufacturers should therefore restrict the amount of delay introduced by DSP so that it does not distort the acoustic patterns of conversational speech.
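The stimulus manipulation described above can be sketched in a few lines of signal processing: split the signal at a crossover frequency and delay the high band relative to the low band before recombining. This is a minimal illustration, not the study's actual processing chain; the filter type, order, and crossover implementation are assumptions.

```python
# Sketch: imposing a spectrally asynchronous delay on a speech signal.
# Filter design (4th-order Butterworth, 2 kHz crossover) is an assumption;
# the study's exact filtering is not specified in the abstract.
import numpy as np
from scipy.signal import butter, sosfilt

def asynchronous_delay(signal, fs, delay_ms, crossover_hz=2000.0):
    """Delay energy above crossover_hz by delay_ms relative to the low band."""
    sos_lo = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, crossover_hz, btype="high", fs=fs, output="sos")
    low = sosfilt(sos_lo, signal)
    high = sosfilt(sos_hi, signal)
    # Shift the high band later in time by a whole number of samples.
    n = int(round(fs * delay_ms / 1000.0))
    delayed_high = np.concatenate([np.zeros(n), high])[: len(high)]
    return low + delayed_high
```

With `delay_ms=0` this reduces to a plain two-band split-and-sum (the control condition); the 4, 8, and 32 ms conditions simply increase the sample shift of the high band.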

    Pitch perception in musical chords for cochlear implant users

    Many people with severe or profound hearing loss are able to benefit from electronic hearing provided by a cochlear implant (CI); however, perception of music is often reported to be unsatisfactory. Due to sound processing restrictions and current spread, CI users do not always perceive accurate pitch information, which adversely affects their ability to perceive and enjoy music. This thesis examines the factors affecting pitch perception in musical contexts for CI recipients. A questionnaire study was carried out to pilot and validate a questionnaire about music listening experience and enjoyment for both pre- and post-lingually deafened CI users. Results of this study were generally more positive than previous questionnaire studies, especially from pre-lingually deafened CI users, but the majority of respondents were keen for an improvement to their music listening experience. CI users took part in a pilot study of the Chord Discrimination Test, identifying the “odd one out” of three chord stimuli in which the difference was one semitone. The individual notes of the chords were presented either simultaneously or sequentially and spanned one to three octaves. Results showed significantly higher discrimination scores for simultaneously presented chords, possibly due to auditory memory difficulties in the sequential task. In the main study phase, participants undertook the tests with stimuli comprising both pure tones and simulated piano tones, and chord differences ranging from one to three semitones. No significant difference between the two tone conditions was found, but performance was significantly better when the difference between the chords was three semitones. A change in the top note of the chord was easier to detect than a change in the middle note.
Peak performance occurred in the C5 octave range, which also correlated with scores on a consonant recognition test, suggesting a relationship between speech and music perception in this frequency region. Children took part in an abridged version of the Chord Discrimination Test. Children with normal hearing were able to identify a one-semitone difference between musical chords, while hearing-impaired children performed at chance. Some children were also able to accurately identify a half-semitone difference. NH children’s results showed an effect whereby performance fell when the notes of the chord remained within the C major scale, suggesting a potential for the Chord Discrimination Test to be used in assessments of sensitivity to musical scales. The Chord Discrimination Test was shown to be a versatile and adaptable tool with many potential applications in settings such as musical training, and in pitch perception assessments in both research and clinical settings.
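The "odd one out" stimulus design described above can be sketched as follows. This is an illustration of the general construction only; the actual note choices, tone synthesis, and timing used in the thesis are assumptions here.

```python
# Sketch of chord-discrimination stimuli: a standard chord and a deviant in
# which one note is shifted by a semitone, presented simultaneously or
# sequentially. Note durations and MIDI note choices are illustrative.
import numpy as np

def note_freq(midi):
    """Equal-tempered frequency of a MIDI note (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi - 69) / 12)

def tone(freq, dur, fs=16000):
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

def chord(midi_notes, dur=0.5, fs=16000, sequential=False):
    tones = [tone(note_freq(m), dur, fs) for m in midi_notes]
    if sequential:
        # Sequential presentation: one note after another.
        return np.concatenate(tones)
    # Simultaneous presentation: notes summed and scaled.
    return np.sum(tones, axis=0) / len(tones)

# Example: raise the top note of a C-major triad by one semitone.
standard = chord([60, 64, 67])   # C4 E4 G4, simultaneous
deviant = chord([60, 64, 68])    # top note shifted up one semitone
```

A half- or three-semitone deviant, as in the child and main-study conditions, only changes the shifted MIDI value.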

    On the applicability of models for outdoor sound (A)


    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and complements it in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, though it remains subject to human factors and other restrictions. AR also demands less time and effort in application development, because the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented in three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Ultrasonic splitting of oil-in-water emulsions


    Predicting room acoustical behavior with the ODEON computer model


    Comprehension in-situ: how multimodal information shapes language processing

    The human brain supports communication in dynamic face-to-face environments where spoken words are embedded in linguistic discourse and accompanied by multimodal cues such as prosody, gestures, and mouth movements. However, we have only limited knowledge of how these multimodal cues jointly modulate language comprehension. In a series of behavioural and EEG studies, we investigated the joint impact of these cues when processing naturalistic-style materials. First, we built a mouth informativeness corpus of English words to quantify the mouth informativeness of the large number of words used in the subsequent experiments. Then, across two EEG studies, we found and replicated that native English speakers use multimodal cues and that their interactions dynamically modulate the N400 amplitude elicited by words that are less predictable in the discourse context (indexed by per-word surprisal values). We then extended the findings to second-language comprehenders, finding that multimodal cues modulate L2 comprehension just as in L1, but to a lesser extent, although L2 comprehenders benefit more from meaningful gestures and mouth movements. Finally, in two behavioural experiments investigating whether multimodal cues jointly modulate the learning of new concepts, we found some evidence that the presence of iconic gestures improves memory, and that the effect may be larger when information is also presented with prosodic accentuation. Overall, these findings suggest that real-world comprehension uses all available cues and weights them dynamically. Multimodal cues should therefore not be neglected in language studies: investigating communication in naturalistic contexts containing more than one cue can provide new insight into our understanding of language comprehension in the real world.
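The per-word predictability index mentioned above is surprisal, −log₂ P(word | context). As a minimal sketch, here is surprisal under a toy add-one-smoothed bigram model; the actual language model used in the studies is not specified in the abstract and would typically be far more sophisticated.

```python
# Toy per-word surprisal: -log2 P(word | previous word) under an
# add-one-smoothed bigram model. Corpus and smoothing are illustrative.
import math
from collections import Counter

def bigram_surprisal(corpus_sentences):
    unigram = Counter()
    bigram = Counter()
    for sent in corpus_sentences:
        words = ["<s>"] + sent.split()
        unigram.update(words[:-1])
        bigram.update(zip(words[:-1], words[1:]))

    def surprisal(prev, word):
        # Add-one smoothing over the observed context vocabulary.
        vocab = len(unigram)
        p = (bigram[(prev, word)] + 1) / (unigram[prev] + vocab)
        return -math.log2(p)

    return surprisal
```

Words that are frequent continuations of their context receive low surprisal; rare or unseen continuations receive high surprisal, and it is the latter that elicit larger N400 amplitudes.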