9,744 research outputs found

    Emotion Recognition by Using Bimodal Fusion

    In order to improve the single-mode emotion recognition rate, a bimodal fusion method based on speech and facial expression was proposed. Here, the emotion recognition rate is defined as the ratio of the number of images correctly recognized to the number of input images. "Single-mode emotion recognition" refers to recognition through either speech or facial expression alone. To increase the rate, we combine these two methods by bimodal fusion. Emotion detection through facial expression uses an adaptive sub-layer compensation (ASLC) based facial edge detection method, while emotion detection through speech uses the well-known SVM. Bimodal emotion detection is then obtained by probability analysis.
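The fusion step in the abstract above is not specified in detail. A minimal sketch of one common form of late fusion by probability analysis is a weighted product of the per-class probabilities produced by the two unimodal classifiers; the emotion classes, weights, and probability values below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: weighted-product late fusion of two unimodal classifiers.
# Class names, weights, and probabilities are illustrative only.

def fuse_bimodal(p_face, p_speech, w_face=0.5, w_speech=0.5):
    """Combine two per-class probability dicts into one fused distribution."""
    classes = p_face.keys() & p_speech.keys()
    scores = {c: (p_face[c] ** w_face) * (p_speech[c] ** w_speech)
              for c in classes}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}  # renormalize to sum 1

# Example: the facial classifier is unsure, the speech classifier favours "happy"
p_face = {"happy": 0.4, "sad": 0.35, "angry": 0.25}
p_speech = {"happy": 0.7, "sad": 0.2, "angry": 0.1}
fused = fuse_bimodal(p_face, p_speech)
best = max(fused, key=fused.get)  # fused decision: "happy"
```

With equal weights this reduces to the geometric mean of the two distributions; unequal weights would let the stronger modality dominate.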

    Mandarin speech perception in combined electric and acoustic stimulation.

    For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than either device alone. Because of coarse spectral resolution, CIs do not provide the fundamental frequency (F0) information that contributes to understanding of tonal languages such as Mandarin Chinese. The HA can provide good representation of F0 and, depending on the range of aided acoustic hearing, first and second formant (F1 and F2) information. In this study, Mandarin tone, vowel, and consonant recognition in quiet and noise was measured in 12 adult Mandarin-speaking bimodal listeners with the CI-only and with the CI+HA. Tone recognition was significantly better with the CI+HA in noise, but not in quiet. Vowel recognition was significantly better with the CI+HA in quiet, but not in noise. There was no significant difference in consonant recognition between the CI-only and the CI+HA in quiet or in noise. There was a wide range in bimodal benefit, with improvements often greater than 20 percentage points in some tests and conditions. The bimodal benefit was compared to CI subjects' HA-aided pure-tone average (PTA) thresholds between 250 and 2000 Hz; subjects were divided into two groups: "better" PTA (<50 dB HL) or "poorer" PTA (>50 dB HL). The bimodal benefit differed significantly between groups only for consonant recognition. The bimodal benefit for tone recognition in quiet was significantly correlated with CI experience, suggesting that bimodal CI users learn to better combine low-frequency spectro-temporal information from acoustic hearing with temporal envelope information from electric hearing. Given the small number of subjects in this study (n = 12), further research with Chinese bimodal listeners may provide more information regarding the contribution of acoustic and electric hearing to tonal language perception.
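The better/poorer grouping above is a mechanical rule: average the HA-aided thresholds across the measured frequencies and compare against the 50 dB HL cut-off. A small sketch follows; the frequencies match the 250-2000 Hz range in the abstract, but the subject's threshold values are made up for illustration.

```python
# Hedged sketch: HA-aided pure-tone average (PTA) and the better/poorer
# split at 50 dB HL described in the study. Subject data are invented.

def pta(thresholds_db_hl):
    """Mean threshold (dB HL) across the audiometric frequencies given."""
    return sum(thresholds_db_hl.values()) / len(thresholds_db_hl)

def pta_group(thresholds_db_hl, cutoff_db=50.0):
    """Return 'better' when the aided PTA is below the cut-off, else 'poorer'."""
    return "better" if pta(thresholds_db_hl) < cutoff_db else "poorer"

# Hypothetical subject: aided thresholds (dB HL) at 250, 500, 1000, 2000 Hz
subject = {250: 35, 500: 40, 1000: 45, 2000: 60}
group = pta_group(subject)  # PTA = 45 dB HL, so "better"
```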

    Bilaterally Combined Electric and Acoustic Hearing in Mandarin-Speaking Listeners: The Population With Poor Residual Hearing

    The hearing loss criterion for cochlear implant candidacy in mainland China is extremely stringent (bilateral severe to profound hearing loss), resulting in few patients with substantial residual hearing in the nonimplanted ear. The main objective of the current study was to examine the benefit of bimodal hearing in typical Mandarin-speaking implant users, who have poorer residual hearing in the nonimplanted ear than the subjects of English-language studies. Seventeen Mandarin-speaking bimodal users with pure-tone averages of 80 dB HL participated in the study. Sentence recognition in quiet and in noise, as well as tone and word recognition in quiet, was measured in monaural and bilateral conditions. There was no significant bimodal effect for word and sentence recognition in quiet. Small bimodal effects were observed for sentence recognition in noise (6%) and tone recognition (4%). The magnitude of both effects was correlated with unaided thresholds at frequencies near voice fundamental frequencies (F0s). A weak correlation between the bimodal effect for word recognition and unaided thresholds at frequencies higher than F0s was identified. These results were consistent with previous findings that showed more robust bimodal benefits for speech recognition tasks that require higher spectral resolution than speech recognition in quiet. The significant but small F0-related bimodal benefit was also consistent with the limited acoustic hearing in the nonimplanted ear of the current subject sample, who are representative of the bimodal users in mainland China. These results advocate for a more relaxed implant candidacy criterion to be used in mainland China.

    Speech recognition in reverberation in bimodal cochlear implant users

    The purpose of the present study was to evaluate the effects of bimodal (implant plus hearing aid) listening on speech recognition in four different environmental conditions. Results indicate that there was little difference between the cochlear-implant-only and bimodal conditions.

    Evaluation of a wireless remote microphone in bimodal cochlear implant recipients

    Objective: To evaluate the benefit of a wireless remote microphone (MM) for speech recognition in noise in bimodal adult cochlear implant (CI) users, both in a test setting and in daily life. Design: This prospective study measured speech reception thresholds (SRTs) in noise in a repeated measures design with factors including bimodal hearing and MM use. The participants also had a 3-week trial period at home with the MM. Study sample: Thirteen post-lingually deafened adult bimodal CI users. Results: A significant SRT improvement of 5.4 dB was found between use of the CI with the MM and use of the CI without the MM. Pairing the MM with the hearing aid (HA) as well yielded a further SRT improvement of 2.2 dB compared with the MM paired to the CI alone. In daily life, participants reported better speech perception in various challenging listening situations when using the MM in the bimodal condition. Conclusion: There is a clear advantage of bimodal listening (CI and HA) compared with CI alone when advanced wireless remote microphone techniques are applied to improve speech understanding in adult bimodal CI users.

    Music-aided affective interaction between human and service robot

    This study proposes a music-aided framework for affective interaction of service robots with humans. The framework consists of three systems, respectively, for perception, memory, and expression, on the basis of the human brain mechanism. We propose a novel approach to identify human emotions in the perception system. Conventional approaches use speech and facial expressions as representative bimodal indicators for emotion recognition; our approach uses the mood of music as a supplementary indicator to more correctly determine emotions along with speech and facial expressions. For multimodal emotion recognition, we propose an effective decision criterion using records of bimodal recognition results relevant to the musical mood. The memory and expression systems also utilize musical data to provide natural and affective reactions to human emotions. For evaluation of our approach, we simulated the proposed human-robot interaction with a service robot, iRobiQ. Our perception system exhibited superior performance over the conventional approach, and most human participants noted favorable reactions toward the music-aided affective interaction.

    A directional remote-microphone for bimodal cochlear implant recipients

    To evaluate whether speech recognition in noise differs according to whether a wireless remote microphone is connected to just the cochlear implant (CI) or to both the CI and the hearing aid (HA) in bimodal CI users. The second aim was to evaluate the additional benefit of the directional microphone mode compared with the omnidirectional microphone mode of the wireless microphone. This prospective study measured speech recognition thresholds (SRTs) in babble noise in a within-subjects repeated measures design for different listening conditions, in eighteen postlingually deafened adult bimodal CI users. No difference in speech recognition in noise in the bimodal listening condition was found between the wireless microphone connected to the CI only and to both the CI and the HA. An improvement of 4.1 dB was found for switching from the omnidirectional microphone mode to the directional mode in the CI-only condition. The use of a wireless microphone improved speech recognition in noise for bimodal CI users, and the directional microphone mode led to a substantial additional improvement in speech perception in noise for situations with one target signal.

    Bimodal Emotion Recognition using Speech and Physiological Changes

    With exponentially evolving technology it is no exaggeration to say that any interface fo

    Bilateral cochlear implantation or bimodal listening in the paediatric population : retrospective analysis of decisive criteria

    Introduction: In children with bilateral severe to profound hearing loss, bilateral hearing can be achieved by either bimodal stimulation (CIHA) or bilateral cochlear implantation (BICI). The aim of this study was to analyse the audiologic test protocol that is currently applied to make decisions regarding the bilateral hearing modality in the paediatric population. Methods: Pre- and postoperative audiologic test results of 21 CIHA, 19 sequential BICI and 12 simultaneous BICI children were examined retrospectively. Results: Deciding between simultaneous BICI and unilateral implantation was mainly based on the infant's preoperative auditory brainstem response thresholds. Evolution from CIHA to sequential BICI was mainly based on the audiometric test results in the contralateral (hearing aid) ear after unilateral cochlear implantation. Preoperative audiometric thresholds in the hearing aid ear were significantly better in CIHA versus sequential BICI children (p < 0.001 and p = 0.001 in the unaided and aided condition, respectively). Decisive values obtained in the hearing aid ear in favour of BICI were: an average hearing threshold at 0.5, 1, 2 and 4 kHz of at least 93 dB HL without, and at least 52 dB HL with, a hearing aid, together with a 40% aided speech recognition score and a 70% aided score on the phoneme discrimination subtest of the Auditory Speech Sounds Evaluation test battery. Conclusions: Although pure tone audiometry offers no information about bimodal benefit, it remains the most obvious audiometric evaluation in the decision process on the mode of bilateral stimulation in the paediatric population. A theoretical test protocol for adequate evaluation of bimodal benefit in the paediatric population is proposed.
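Read as a decision rule, the decisive values reported above can be sketched as a simple boolean check. Note that the abstract gives cut-offs but not their direction for the two aided scores; this sketch assumes poorer aided performance (scores at or below 40% and 70%) favours a second implant, which is an interpretation, not the clinic's stated protocol.

```python
# Hedged sketch of the reported decisive values as a boolean rule.
# The direction of the speech/phoneme score cut-offs is an assumption.

def favours_bici(unaided_avg_db, aided_avg_db,
                 aided_speech_pct, aided_phoneme_pct):
    """True when all four cut-offs from the abstract point towards BICI."""
    return (unaided_avg_db >= 93          # 0.5/1/2/4 kHz average, unaided
            and aided_avg_db >= 52        # same average, aided
            and aided_speech_pct <= 40    # aided speech recognition score
            and aided_phoneme_pct <= 70)  # aided phoneme discrimination score

# Hypothetical child whose hearing-aid ear meets every cut-off
candidate = favours_bici(95, 55, 35, 65)  # True under these assumptions
```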

    Cross-Frequency Integration for Consonant and Vowel Identification in Bimodal Hearing

    Purpose: Improved speech recognition in binaurally combined acoustic–electric stimulation (otherwise known as bimodal hearing) could arise when listeners integrate speech cues from the acoustic and electric hearing. The aims of this study were (a) to identify speech cues extracted in electric hearing and residual acoustic hearing in the low-frequency region and (b) to investigate cochlear implant (CI) users' ability to integrate speech cues across frequencies. Method: Normal-hearing (NH) and CI subjects participated in consonant and vowel identification tasks. Each subject was tested in 3 listening conditions: CI alone (vocoder speech for NH), hearing aid (HA) alone (low-pass filtered speech for NH), and both. Integration ability for each subject was evaluated using a model of optimal integration—the PreLabeling integration model (Braida, 1991). Results: Only a few CI listeners demonstrated bimodal benefit for phoneme identification in quiet. Speech cues extracted from the CI and the HA were highly redundant for consonants but were complementary for vowels. CI listeners also exhibited reduced integration ability for both consonant and vowel identification compared with their NH counterparts. Conclusion: These findings suggest that reduced bimodal benefits in CI listeners are due to insufficient complementary speech cues across ears, a decrease in integration ability, or both.
    Funding: National Organization for Hearing Research; National Institute on Deafness and Other Communication Disorders (U.S.) (Grant R03 DC009684-01); National Institute on Deafness and Other Communication Disorders (U.S.) (Grant R01 DC007152-02).
