
    Color Display System for Connected Speech to be Used for the Hearing Impaired

    A color display system for the hearing impaired which converts connected speech signals into color pictures on a TV screen has been developed. This paper describes the principle, function, and performance of the system, and the visual images of the patterns it displays. The system consists of a real-time formant tracker, a pitch detector, a memory system, a color coder, etc. In this system, the lowest three formant frequencies are extracted from the voiced portions of connected speech by the formant tracker and converted to the three primary color signals. These three primary color signals, as a time pattern, are represented as a spatial color pattern on the TV screen using the memory system and the color coder. In unvoiced portions, colorless, dappled patterns appear. The reproduced pattern is not only attractive but also easy to understand intuitively. In particular, visual experiments show that the simultaneous contrast effect of colors produced by the spatial representation visually compensates for the coarticulation effect on connected vowels.
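
    A minimal, illustrative Python sketch of the formant-to-color mapping described above: the lowest three formant frequencies of a voiced frame are estimated by LPC and scaled to R, G, and B values. The frame handling, LPC order, and normalization ranges are assumptions for illustration, not the paper's actual formant tracker or color coder.

    import numpy as np
    import librosa

    def formants_to_rgb(frame, sr, lpc_order=10):
        """Map the lowest three formants of one voiced frame to an (R, G, B) triple."""
        a = librosa.lpc(frame.astype(float), order=lpc_order)
        roots = np.roots(a)
        roots = roots[np.imag(roots) > 0]              # keep one root per conjugate pair
        freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
        f1, f2, f3 = freqs[:3]                         # crude formant candidates; a real
                                                       # tracker would also filter by bandwidth
        def scale(f, lo, hi):                          # assumed nominal formant ranges in Hz
            return int(np.clip((f - lo) / (hi - lo), 0, 1) * 255)
        return scale(f1, 200, 1000), scale(f2, 600, 3000), scale(f3, 1500, 4000)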

    Whole Word Phonetic Displays for Speech Articulation Training

    The main objective of this dissertation is to investigate and develop speech recognition technologies for speech training for people with hearing impairments. In the course of this work, a computer-aided speech training system for articulation training was also designed and implemented. The speech training system places emphasis on displays to improve children's pronunciation of isolated Consonant-Vowel-Consonant (CVC) words, with displays at both the phonetic level and the whole-word level. This dissertation presents two hybrid methods for combining Hidden Markov Models (HMMs) and Neural Networks (NNs) for speech recognition. The first method uses NN outputs as posterior probability estimators for HMMs. The second method uses NNs to transform the original speech features into normalized features with reduced correlation. Based on experimental testing, both hybrid methods give higher accuracy than standard HMM methods, and the second method, using the NN to create normalized features, outperforms the first in terms of accuracy. Several graphical displays were developed to provide real-time visual feedback to users, to help them improve and correct their pronunciations.
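
    A small illustrative Python sketch (not from the dissertation) of the first hybrid method mentioned above: neural-network posteriors P(state | frame) are divided by state priors to obtain scaled likelihoods, which then replace the usual emission probabilities in Viterbi decoding. Array shapes and names are assumptions.

    import numpy as np

    def scaled_log_likelihoods(nn_posteriors, state_priors, eps=1e-10):
        """Convert NN posteriors (T x S) into scaled log-likelihoods for an HMM."""
        return np.log(nn_posteriors + eps) - np.log(state_priors + eps)

    def viterbi(log_emis, log_trans, log_init):
        """Standard Viterbi decoding over the scaled log-likelihoods (T x S)."""
        T, S = log_emis.shape
        delta = log_init + log_emis[0]
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_trans       # score of each (previous, current) pair
            back[t] = np.argmax(scores, axis=0)
            delta = scores[back[t], np.arange(S)] + log_emis[t]
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]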

    The Electronic interpreter for the deaf

    None provided

    Communication devices for the hearing impaired

    None provided

    Bridge the Gap Between Deaf and Normal

    Communication is the exchange of ideas between individuals. Humans communicate verbally, in writing, or visually. In everyday life, most people communicate verbally, because it is the easiest and most efficient method. Although hearing people communicate verbally, deaf people cannot exchange their thoughts in the same manner; instead, they use British Sign Language to communicate with each other. However, it is difficult for deaf people to communicate with hearing people, and they also find it challenging to carry out many of the activities that hearing people perform in their daily lives. This paper therefore proposes a system that not only enables deaf people to communicate with hearing people but also supports their daily activities. The proposed system can convert text in a picture or video into British Sign Language, and a video in British Sign Language into standard text. In addition, hearing people can use it to learn British Sign Language.
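
    A highly simplified Python sketch of the text-to-sign direction described above: text is read from an image with OCR and each recognized word is looked up in a dictionary of pre-recorded British Sign Language clips. The OCR step, clip file names, and lookup table are illustrative assumptions, not the paper's implementation.

    from PIL import Image
    import pytesseract

    SIGN_CLIPS = {"hello": "signs/hello.mp4", "thank": "signs/thank.mp4", "you": "signs/you.mp4"}

    def image_text_to_sign_playlist(image_path):
        """Return the list of BSL clip files to play for the text found in an image."""
        text = pytesseract.image_to_string(Image.open(image_path))
        words = [w.strip(".,!?").lower() for w in text.split()]
        # Words without a dictionary entry would need fingerspelling; here they are skipped.
        return [SIGN_CLIPS[w] for w in words if w in SIGN_CLIPS]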

    Anomalous morphology in left hemisphere motor and premotor cortex of children who stutter

    Stuttering is a neurodevelopmental disorder that affects the smooth flow of speech production. Stuttering onset occurs during a dynamic period of development when children first start learning to formulate sentences. Although most children grow out of stuttering naturally, ∼1% of all children develop persistent stuttering that can lead to significant psychosocial consequences throughout one's life. To date, few studies have examined the neural bases of stuttering in children who stutter, and even fewer have examined the basis for natural recovery versus persistence of stuttering. Here we report the first study to conduct surface-based analysis of brain morphometric measures in children who stutter. We used FreeSurfer to extract cortical size and shape measures from structural MRI scans collected during the initial year of a longitudinal study involving 70 children (36 stuttering, 34 controls) in the 3–10-year age range. The stuttering group was further divided into two groups, persistent and recovered, based on later longitudinal visits that allowed determination of their eventual clinical outcome. A region-of-interest analysis focused on the left hemisphere speech network and a whole-brain exploratory analysis were conducted to examine group differences and group × age interaction effects. We found that the persistent group could be differentiated from the control and recovered groups by reduced cortical thickness in left motor and lateral premotor cortical regions. The recovered group showed an age-related decrease in local gyrification in the left medial premotor cortex (supplementary motor area and pre-supplementary motor area). These results provide strong evidence of a primary deficit in the left hemisphere speech network, specifically involving lateral premotor cortex and primary motor cortex, in persistent developmental stuttering. Results further point to a possible compensatory mechanism involving left medial premotor cortex in those who recover from childhood stuttering. This study was supported by Award Numbers R01DC011277 (SC) and R01DC007683 (FG) from the National Institute on Deafness and other Communication Disorders (NIDCD). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIDCD or the National Institutes of Health. Accepted manuscript.
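
    An illustrative Python sketch (not the authors' pipeline) of a group comparison on FreeSurfer output: mean cortical thickness for a left-hemisphere region is read from each subject's lh.aparc.stats file, and the persistent and control groups are compared with a t-test. Paths, the region name, and the group lists are assumptions for illustration.

    import os
    from scipy import stats

    def lh_roi_thickness(subjects_dir, subject, roi="precentral"):
        """Mean cortical thickness (mm) of a left-hemisphere Desikan region for one subject."""
        with open(os.path.join(subjects_dir, subject, "stats", "lh.aparc.stats")) as f:
            for line in f:
                if not line.startswith("#") and line.split()[0] == roi:
                    return float(line.split()[4])      # ThickAvg column
        raise ValueError(f"region {roi} not found for {subject}")

    def compare_groups(subjects_dir, persistent, controls, roi="precentral"):
        """Welch's t-test on mean ROI thickness between two subject groups."""
        a = [lh_roi_thickness(subjects_dir, s, roi) for s in persistent]
        b = [lh_roi_thickness(subjects_dir, s, roi) for s in controls]
        return stats.ttest_ind(a, b, equal_var=False)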

    A Vowel Analysis of the Northwestern University-Children's Perception of Speech Evaluation Tool

    In this analysis of the Northwestern University – Children's Perception of Speech (NU-CHIPS) speech perception evaluation tool, the goal was to determine whether the foil words and the target word were phonemically balanced across each page of test Book A, as it corresponds to the target words presented in Test Form 1 and Test Form 2 independently. Based on vowel sounds alone, variation exists among the vowels that appear on a test page for the majority of pages. The corresponding formant frequencies, at all three resonance levels for both the average adult male speaker and the average adult female speaker, revealed that the target word could be easily distinguished from the foil words on the basis of percent differences calculated between the formants of the target vowel and the foil vowels. For children with hearing impairments, especially those with limited or no access to the high frequencies, the NU-CHIPS evaluation tool may therefore not be the best indicator of a child's speech perception ability due to significant vowel variations.
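
    A short Python sketch of the percent-difference comparison described above: for each of the first three formants, the relative difference between the target vowel and each foil vowel is computed. The example formant values are illustrative textbook-style numbers, not the study's measurements.

    def percent_difference(a, b):
        """Percent difference between two formant frequencies (Hz)."""
        return abs(a - b) / ((a + b) / 2) * 100

    def target_vs_foils(target_formants, foil_formants):
        """Per-formant (F1-F3) percent differences between the target and each foil word."""
        return {foil: [round(percent_difference(t, f), 1)
                       for t, f in zip(target_formants, formants)]
                for foil, formants in foil_formants.items()}

    # Example: a target /i/ compared with foils containing /u/ and /a/ (rough adult-male values, Hz).
    print(target_vs_foils([270, 2290, 3010],
                          {"foil_u": [300, 870, 2240], "foil_a": [730, 1090, 2440]}))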

    Hand Gesture Recognition System Using Histogram and Neural Network

    This paper considers the problems caused by the distance between the hand and the webcam, and the corresponding image noise, in hand gesture recognition for human-computer interaction (HCI) using a webcam. A survey of recent hand gesture recognition systems is presented, along with background information and the key issues and major challenges of such systems. Histogram and neural network approaches are considered for hand detection. The paper also reviews different hand gesture approaches, algorithms, prototype models, technologies, and their applications. Existing approaches can be broadly divided into data-glove based, computer vision based, and drawing gesture approaches. Hand gesture is a method of non-verbal communication for human beings; using gesture-based applications, humans can interact with computers efficiently without any input devices. DOI: 10.17762/ijritcc2321-8169.160413
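
    A minimal Python/OpenCV sketch of a histogram-based hand detection stage such as the one discussed above: an HSV histogram of a sample hand patch is back-projected onto each webcam frame and the largest resulting contour is taken as the hand. Bin counts, thresholds, and the sampling of the hand patch are illustrative assumptions rather than the paper's settings.

    import cv2
    import numpy as np

    def build_skin_histogram(hand_patch_bgr):
        """Build a hue-saturation histogram from a sample patch of the user's hand."""
        hsv = cv2.cvtColor(hand_patch_bgr, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        return hist

    def detect_hand(frame_bgr, skin_hist):
        """Back-project the skin histogram and return the largest hand-like contour, if any."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0, 1], skin_hist, [0, 180, 0, 256], 1)
        _, mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea) if contours else None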