101 research outputs found

    A silent speech system based on permanent magnet articulography and direct synthesis

    In this paper we present a silent speech interface (SSI) system aimed at restoring speech communication for individuals who have lost their voice due to laryngectomy or diseases affecting the vocal folds. In the proposed system, articulatory data captured from the lips and tongue using permanent magnet articulography (PMA) are converted into audible speech using a speaker-dependent transformation learned from simultaneous recordings of PMA and audio signals acquired before laryngectomy. The transformation is represented using a mixture of factor analysers, a generative model that captures non-linear behaviour while performing dimensionality reduction. The learned transformation is then deployed during normal usage of the SSI to restore the acoustic speech signal associated with the captured PMA data. The proposed system is evaluated using objective quality measures and listening tests on two databases containing PMA and audio recordings for normal speakers. Results show that it is possible to reconstruct speech from articulator movements captured by an unobtrusive technique without an intermediate recognition step. The SSI is capable of producing speech of sufficient intelligibility and naturalness that the speaker is clearly identifiable, but problems remain in scaling up the process to function consistently for phonetically rich vocabularies.
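
    A rough sketch of the direct PMA-to-speech mapping idea is given below. The paper's transformation is a mixture of factor analysers; here a full-covariance Gaussian mixture fitted on stacked [PMA | acoustic] frames is used as a simpler stand-in, and the feature dimensions, component count, and function names (fit_joint_gmm, pma_to_acoustic) are illustrative assumptions. Vocoder resynthesis of a waveform from the predicted acoustic frames is not shown.

```python
# Hedged sketch: frame-wise mapping from articulatory (PMA) features to
# acoustic features via joint-density Gaussian mixture regression. The
# paper's mixture of factor analysers is replaced here by a full-covariance
# GMM purely for illustration; dimensions and names are assumptions.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture


def fit_joint_gmm(pma, acoustic, n_components=16, seed=0):
    """Fit a GMM on stacked [PMA | acoustic] frames, both shaped (T, D)."""
    joint = np.hstack([pma, acoustic])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(joint)
    return gmm


def pma_to_acoustic(gmm, pma, d_x):
    """Return E[acoustic | PMA] for each frame; d_x is the PMA dimension."""
    d_y = gmm.means_.shape[1] - d_x
    # Component responsibilities under the marginal density of the PMA part.
    resp = np.zeros((pma.shape[0], gmm.n_components))
    for k in range(gmm.n_components):
        mu_x = gmm.means_[k, :d_x]
        s_xx = gmm.covariances_[k][:d_x, :d_x]
        resp[:, k] = gmm.weights_[k] * multivariate_normal.pdf(pma, mu_x, s_xx)
    resp /= resp.sum(axis=1, keepdims=True)

    out = np.zeros((pma.shape[0], d_y))
    for k in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[k, :d_x], gmm.means_[k, d_x:]
        cov = gmm.covariances_[k]
        s_xx, s_yx = cov[:d_x, :d_x], cov[d_x:, :d_x]
        # Per-component conditional mean: mu_y + S_yx S_xx^{-1} (x - mu_x).
        cond = mu_y + (pma - mu_x) @ np.linalg.solve(s_xx, s_yx.T)
        out += resp[:, [k]] * cond
    return out
```

    The mapping returns the minimum mean-square-error estimate of the acoustic frame given the PMA frame, which is the usual way a joint mixture model is turned into a frame-by-frame regression.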

    Articulatory feature encoding and sensorimotor training for tactually supplemented speech reception by the hearing-impaired

    Thesis (Ph.D.), Harvard-MIT Division of Health Sciences and Technology, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 150-159).
    This thesis builds on previous efforts to develop tactile speech-reception aids for the hearing-impaired. Whereas conventional hearing aids mainly amplify acoustic signals, tactile speech aids convert acoustic information into a form perceptible via the sense of touch. By facilitating visual speechreading and providing sensory feedback for vocal control, tactile speech aids may substantially enhance speech communication abilities in the absence of useful hearing. Research for this thesis consisted of several lines of work. First, tactual detection and temporal order discrimination by congenitally deaf adults were examined in order to assess the practicability of encoding acoustic speech information as temporal relationships among tactual stimuli. Temporal resolution among most congenitally deaf subjects was deemed adequate for reception of tactually-encoded speech cues. Tactual offset-order discrimination thresholds substantially exceeded those measured for onset-order, underscoring fundamental differences between stimulus masking dynamics in the somatosensory and auditory systems. Next, a tactual speech transduction scheme was designed with the aim of extending the amount of articulatory information conveyed by an earlier vocoder-type tactile speech display strategy. The novel transduction scheme derives relative amplitude cues from three frequency-filtered speech bands, preserving the cross-channel timing information required for consonant voicing discriminations, while retaining low-frequency modulations that distinguish voiced and aperiodic signal components. Additionally, a sensorimotor training approach ("directed babbling") was developed with the goal of facilitating tactile speech acquisition through frequent vocal imitation of visuo-tactile speech stimuli and attention to tactual feedback from one's own vocalizations. A final study evaluated the utility of the tactile speech display in resolving ambiguities among visually presented consonants, following either standard or enhanced sensorimotor training. Profoundly deaf and normal-hearing participants trained to exploit tactually-presented acoustic information in conjunction with visual speechreading to facilitate consonant identification in the absence of semantic context. Results indicate that the present transduction scheme can enhance reception of consonant manner and voicing information and facilitate identification of syllable-initial and syllable-final consonants. The sensorimotor training strategy proved selectively advantageous for subjects demonstrating more gradual tactual speech acquisition. Simple, low-cost tactile devices may prove suitable for widespread distribution in developing countries, where hearing aids and cochlear implants remain unaffordable for most severely and profoundly deaf individuals. They have the potential to enhance verbal communication with minimal need for clinical intervention. By Theodore M. Moallem, Ph.D.
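
    The three-band transduction scheme lends itself to a compact signal-processing illustration: band-pass filter the speech, rectify, and smooth to obtain one amplitude envelope per band. The sketch below follows that recipe, but the sample rate, band edges, envelope cutoff, and function name band_envelopes are assumptions for illustration rather than the thesis's actual parameters.

```python
# Hedged sketch of a vocoder-style three-band envelope extraction in the
# spirit of the transduction scheme above. Band edges, envelope cutoff, and
# sample rate are illustrative assumptions, not the thesis's parameters.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000                                       # assumed sample rate (Hz)
BANDS = [(100, 800), (800, 2500), (2500, 6000)]  # assumed band edges (Hz)
ENV_CUTOFF = 50           # keep low-frequency amplitude modulations (Hz)


def band_envelopes(speech, fs=FS, bands=BANDS, env_cutoff=ENV_CUTOFF):
    """Return one amplitude envelope per band (each could drive one tactor)."""
    env_lp = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    envelopes = []
    for lo, hi in bands:
        bp = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(bp, speech)            # isolate the band
        env = sosfiltfilt(env_lp, np.abs(band))   # rectify, then smooth
        envelopes.append(np.clip(env, 0.0, None))
    return np.stack(envelopes)                    # shape (3, n_samples)
```

    Because all three envelopes are derived from the same waveform with zero-phase filters, their relative timing is preserved in this sketch, which is the property the scheme relies on for cross-channel voicing cues.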

    Multi-Level Audio-Visual Interactions in Speech and Language Perception

    That we perceive our environment as a unified scene rather than as individual streams of auditory, visual, and other sensory information has recently provided motivation to move past the long-held tradition of studying these systems separately. Although they are each unique in their transduction organs, neural pathways, and primary cortical areas, the senses are ultimately merged in a meaningful way which allows us to navigate the multisensory world. Investigating how the senses are merged has become an increasingly wide field of research in recent decades, with the introduction and increased availability of neuroimaging techniques. Areas of study range from multisensory object perception to cross-modal attention, multisensory interactions, and integration. This thesis focuses on audio-visual speech perception, with special focus on the facilitatory effects of visual information on auditory processing. When visual information is concordant with auditory information, it provides an advantage that is measurable in behavioral response times and evoked auditory fields (Chapter 3) and in increased entrainment to multisensory periodic stimuli reflected by steady-state responses (Chapter 4). When the audio-visual information is incongruent, the auditory and visual signals can often, but not always, combine to form a third, physically absent percept (known as the McGurk effect). This effect is investigated (Chapter 5) using real-word stimuli. McGurk percepts were not robustly elicited for a majority of stimulus types, but patterns of responses suggest that the physical and lexical properties of the auditory and visual stimuli may affect the likelihood of obtaining the illusion. Together, these experiments add to the growing body of knowledge suggesting that audio-visual interactions occur at multiple stages of processing.

    Data and simulations about audiovisual asynchrony and predictability in speech perception

    Since a paper by Chandrasekaran et al. (2009), an increasing number of neuroscience papers capitalize on the assumption that visual speech is typically 150 ms ahead of auditory speech. In fact, the estimation of audiovisual asynchrony by Chandrasekaran et al. is valid only in very specific cases: for isolated CV syllables or at the beginning of a speech utterance. We present simple audiovisual data on plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise when syllables are chained in sequences, as they typically are in most parts of a natural speech utterance. We then discuss how the natural coordination between sound and image (combining cases of lead and lag of the visual input) is reflected in the so-called temporal integration window for audiovisual speech perception (van Wassenhove et al., 2007). We conclude with a computational proposal about predictive coding in such sequences, showing that the visual input may actually provide and enhance predictions even when it is quite synchronous with the auditory input.
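
    As a small illustration of how such audiovisual lead or lag can be quantified, the sketch below cross-correlates a lip-aperture track with the audio amplitude envelope and reports the best-aligning lag in milliseconds. This is not the measurement procedure of the paper; the input signals, the common sampling rate, and the function name av_lag_ms are assumptions.

```python
# Hedged sketch: estimate audiovisual lead/lag by cross-correlating a lip
# aperture track with the audio amplitude envelope (both resampled to the
# same rate). Illustrative only; not the paper's measurement procedure.
import numpy as np
from scipy.signal import correlate, correlation_lags


def av_lag_ms(lip_aperture, audio_envelope, rate_hz):
    """Lag (ms) at which the two signals align best; with this convention a
    negative value means the lip (visual) signal leads the audio."""
    lip = (lip_aperture - lip_aperture.mean()) / lip_aperture.std()
    env = (audio_envelope - audio_envelope.mean()) / audio_envelope.std()
    xcorr = correlate(lip, env, mode="full")
    lags = correlation_lags(len(lip), len(env), mode="full")
    return 1000.0 * lags[np.argmax(xcorr)] / rate_hz
```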

    A Framework for Speechreading Acquisition Tools

    At least 360 million people worldwide have disabling hearing loss that frequently causes difficulties in day-to-day conversations. Hearing aids often fail to offer enough benefit and have low adoption rates. However, people with hearing loss find that speechreading can improve their understanding during conversation. Speechreading (often called lipreading) refers to using visual information about the movements of a speaker's lips, teeth, and tongue to help understand what they are saying. Speechreading is commonly used by people with all severities of hearing loss to understand speech, and people with typical hearing also speechread (albeit subconsciously) to help them understand others. However, speechreading is a skill that takes considerable practice to acquire. Publicly funded speechreading classes are sometimes provided and have been shown to improve speechreading acquisition. However, classes are only provided in a handful of countries around the world, and students can only practice effectively when attending class. Existing tools have been designed to help improve speechreading acquisition, but they are often not effective because they have not been designed within the context of contemporary speechreading lessons or practice. To address this, in this thesis I present a novel speechreading acquisition framework that can be used to design Speechreading Acquisition Tools (SATs): a new type of technology to improve speechreading acquisition. Comment: PhD Thesis, supervised by Dr David R. Flatl

    Deaf Children’s Acquisition of Speech Skills: A Psycholinguistic Perspective through Intervention

    This study set out to explore the nature of deaf children’s lexical representations and how these may be updated as new speech skills are acquired, through an investigation of speech processing skills and responses to intervention in three deaf children. A computer-based psycholinguistic profiling procedure was developed to examine the relationships between input skills, lexical representations and output skills for a range of consonant contrasts, with the expectation that input skills were important in determining output skills. Using this procedure, consonants or consonant clusters that were not accurately realised by the participants were classified according to responses to real-word and nonword input testing in audio-visual and audio-alone conditions. By comparing how the differently classified consonants responded to intervention, the role of input skills in the updating of lexical representations was found to be less important than other sources of information, including phonological awareness and knowledge of orthography and grapheme-phoneme links. There was some evidence that articulatory knowledge, acquired through phonetic instruction and tactile feedback, was enriching segments of input representations so that the corresponding segments became easier to detect in input tasks. This questions the assumption that output representations depend on input representations for their specification. Further intervention involving repeated practice of new motor patterns and use of feedback from the therapist to encourage motor planning facilitated generalisation of the acquired speech skills to a wide range of speaking tasks. There was evidence that one of the participants was accessing the orthography of what he was about to say in order to generalise his speech skills and that he could eventually do this, even when conversing at an acceptable rate of speech. The implications for combining the teaching of phonics with speech production training for deaf children are discussed.

    Visual Speech and Speaker Recognition

    This thesis presents a learning-based approach to speech recognition and person recognition from image sequences. An appearance-based model of the articulators is learned from example images and is used to locate, track, and recover visual speech features. A major difficulty in model-based approaches is to develop a scheme which is general enough to account for the large appearance variability of objects but which does not lack specificity. The method described here decomposes the lip shape and the intensities in the mouth region into weighted sums of basis shapes and basis intensities, respectively, using a Karhunen-Loève expansion. The intensities deform with the shape model to provide shape-independent intensity information. This information is used in image search, which is based on a similarity measure between the model and the image. Visual speech features can be recovered from the tracking results and represent shape and intensity information. A speechreading (lip-reading) system is presented which models these features by Gaussian distributions and their temporal dependencies by hidden Markov models. The models are trained using the EM algorithm, and speech recognition is performed based on maximum posterior probability classification. It is shown that, besides speech information, the recovered model parameters also contain person-dependent information, and a novel method for person recognition is presented which is based on these features. Talking persons are represented by spatio-temporal models which describe the appearance of the articulators and their temporal changes during speech production. Two different topologies for speaker models are described: Gaussian mixture models and hidden Markov models. The proposed methods were evaluated for lip localisation, lip tracking, speech recognition, and speaker recognition on an isolated digit database of 12 subjects, and on a continuous digit database of 37 subjects. The techniques were found to achieve good performance for all tasks listed above. For an isolated digit recognition task, the speechreading system outperformed previously reported systems and performed slightly better than untrained human speechreaders.
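
    The Karhunen-Loève expansion of lip shapes amounts to a principal component analysis of landmark vectors, as in the hedged sketch below. The landmark layout, the number of basis shapes, and the function names are illustrative assumptions; lip tracking, the intensity model, and the hidden Markov model stages are not shown.

```python
# Hedged sketch of the Karhunen-Loève (PCA) shape decomposition: each lip
# contour is a mean shape plus a weighted sum of basis shapes. Landmark
# layout and number of components are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA


def fit_shape_basis(shapes, n_basis=8):
    """shapes: (n_frames, 2 * n_landmarks) array of flattened lip contours."""
    pca = PCA(n_components=n_basis)
    pca.fit(shapes)
    return pca


def encode_decode(pca, shape):
    """Project one lip shape onto the basis and reconstruct it."""
    weights = pca.transform(shape[None, :])            # per-frame features
    reconstruction = pca.inverse_transform(weights)[0]
    return weights[0], reconstruction
```

    In the thesis, the per-frame shape weights recovered in this way, together with the corresponding intensity coefficients, are the visual speech features that are then modelled by Gaussian distributions and hidden Markov models.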