
    Predicting Tongue Positions from Acoustics and Facial Features

    We test the hypothesis that adding information about the positions of electromagnetic articulograph (EMA) sensors on the lips and jaw can improve the results of a typical acoustic-to-EMA mapping system, based on support vector regression, that targets the tongue sensors. Our initial motivation is to use such a system to add tongue animation to a talking head built by concatenating bimodal acoustic-visual units. For completeness, we also train a system that maps only jaw and lip information to tongue information.
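A minimal sketch of the kind of mapping the abstract describes, using scikit-learn's support vector regression. The data here are synthetic stand-ins (random features with a known linear relationship), not the paper's EMA recordings; the comparison simply illustrates why appending lip/jaw coordinates to the acoustic input can reduce tongue-prediction error when the target actually depends on them.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 frames of 13-dim acoustic features (e.g. MFCCs)
# and 4-dim lip/jaw EMA coordinates; targets are 2 tongue-sensor coordinates
# that depend on both streams.
n = 200
acoustic = rng.normal(size=(n, 13))
lip_jaw = rng.normal(size=(n, 4))
tongue = 0.5 * acoustic[:, :2] + 0.3 * lip_jaw[:, :2] \
    + rng.normal(scale=0.05, size=(n, 2))

# Baseline: acoustics only; augmented: acoustics + lip/jaw sensor positions.
X_base = acoustic
X_aug = np.hstack([acoustic, lip_jaw])

base = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(X_base[:150], tongue[:150])
aug = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(X_aug[:150], tongue[:150])

# Held-out mean squared error for each input configuration.
err_base = np.mean((base.predict(X_base[150:]) - tongue[150:]) ** 2)
err_aug = np.mean((aug.predict(X_aug[150:]) - tongue[150:]) ** 2)
print(f"baseline MSE: {err_base:.4f}, augmented MSE: {err_aug:.4f}")
```

Because the baseline regressor cannot see the lip/jaw component of the target, its held-out error stays above that of the augmented model in this toy setup.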

    Adapting visual data to a linear articulatory model

    The goal of this work is to investigate audiovisual-to-articulatory inversion. It is well established that acoustic-to-articulatory inversion is an underdetermined problem. On the other hand, there is strong evidence that human speakers and listeners exploit the multimodality of speech, and more particularly its articulatory cues: the view of the visible articulators, i.e. jaw and lips, improves speech intelligibility. It is thus interesting to add constraints provided by direct visual observation of the speaker's face. Visual data were obtained by stereo-vision, enabling the 3D recovery of jaw and lip movements. These data were processed to fit the nature of the parameters of Maeda's articulatory model. Inversion experiments were conducted.
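The core idea, that visual constraints shrink the solution space of an underdetermined inversion, can be sketched with a toy linear model. Everything below is hypothetical (random matrices standing in for the acoustic and visual mappings of an articulatory model such as Maeda's): acoustics alone give fewer equations than parameters, while stacking the visual equations makes the joint system solvable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear articulatory model: 7 Maeda-style parameters p map to
# 3 acoustic observables (A @ p) and 4 visible jaw/lip measurements (V @ p).
A = rng.normal(size=(3, 7))   # acoustic mapping: underdetermined alone (3 < 7)
V = rng.normal(size=(4, 7))   # visual mapping from face measurements

p_true = rng.normal(size=7)
x_acoustic = A @ p_true
y_visual = V @ p_true

# Acoustics-only inversion: least squares returns the minimum-norm solution,
# which is generally far from the true parameters.
p_ac = np.linalg.lstsq(A, x_acoustic, rcond=None)[0]

# Audiovisual inversion: stack acoustic and visual constraints (7 equations,
# 7 unknowns) and solve jointly.
M = np.vstack([A, V])
b = np.concatenate([x_acoustic, y_visual])
p_av = np.linalg.lstsq(M, b, rcond=None)[0]

err_ac = np.linalg.norm(p_ac - p_true)
err_av = np.linalg.norm(p_av - p_true)
print("acoustic-only error:", err_ac)
print("audiovisual error:  ", err_av)
```

With generic (full-rank) random mappings, the stacked system recovers the parameters essentially exactly, while the acoustics-only solution does not.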

    Inter Speaker variability of labial coarticulation with the view of developing a formal coarticulation model for French

    Explaining the effects of labial coarticulation is a difficult problem that has given rise to many studies and models. Most of the time, small corpora were exploited to design these models. In this paper we describe the realization and exploitation of a corpus with ten speakers. This corpus enables the most invariant labial features (protrusion, stretching and lip opening) to be established. We then propose a formal prediction algorithm that relies on a standard phonetic description of French phonemes. A first evaluation of this algorithm shows its relevance.

    HMM-based Automatic Visual Speech Segmentation Using Facial Data

    We describe automatic visual speech segmentation using facial data captured by a stereo-vision technique. The segmentation is performed using an HMM-based forced alignment mechanism widely used in automatic speech recognition. The idea is based on the assumption that training on visual speech data alone might capture the uniqueness of the facial component of speech articulation, the asynchrony (time lags) between visual and acoustic speech segments, and significant coarticulation effects. This should provide valuable information on the extent to which a phoneme visually affects surrounding phonemes, and help label visual speech segments according to dominant coarticulatory contexts.
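Forced alignment of the kind mentioned above can be reduced to a Viterbi pass once the phoneme sequence is fixed. The sketch below is a deliberately minimal stand-in, not the paper's trained system: one HMM state per phoneme, stay-or-advance transitions, and toy 1-D Gaussian emissions over a single visual feature (e.g. lip opening) in place of trained multivariate models.

```python
import numpy as np

# Toy setup: three phonemes with hypothetical per-phoneme visual-feature means,
# and a short observed feature track to be segmented.
means = np.array([0.0, 2.0, 4.0])
obs = np.array([0.1, -0.2, 2.1, 1.9, 2.2, 4.0, 3.8])

def log_emission(o, s):
    # Unit-variance Gaussian log-likelihood, up to an additive constant.
    return -0.5 * (o - means[s]) ** 2

T, S = len(obs), len(means)
score = np.full((T, S), -np.inf)   # best log-score ending in state s at time t
back = np.zeros((T, S), dtype=int)  # backpointers for the Viterbi path
score[0, 0] = log_emission(obs[0], 0)  # forced start in the first phoneme

for t in range(1, T):
    for s in range(S):
        stay = score[t - 1, s]
        advance = score[t - 1, s - 1] if s > 0 else -np.inf
        if advance > stay:
            score[t, s] = advance + log_emission(obs[t], s)
            back[t, s] = s - 1
        else:
            score[t, s] = stay + log_emission(obs[t], s)
            back[t, s] = s

# Backtrace from the forced final state to read off the frame-level alignment.
path = [S - 1]
for t in range(T - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
print(path)  # per-frame phoneme index, e.g. [0, 0, 1, 1, 1, 2, 2]
```

Segment boundaries fall where the state index changes; in a real system the same backtrace yields the visual phoneme timings that are then compared against the acoustic segmentation.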

    Design of a Controlled Language for Critical Infrastructures Protection

    We describe a project for the construction of a controlled language for critical infrastructures protection (CIP). This project originates from the need to coordinate and categorize communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications and plain e-mail. We explore the application of traditional library science tools for the construction of controlled languages in order to achieve our goal. Our starting point is an analogous work done during the sixties in the field of nuclear science, known as the Euratom Thesaurus.

    Proceedings of the 7th international conference on disability, virtual reality and associated technologies, with ArtAbilitation (ICDVRAT 2008)

    The proceedings of the conference.

    Perceptual Experience

    This book offers an account of perceptual experience—its intrinsic nature, its engagement with the world, its relations to mental states of other kinds, and its role in epistemic norms. One of the book's main claims is that perceptual experience constitutively involves representations of worldly items. A second claim is that the relevant form of representation can be explained in broadly biological terms. After defending these foundational doctrines, the book proceeds to give an account of perceptual appearances and how they are related to the objective world. Appearances turn out to be relational, viewpoint-dependent properties of external objects. There is also a complementary account of how the objects that possess these properties are represented. Another major concern is the phenomenological dimension of perception. The book maintains that perceptual phenomenology can be explained reductively in terms of the representational contents of experiences, and it uses this doctrine to undercut the traditional arguments for dualism. This treatment of perceptual phenomenology is then expanded to encompass cognitive phenomenology, the phenomenology of moods and emotions, and the phenomenology of pain. The next topic is the various forms of consciousness that perceptual experience can possess. A principal aim is to show that phenomenology is metaphysically independent of these forms of consciousness, and another is to de-mystify the form known as phenomenal consciousness. The book concludes by discussing the relations of various kinds that perceptual experiences bear to higher-level cognitive states, including relations of format, content, and justification or support.

    Gender differences in spatial ability within virtual reality

    EThOS - Electronic Theses Online Service, United Kingdom.