
    Cued Speech Gesture Recognition: A First Prototype Based on Early Reduction

    Cued Speech is a specific linguistic code for hearing-impaired people, based on both lip reading and manual gestures. In the context of THIMP (Telephony for the Hearing-IMpaired Project), we work on automatic Cued Speech translation. In this paper, we address only the problem of automatic Cued Speech manual gesture recognition. Gesture recognition is a common problem from a theoretical point of view, but we approach it through its particularities in order to derive an original method. This method is essentially built around a bioinspired process called early reduction. Prior to a complete analysis of each image of a sequence, the early reduction process automatically extracts a restricted number of key images that summarize the whole sequence. Only these key images are then studied from a temporal point of view, with lighter computation than for the complete sequence.
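
    The abstract does not say how early reduction selects its key images. The sketch below is a minimal illustration under one plausible assumption: that key images correspond to quasi-static instants, i.e. frames where inter-frame motion is locally minimal, a common key-frame heuristic. The function name, the motion proxy, and the synthetic input are hypothetical, not taken from the paper.

    import numpy as np

    def extract_key_images(frames, num_keys=5):
        """Pick frames where inter-frame motion is locally minimal.

        frames: array of shape (T, H, W), a grayscale video.
        Returns the indices of the selected key images, in temporal order.
        """
        frames = np.asarray(frames, dtype=np.float32)
        # Motion proxy: mean absolute difference between consecutive frames.
        motion = np.mean(np.abs(np.diff(frames, axis=0)), axis=(1, 2))
        # Local minima of the motion curve mark quasi-static instants,
        # i.e. candidate key images.
        minima = [t for t in range(1, len(motion) - 1)
                  if motion[t] <= motion[t - 1] and motion[t] <= motion[t + 1]]
        # Keep the num_keys stillest candidates, restored to temporal order.
        minima.sort(key=lambda t: motion[t])
        # diff shifts indices by one: motion[t] compares frames t and t+1.
        return sorted(t + 1 for t in minima[:num_keys])

    # Usage on a synthetic 60-frame sequence (random noise stands in for video).
    rng = np.random.default_rng(0)
    video = rng.random((60, 32, 32))
    print(extract_key_images(video))

    Only the returned frames would then be passed to the full per-image analysis, which is what gives the method its computational saving.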

    Extracting static hand gestures in dynamic context

    Cued Speech is a specific visual code that complements oral-language lip reading by adding static hand gestures (a static gesture can be presented on a single photograph, as it contains no motion). By nature, Cued Speech seems simple enough to be automatically recognizable. Unfortunately, despite its static definition, fluent Cued Speech has an important dynamic dimension due to co-articulation. Hence, reducing a continuous Cued Speech coding stream to the corresponding discrete chain of static gestures is a genuine challenge for automatic Cued Speech processing. We present here how the biological motion analysis method presented in [1] has been combined with a fusion strategy based on Belief Theory in order to perform such a reduction.
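
    Belief Theory here refers to Dempster-Shafer theory, whose standard fusion operator is Dempster's rule of combination. The sketch below shows that rule in isolation, combining two mass functions over a small set of gesture classes; the gesture labels, mass values, and function name are illustrative assumptions, not details from the paper.

    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule: combine two mass functions whose focal
        elements are frozensets of hypotheses."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: sources are incompatible")
        # Normalize by the non-conflicting mass.
        return {s: w / (1.0 - conflict) for s, w in combined.items()}

    # Usage: two sources (e.g. a shape cue and a motion cue) weigh in on
    # three hypothetical gesture classes; theta models total ignorance.
    G1, G2, G3 = frozenset({"g1"}), frozenset({"g2"}), frozenset({"g3"})
    theta = G1 | G2 | G3  # the full frame of discernment
    source_a = {G1: 0.6, G2: 0.1, theta: 0.3}
    source_b = {G1: 0.5, G3: 0.2, theta: 0.3}
    print(dempster_combine(source_a, source_b))

    Assigning mass to theta lets a weak source abstain rather than commit, which is what makes this kind of fusion attractive when individual cues are unreliable on co-articulated gestures.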