3 research outputs found

    A New Re-synchronization Method based Multi-modal Fusion for Automatic Continuous Cued Speech Recognition

    Cued Speech (CS) is augmented lip reading complemented by hand coding, and it is very helpful to deaf people. Automatic CS recognition can facilitate communication between deaf people and others. Due to the asynchronous nature of lip and hand movements, fusing them in automatic CS recognition is a challenging problem. In this work, we propose a novel re-synchronization procedure for multi-modal fusion, which aligns the hand features with the lip features. It is realized by delaying the hand position and hand shape streams by their optimal hand preceding times, which are derived by investigating the temporal organization of hand position and hand shape movements in CS. This re-synchronization procedure is incorporated into a practical continuous CS recognition system that combines a convolutional neural network (CNN) with a multi-stream hidden Markov model (MSHMM). A significant improvement of about 4.6% is achieved, reaching 76.6% CS phoneme recognition correctness compared with 72.04% for the state-of-the-art architecture, which did not take the asynchrony of multi-modal fusion in CS into account. To our knowledge, this is the first work to tackle asynchronous multi-modal fusion in automatic continuous CS recognition.
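
    As a rough illustration of the idea, the Python sketch below delays the hand position and hand shape streams by fixed frame offsets and concatenates them with the lip features. The function name, the padding strategy, and the frame-based delay parameters are assumptions for illustration; the paper's derived optimal preceding times are not reproduced here.

    import numpy as np

    def resynchronize(lip_feats, hand_pos, hand_shape,
                      pos_delay, shape_delay):
        """Align the hand streams with the lip stream by delaying each
        hand stream by its hand preceding time, expressed in frames.
        All inputs are (T, D) feature matrices at the same frame rate."""
        T = lip_feats.shape[0]

        def delay(stream, d):
            # Shift the stream d frames later in time, repeating the
            # first frame at the start so the length stays T.
            pad = np.repeat(stream[:1], d, axis=0)
            return np.vstack([pad, stream])[:T]

        # Frame-level concatenation of the re-synchronized streams,
        # ready for a downstream multi-stream model such as an MSHMM.
        return np.hstack([lip_feats,
                          delay(hand_pos, pos_delay),
                          delay(hand_shape, shape_delay)])

    A call such as resynchronize(lips, pos, shp, pos_delay=5, shape_delay=10) would yield frame-aligned features for the fusion stage; the delay values shown are placeholders, not the paper's results.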

    Inner Lips Parameter Estimation based on Adaptive Ellipse Model

    In this paper, a novel automatic method using an adaptive ellipse model to estimate the inner lip parameters (inner lip width A and height B) of speakers, without any artificial markers, is presented. Color-based image processing is first applied to delimit a preliminary inner lip region. A discontinuity-elimination step combining horizontal and vertical filling is used to obtain a binary inner lip image that is as complete as possible. After these pre-processing steps, an optimal adaptive ellipse is fitted to the inner lips, yielding the A and B parameters. The proposed method is evaluated on 4693 images of three French speakers, including one Cued Speech (CS) speaker. It obtains an RMSE of 3.37 mm for the A parameter and 0.84 mm for the B parameter, outperforming the state-of-the-art baseline for inner lip parameter estimation. Moreover, CS recognition over 34 French phonemes shows that using the two estimated parameters achieves an accuracy comparable to that obtained with the raw lip ROI.
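
    As a loose sketch of the final fitting step, the Python code below fits an ellipse to the largest connected component of a binary inner lip mask and reads the two ellipse axes off as A and B. Here cv2.fitEllipse is a stand-in for the paper's adaptive ellipse model, and the function name and the mm_per_pixel scaling are assumptions for illustration.

    import cv2
    import numpy as np

    def inner_lip_parameters(binary_mask, mm_per_pixel):
        """Fit an ellipse to the largest blob of a binary inner-lip
        mask and return the lip width A and height B in millimetres."""
        contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        contour = max(contours, key=cv2.contourArea)
        if len(contour) < 5:  # cv2.fitEllipse needs at least 5 points
            return None
        _, (axis1, axis2), _ = cv2.fitEllipse(contour)
        # For an inner-lip opening, the longer ellipse axis corresponds
        # to the width A and the shorter one to the height B.
        A = max(axis1, axis2) * mm_per_pixel
        B = min(axis1, axis2) * mm_per_pixel
        return A, B

    The mm_per_pixel factor would come from a camera calibration step, which the sketch assumes is available; the paper's adaptive model would replace the plain least-squares fit used here.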