5 research outputs found

    A new visual speech modelling approach for visual speech recognition

    In this paper we propose a new learning-based representation, referred to as the Visual Speech Unit (VSU), for visual speech recognition (VSR). The VSU concept extends the standard viseme model currently applied in VSR by including in the representation not only the data associated with the visemes but also the transitory information between consecutive visemes. The developed speech recognition system consists of several computational stages: (a) lip segmentation, (b) construction of Expectation-Maximization Principal Component Analysis (EM-PCA) manifolds from the input video, (c) registration between the VSU models and the EM-PCA data constructed from the input image sequence, and (d) recognition of the VSUs using a standard Hidden Markov Model (HMM) classification scheme. We were particularly interested in evaluating the classification accuracy obtained for the new VSU models compared with that attained for standard (MPEG-4) viseme models. The experimental results indicate a 90% recognition rate when the system was applied to the identification of 60 classes of VSUs, while the recognition rate for the standard set of MPEG-4 visemes was only 52%.
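    The HMM classification stage mentioned in (d) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example rather than the authors' implementation: it trains one Gaussian HMM per VSU class on feature sequences (e.g. EM-PCA manifold coordinates) and classifies a new sequence by maximum log-likelihood. The use of hmmlearn, the number of states, and all variable names are assumptions.

        # Minimal sketch: one HMM per Visual Speech Unit (VSU) class,
        # classification by maximum log-likelihood. Assumes hmmlearn and numpy.
        import numpy as np
        from hmmlearn import hmm

        def train_vsu_models(train_sequences, n_states=3):
            """train_sequences: dict mapping VSU label -> list of (T_i, d)
            feature arrays. Returns one trained HMM per label."""
            models = {}
            for label, seqs in train_sequences.items():
                X = np.vstack(seqs)                # stack all frames
                lengths = [len(s) for s in seqs]   # per-sequence lengths
                m = hmm.GaussianHMM(n_components=n_states,
                                    covariance_type="diag", n_iter=50)
                m.fit(X, lengths)
                models[label] = m
            return models

        def classify_vsu(models, sequence):
            """Return the VSU label whose HMM scores the sequence highest."""
            return max(models, key=lambda label: models[label].score(sequence))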

    The application of manifold based visual speech units for visual speech recognition

    This dissertation presents a new learning-based representation, referred to as the Visual Speech Unit (VSU), for visual speech recognition (VSR). The automated recognition of human speech using only features from the visual domain has become a significant research topic that plays an essential role in the development of many multimedia systems such as audio-visual speech recognition (AVSR), mobile phone applications, human-computer interaction (HCI) and sign language recognition. Including lip visual information is opportune since it can improve the overall accuracy of audio or hand recognition algorithms, especially when such systems operate in environments characterized by a high level of acoustic noise. The main contribution of the work presented in this thesis is the development of the Visual Speech Unit representation. The main components of the developed VSR system are applied to: (a) segment the mouth region of interest, (b) extract the visual features from the real-time input video and (c) identify the visual speech units. The major difficulty associated with VSR systems resides in identifying the smallest elements contained in the image sequences that represent the lip movements in the visual domain. The proposed Visual Speech Unit concept extends the standard viseme model currently applied in VSR: the VSU model augments the standard viseme approach by including in the representation not only the data associated with the articulation of the visemes but also the transitory information between consecutive visemes. A large section of this thesis is dedicated to analysing the performance of the new VSU model compared with that attained for standard (MPEG-4) viseme models. Two sets of experimental results indicate that: 1. The developed VSR system achieved 80-90% correct recognition when applied to the identification of 60 classes of VSUs, while the recognition rate for the standard set of MPEG-4 visemes was only 62-72%. 2. In a 15-word recognition task where VSUs and visemes were each employed as the visual speech element, the word recognition accuracy based on VSUs was 7-12% higher than the accuracy based on visemes.
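    The EM-PCA feature extraction used to build the manifolds can be sketched as follows. This is a minimal illustration of the EM algorithm for PCA (fitting a low-dimensional linear subspace without forming the full covariance matrix), not the code used in the thesis; the function name, iteration count and convergence handling are assumptions.

        # Minimal sketch of EM-PCA: fit a k-dimensional linear manifold to
        # high-dimensional lip-region feature vectors. Illustrative only.
        import numpy as np

        def em_pca(data, k, n_iter=100, seed=0):
            """data: (n_samples, d) array; returns a (d, k) orthonormal basis."""
            rng = np.random.default_rng(seed)
            Y = (data - data.mean(axis=0)).T             # centre, shape (d, n)
            C = rng.standard_normal((Y.shape[0], k))     # random initial basis
            for _ in range(n_iter):
                # E-step: latent coordinates of the data under the current basis
                X = np.linalg.solve(C.T @ C, C.T @ Y)    # (k, n)
                # M-step: re-estimate the basis from those coordinates
                C = Y @ X.T @ np.linalg.inv(X @ X.T)     # (d, k)
            # Orthonormalise; the fixed point spans the leading PCA subspace
            Q, _ = np.linalg.qr(C)
            return Q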

    Adaptive audio-visual synthesis: automatic training methods for unit-selection-based audio-visual speech synthesis

    In this work, algorithms and methods were developed and applied that enable video-realistic audio-visual synthesis. The generated audio-visual signal shows a talking head constructed from previously recorded video data and an underlying TTS system. The work is organised into three parts: statistical learning methods, concatenative speech synthesis, and video-realistic audio-visual synthesis. The developed speech synthesis system concatenates natural speech units, an approach commonly known as unit-selection-based text-to-speech. The same selection-and-concatenation procedure is also used for the visual synthesis, where natural video sequences are drawn upon. As statistical learning methods, primarily graph-based techniques are developed and applied; in particular, Hidden Markov Models and conditional random fields are used to select the appropriate speech representation units. For the visual synthesis, a prototype-based learning method is employed, widely known as the k-nearest-neighbour algorithm. Training the system requires an annotated speech corpus as well as an annotated video corpus. To evaluate the methods employed, video-realistic audio-visual synthesis software was developed that fully automatically converts text input into the desired video sequence. No step up to the signal output requires any manual intervention.
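    The unit-selection principle described above (choosing and concatenating recorded units so that the summed target and join costs are minimal) can be illustrated with a short dynamic-programming sketch. It is a generic, hypothetical illustration of the technique, not this system's implementation; the cost functions and data layout are placeholders.

        # Minimal sketch of unit selection as a shortest-path search: pick one
        # candidate unit per target position minimising target cost plus
        # concatenation (join) cost. Illustrative only.
        def select_units(candidates, target_cost, join_cost):
            """candidates: list (one per target position) of candidate units.
            target_cost(unit, pos) and join_cost(prev_unit, unit) return floats."""
            # best[i][j] = (cheapest path cost ending at candidates[i][j], back-pointer)
            best = [[(target_cost(u, 0), None) for u in candidates[0]]]
            for i in range(1, len(candidates)):
                row = []
                for u in candidates[i]:
                    tc = target_cost(u, i)
                    cost, prev = min(
                        (best[i - 1][j][0] + join_cost(p, u) + tc, j)
                        for j, p in enumerate(candidates[i - 1])
                    )
                    row.append((cost, prev))
                best.append(row)
            # Trace back the cheapest path
            j = min(range(len(best[-1])), key=lambda j: best[-1][j][0])
            path = []
            for i in range(len(candidates) - 1, -1, -1):
                path.append(candidates[i][j])
                j = best[i][j][1]
            return list(reversed(path))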

    Automatic Selection of Visemes for Image-based Visual Speech Synthesis

    An image-based approach provides an efficient way to perform visual speech synthesis. In an image-based visual speech synthesis system, a small set of lip images, namely visemes, is used to generate an arbitrary new sentence. Many approaches select these visemes manually. In this paper, we propose a method that allows the system to select visemes automatically by minimizing the synthesis error. The feasibility of the proposed method has been demonstrated by experiments. We also describe an application of image-based visual speech synthesis to a multimodal communication agent for a translation task in which two people who speak different languages can talk to each other over the Internet.
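    The selection criterion (choose a small set of lip images so that the error of re-synthesising the training frames from them is small) could be approximated, for illustration, by a simple greedy procedure. The sketch below is a hypothetical stand-in for the paper's algorithm: it assumes the synthesis error is measured as the distance from each training frame to its nearest selected image, and all names are placeholders.

        # Minimal sketch: greedily pick k lip images (visemes) minimising the
        # error of approximating every training frame by its nearest selection.
        import numpy as np

        def select_visemes(frames, k):
            """frames: (n, d) array of vectorised lip images; returns k indices."""
            selected = []
            # distance from each frame to its closest already-selected viseme
            best_dist = np.full(len(frames), np.inf)
            for _ in range(k):
                # total approximation error if candidate i were added next
                gains = []
                for i in range(len(frames)):
                    d = np.linalg.norm(frames - frames[i], axis=1)
                    gains.append(np.minimum(best_dist, d).sum())
                pick = int(np.argmin(gains))
                selected.append(pick)
                best_dist = np.minimum(
                    best_dist, np.linalg.norm(frames - frames[pick], axis=1))
            return selected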