
Phoneme recognition of meetings using audio-visual data

By Petr Motlíček


The movements of a speaker’s face are known to convey visual information that can improve speech intelligibility, especially when the audio is corrupted or noisy. The availability of visual data can therefore be exploited to enhance automatic speech recognition. This paper demonstrates the use of visual parameters extracted from video for the automatic recognition of context-independent phoneme strings from meeting data. Encouraged by the good performance of audio-visual systems on “visually clean” data (limited variation in the speaker’s frontal pose, lighting conditions, background, etc.), we investigate their efficiency under the non-ideal conditions introduced by the meeting audio-visual data employed in our experiments. A major issue is the phoneme recognition task based on the combination of audio and visual data, so that the best use can be made of the two modalities together.
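The abstract mentions combining the audio and visual modalities. One common approach in audio-visual speech recognition is early (feature-level) fusion, where frame-aligned audio and visual feature vectors are concatenated into a single observation vector before modelling. The sketch below illustrates this idea only; the function name, feature dimensions, and alignment assumption are illustrative and not taken from the paper.

```python
def fuse_features(audio_frames, visual_frames):
    # Early (feature-level) fusion: concatenate each pair of
    # frame-aligned audio and visual feature vectors into one
    # combined observation vector per frame.
    # Assumes both streams are already resampled to a common frame rate.
    if len(audio_frames) != len(visual_frames):
        raise ValueError("audio and visual streams must be frame-aligned")
    return [a + v for a, v in zip(audio_frames, visual_frames)]

# Hypothetical example: 100 frames of 13-dim acoustic features
# combined with 6-dim visual shape parameters per frame.
audio = [[0.0] * 13 for _ in range(100)]
visual = [[1.0] * 6 for _ in range(100)]
fused = fuse_features(audio, visual)
print(len(fused), len(fused[0]))  # 100 19
```

The fused vectors could then feed a single recognizer; a decision-level alternative would instead combine the scores of separate audio-only and visual-only models.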

Year: 2004
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
