3D tongue motion visualization based on ultrasound image sequences

Abstract

The article proposes a real-time technique for visualizing tongue motion driven by ultrasound image sequences. Local feature description is used to follow characteristic speckle patterns around a set of mid-sagittal contour points across the ultrasound image sequence; these tracked points then serve as markers describing the movement of the tongue. A 3D tongue model is driven by the motion data extracted from the ultrasound images, and the "modal warping" technique is used to visualize tongue deformation in real time. The resulting system should be useful in a variety of domains, including speech production research, articulation training, and education. Some parts of the interface are still under development; preliminary results will be shown in the demonstration.
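The core tracking step can be illustrated with a minimal sketch: each contour point carries a small patch of speckle texture, and the point is re-located in the next frame by matching that patch within a search window. The normalized cross-correlation matcher below is an illustrative stand-in, not the authors' actual feature descriptor; all function names and parameters are hypothetical.

```python
import numpy as np

def track_point(prev_frame, next_frame, pt, patch=7, search=5):
    """Re-locate one contour point in the next frame by matching its
    local speckle patch via normalized cross-correlation inside a
    small search window. Illustrative sketch only."""
    r = patch // 2
    y, x = pt
    tmpl = prev_frame[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    tmpl = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
    best_score, best_pt = -np.inf, pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = next_frame[yy - r:yy + r + 1,
                              xx - r:xx + r + 1].astype(float)
            if cand.shape != tmpl.shape:
                continue  # window fell off the image border
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = (tmpl * cand).mean()
            if score > best_score:
                best_score, best_pt = score, (yy, xx)
    return best_pt

# Synthetic demo: random speckle shifted by (+2, +1) between frames.
rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = np.roll(np.roll(f0, 2, axis=0), 1, axis=1)
print(track_point(f0, f1, (30, 30)))  # -> (32, 31)
```

Applied to every mid-sagittal contour point per frame, this yields the per-frame marker displacements that drive the 3D tongue model.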
