Testing AutoTrace: A Machine-learning Approach to Automated Tongue Contour Data Extraction
The Programme and Abstract booklet can be viewed at: http://www.qmu.ac.uk/casl/conf/ultrafest_2013/docs/Ultrafest%20abstract%20booklet.pdf (oral presentation)
Using a biomechanical model for tongue tracking in ultrasound images
We propose in this paper a new method for tongue tracking in ultrasound images based on a biomechanical model of the tongue. The deformation is guided both by points tracked at the surface of the tongue and by inner points of the tongue. Possible uncertainties in the tracked points are handled by the algorithm. Experiments show that the method is efficient even in the case of abrupt movements.
Szinkronizált beszéd- és nyelvultrahang-felvételek a SonoSpeech rendszerrel [Synchronized speech and tongue ultrasound recordings with the SonoSpeech system]
Abstract: This overview presents the technical background of the ultrasound studies of the MTA–ELTE Lingual Articulation Research Group, the hardware and software environment used, and the ongoing and planned research. After a discussion of the Hungarian and international literature, it describes the characteristics of ultrasound as a tool for investigating articulation, comparing it with other experimental tools and methodologies. It also addresses research difficulties, such as the speaker-dependent quality of the ultrasound image and the manual and automatic detection of the tongue contour, and finally presents the research group's main goals and plans in both basic and applied research.
Beyond the edge: Markerless pose estimation of speech articulators from ultrasound and camera images using DeepLabCut
Automatic feature extraction from images of speech articulators is currently achieved by detecting edges. Here, we investigate the use of pose-estimation deep neural nets with transfer learning to perform markerless estimation of speech articulator keypoints using only a few hundred hand-labelled images as training input. Midsagittal ultrasound images of the tongue, jaw, and hyoid and camera images of the lips were hand-labelled with keypoints, trained using DeepLabCut, and evaluated on unseen speakers and systems. Tongue surface contours interpolated from estimated and hand-labelled keypoints produced an average mean sum of distances (MSD) of 0.93, s.d. 0.46 mm, compared with 0.96, s.d. 0.39 mm, for two human labellers, and 2.3, s.d. 1.5 mm, for the best-performing edge-detection algorithm. A pilot set of simultaneous electromagnetic articulography (EMA) and ultrasound recordings demonstrated partial correlation between three physical sensor positions and the corresponding estimated keypoints, which requires further investigation. The accuracy of estimating lip aperture from camera video was high, with a mean MSD of 0.70, s.d. 0.56 mm, compared with 0.57, s.d. 0.48 mm, for two human labellers. DeepLabCut was found to be a fast, accurate, and fully automatic method of providing unique kinematic data for the tongue, hyoid, jaw, and lips. https://doi.org/10.3390/s22031133
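The contour-comparison metric used above, mean sum of distances (MSD), can be sketched as a symmetric mean nearest-neighbour distance between two point sets. This is a common formulation; the exact definition used in the paper (e.g. how contour endpoints are handled, or whether distances are averaged per contour pair) may differ, so treat the function name and details below as an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def mean_sum_of_distances(contour_a, contour_b):
    """Symmetric mean nearest-neighbour distance between two 2-D contours.

    contour_a: (N, 2) array of (x, y) points, e.g. in mm.
    contour_b: (M, 2) array of (x, y) points.
    Note: illustrative formulation only; the paper's exact MSD
    definition may handle contour ends or averaging differently.
    """
    a = np.asarray(contour_a, dtype=float)
    b = np.asarray(contour_b, dtype=float)
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # For each point, take the distance to the nearest point on the other contour,
    # then average the two directions so the metric is symmetric.
    a_to_b = d.min(axis=1).mean()
    b_to_a = d.min(axis=0).mean()
    return (a_to_b + b_to_a) / 2.0
```

For identical contours the result is 0; for a contour rigidly shifted by 1 mm perpendicular to its length, the result is approximately 1 mm, which matches the scale of the sub-millimetre differences reported above.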