
    Exploring the use of ultrasound visual feedback in the classroom: a pilot study on the acquisition of selected English vowel contrasts by French learners.

    Ultrasound imaging can visualize most of the tongue, a difficult-to-see articulator involved in the production of most speech sounds. Over the past decade, a growing body of evidence has supported the application of ultrasound in L2 research and pedagogy (Gick et al., 2008). The existing literature typically examines the effect of ultrasound training on the pronunciation of a difficult sound through a series of complex training sessions conducted either individually or in small groups (Tsui, 2012).

    In the present study, we explore whether ultrasound visual feedback is also feasible in a classroom setting and test its effectiveness in facilitating speech sound remediation when learners are exposed only to a series of short training interventions. The experiment was carried out over one semester at the English department of the University of Paris 3, using a Seemore PI USB-powered ultrasound system. The participants were seven French first-year undergraduate students in English with a CEFR level of either B1 or B2.

    The target productions were the contrast between the two English high front vowels /iː/ and /ɪ/ and the contrast between the front open vowel /æ/ and the central open vowel /ʌ/. Both oppositions are known to be problematic for French learners (Flege, 1995), as the French vowel inventory has only one unrounded high front vowel /i/ and one open front/central vowel /a/. All participants were recorded at the beginning of the semester (pre-test) and two weeks after its end (post-test) in a reading task comprising 10 repetitions of the target words beat, bit, bat and butt in carrier sentences, as well as one recording of the Speech Accent Archive text "Please call Stella". Each speaker also recorded 10 repetitions of French control sentences with comparable test items. Each participant received a ten-minute ultrasound training session during regular language laboratory classes on a fortnightly basis, five sessions in total. The students worked in pairs; the training consisted of a discussion designed to build explicit awareness of the tongue movements associated with the target sounds, together with repeated practice of the vowels, both in isolation and in CVC syllables. For two of the speakers, we carried out an additional pre- and post-recording within one of the sessions in order to evaluate the possible immediate impact of the ultrasound coaching on pronunciation performance. All acoustic recordings were subsequently semi-automatically aligned with the WebMAUS system, and the first three formants of the target sounds were extracted using Praat.

    We will present the results of the pre- and post-recordings for all speakers, focusing on speaker- and vowel-specific differences, and will discuss the advantages, problems and pedagogical implications likely to be encountered when using visual articulatory ultrasound feedback in the classroom.

    References
    Flege, J.E. (1995). Second language speech learning: Theory, findings, and problems. In W. Strange (ed.), Speech perception and linguistic experience: Issues in cross-language research, pp. 233-277. Baltimore: York Press.
    Gick, B., Bernhardt, B.M., Bacsfalvi, P. & Wilson, I. (2008). In J. Hansen & M. Zampini (eds.), Phonology and Second Language Acquisition, ch. 11, pp. 309-322. Amsterdam: John Benjamins.
    Tsui, H.M. (2012). Ultrasound speech training for Japanese adults learning English as a second language. Unpublished MSc thesis, University of British Columbia. Retrieved from enunciate.arts.ubc.ca/research-and-case-studies/other-research/
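
    As an illustration of the acoustic analysis step described above, the sketch below extracts the first three formants at a vowel midpoint with the praat-parselmouth Python bindings to Praat. The file name, interval times and analysis settings are assumptions for the example, not details taken from the study.

    # Minimal sketch of the formant-extraction step, assuming a Python
    # workflow with praat-parselmouth; file names, interval times and
    # analysis settings are illustrative assumptions.
    import parselmouth

    WAV_FILE = "speaker01_beat_rep01.wav"  # hypothetical recording
    VOWEL_START, VOWEL_END = 0.42, 0.55    # hypothetical WebMAUS interval (s)

    snd = parselmouth.Sound(WAV_FILE)
    formant = snd.to_formant_burg(max_number_of_formants=5,
                                  maximum_formant=5500)

    # Sample F1-F3 at the vowel midpoint, a common measurement point.
    midpoint = (VOWEL_START + VOWEL_END) / 2
    f1, f2, f3 = (formant.get_value_at_time(n, midpoint) for n in (1, 2, 3))
    print(f"F1={f1:.0f} Hz, F2={f2:.0f} Hz, F3={f3:.0f} Hz")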

    MATTONG: A MatLab graphical interface for tracking the tongue contour in ultrasound images

    Journée de la parole 2014, Le Mans. Ultrasound (US) imaging is increasingly used in the speech domain because it is non-invasive and allows visualization of a major articulator in speech production: the tongue. On US images, however, the tongue contour must first be located. MATTONG is a user-friendly graphical interface developed in MatLab for tracking the tongue contour in US images. It reuses the tracking algorithm developed by Li et al. (2005a) and implemented in the EdgeTrak software (Li et al., 2005b). The specificity of MATTONG is that it makes this algorithm easier to use: its graphical interface speeds up manipulations of the tongue contour and makes them more reliable and reproducible. It also integrates several tools, such as image filters, conversion of videos into series of image files, and export of results in various formats.
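
    As a rough illustration of the snake-based tracking that MATTONG wraps, the sketch below refines an initial contour on a single ultrasound frame with scikit-image's active_contour, a generic active-contour implementation standing in for the Li et al. (2005a) algorithm. The file name, seed points and parameters are assumptions.

    # Illustrative snake-based tongue-contour fit on one ultrasound frame,
    # using scikit-image rather than the MatLab code MATTONG wraps.
    import numpy as np
    from skimage import filters, io
    from skimage.segmentation import active_contour

    frame = io.imread("us_frame_0001.png", as_gray=True)  # hypothetical frame
    smoothed = filters.gaussian(frame, sigma=3)           # attenuate speckle noise

    # Rough initial contour near the expected tongue surface, e.g. clicked
    # by the user as in MATTONG's interface; coordinates are (row, col).
    cols = np.linspace(80, 240, 50)
    rows = np.full_like(cols, 150.0)
    init = np.stack([rows, cols], axis=1)

    snake = active_contour(smoothed, init,
                           alpha=0.015,  # contour tension
                           beta=10.0,    # contour rigidity
                           gamma=0.001)  # iteration step size
    print(snake.shape)  # refined (row, col) contour points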

    On the use of accelerometer sensors to study nasalization in speech and singing voice

    This paper presents the first results of a study exploring data from a nose-mounted accelerometer during speech and singing tasks. One objective was to study variations in the piezoelectric signal across variable speech and singing voice productions; for singing, only high-pitch, high-level productions are considered in this study. Four speakers (2 males, 2 females) produced isolated vowels and CVC and VCV non-words in nasal and non-nasal consonantal contexts. Our results suggest that the discrimination of nasal consonants remains possible in singing voice. A second part of the study investigates the correlation between the acoustic and piezoelectric signals in vocalic sounds. A relatively stable transfer function, with a dip at low frequency around 500 Hz, could be measured in our data. The results highlight a relatively stable transfer function between the audio and accelerometer signals for the vowels.
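
    The transfer function mentioned above can be estimated as the ratio of the cross-spectral density between the two channels to the auto-spectral density of the reference channel; the sketch below shows one standard Welch-based way to do this with SciPy. Signal names and parameters are assumptions, not the authors' exact procedure.

    # Sketch of estimating a transfer function H(f) = Pxy(f) / Pxx(f)
    # between microphone and accelerometer channels; file names and
    # parameters are assumptions.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import csd, welch

    fs, audio = wavfile.read("vowel_mic.wav")    # hypothetical recordings,
    _, accel = wavfile.read("vowel_accel.wav")   # assumed same fs and length

    f, Pxy = csd(audio, accel, fs=fs, nperseg=2048)
    _, Pxx = welch(audio, fs=fs, nperseg=2048)
    H = Pxy / Pxx                                # complex transfer function

    # A dip in |H| near 500 Hz would match the low-frequency dip reported
    # for the vowels in this study.
    band = (f > 300) & (f < 800)
    print(f"min |H| in 300-800 Hz band: {np.abs(H[band]).min():.3g}")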

    3D tongue motion visualization based on ultrasound image sequences

    The article proposes a real-time technique for visualizing tongue motion driven by ultrasound image sequences. Local feature description is used to follow characteristic speckle patterns at a set of mid-sagittal contour points across an ultrasound image sequence; these points then serve as markers describing the movements of the tongue. A 3D tongue model is subsequently driven by the motion data extracted from the ultrasound image sequences, and the "modal warping" technique is used for real-time visualization of tongue deformation. The resulting system should be useful in a variety of domains, including speech production studies, articulation training and educational scenarios. Some parts of the interface are still under development; preliminary results will be shown in the demonstration.
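
    As an illustration of tracking contour points between consecutive ultrasound frames, the sketch below uses OpenCV's pyramidal Lucas-Kanade optical flow, a generic point tracker standing in for the paper's local feature description. File names, seed points and parameters are assumptions.

    # Illustrative point tracking between consecutive ultrasound frames;
    # this stands in for the paper's speckle-pattern tracker and is not
    # the authors' method. File names and parameters are assumptions.
    import cv2
    import numpy as np

    prev = cv2.imread("us_frame_0001.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("us_frame_0002.png", cv2.IMREAD_GRAYSCALE)

    # Mid-sagittal contour points from the previous frame, shape (N, 1, 2).
    pts_prev = np.array([[[120.0, 150.0]], [[140.0, 145.0]],
                         [[160.0, 142.0]]], dtype=np.float32)

    pts_curr, status, err = cv2.calcOpticalFlowPyrLK(
        prev, curr, pts_prev, None,
        winSize=(21, 21), maxLevel=3)  # search window and pyramid depth

    # Displacements of successfully tracked points would drive the 3D model.
    motion = (pts_curr - pts_prev)[status.ravel() == 1]
    print(motion.reshape(-1, 2))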