10 research outputs found
Recommended from our members
On the Acoustical and Perceptual Features of Vowel Nasality
Although much is known about the linguistic function of vowel nasality, whether contrastive (as in French) or coarticulatory (as in English), less is known about its perception. This study uses careful examination of production patterns, along with data from both machine learning and human listeners, to establish which acoustical features are useful (and used) for identifying vowel nasality.
A corpus of 4,778 oral and nasal or nasalized vowels in English and French was collected, and feature data for 29 potential perceptual features were extracted. A series of linear mixed-effects regressions showed 7 promising features with large oral-to-nasal feature differences, and highlighted some cross-linguistic differences in the relative importance of these features. Two machine learning algorithms, Support Vector Machines and Random Forests, were trained on these data to identify features or feature groupings that were most effective at predicting nasality token-by-token in each language. The list of promising features was thus narrowed to four: A1-P0, Vowel Duration, Spectral Tilt, and Formant Frequency/Bandwidth.
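The classification setup described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the corpus, the 29 features, and the evaluation procedure are not reproduced here, and the synthetic data and feature names below are invented stand-ins for four of the promising features.

```python
# Hedged sketch: train an SVM and a Random Forest on per-token
# acoustic feature vectors to classify oral vs. nasal vowels.
# All data here are synthetic; real feature values would come from
# acoustic measurement of each vowel token.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_tokens = 400

# Labels: 0 = oral, 1 = nasal/nasalized (invented, balanced-ish).
labels = rng.integers(0, 2, size=n_tokens)

# Four invented feature columns standing in for A1-P0, vowel
# duration, spectral tilt, and F1 bandwidth; nasal tokens are
# shifted so the classes are separable.
features = rng.normal(size=(n_tokens, 4))
features[labels == 1] += 1.0

svm = SVC(kernel="rbf")
forest = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold cross-validated token-by-token classification accuracy.
svm_acc = cross_val_score(svm, features, labels, cv=5).mean()
forest_acc = cross_val_score(forest, features, labels, cv=5).mean()
print(f"SVM accuracy: {svm_acc:.2f}")
print(f"Random Forest accuracy: {forest_acc:.2f}")
```

In a setup like this, per-feature importance can be probed by training on feature subsets or by inspecting the forest's `feature_importances_`, which is one way a list of candidate cues could be narrowed.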
These four features were manipulated in vowels in oral and nasal contexts in English, adding nasal features to oral vowels and reducing nasal features in nasalized vowels, in an attempt to influence oral/nasal classification. These stimuli were presented to native English listeners in a lexical choice task with phoneme masking, measuring oral/nasal classification accuracy and reaction time. Only modifications to vowel formant structure caused any perceptual change for listeners, resulting in increased reaction times, as well as increased oral/nasal confusion in the oral-to-nasal (feature addition) stimuli. Classification of already-nasal vowels was not affected by any modifications, suggesting a perceptual role for other acoustical characteristics alongside nasality-specific cues. A Support Vector Machine trained on the same stimuli showed a similar pattern of sensitivity to the experimental modifications.
Thus, based on both the machine learning and human perception results, formant structure, particularly F1 bandwidth, appears to be the primary cue to the perception of nasality in English. This close relationship of nasal- and oral-cavity-derived acoustical cues leads to a strong perceptual role for both the oral and nasal aspects of nasal vowels.
Coarticulation and Contrast: Neighborhood Density Conditioned Phonetic Variation in French
Automatic tongue contour extraction in ultrasound images with convolutional neural networks
Plosive voicing in Afrikaans: Differential cue weighting and tonogenesis
The time course of individuals’ perception of coarticulatory information is linked to their production: Implications for sound change: Supplemental Material
2021 OE Week: UC Faculty Authors of Open Access Educational Resources: Taking Action for a More Equitable and Affordable Student Experience
March 8, 2021, on Zoom, 4:00-5:30 pm PST. https://knit.ucsd.edu/scholcomm/2021/02/01/2021-open-education-week-events/
Faculty concerned about high-priced educational materials that do not meet their curricular needs have taken to creating open access educational resources to reduce costs for their students, to gain more control over the content they assign in class, and to provide accessible resources to learners and institutions around the world. Learn from faculty who use and create open educational resources about the support available to them in the process and the impact on their pedagogical practices and their students' academic and personal well-being. We've archived the recording, presentation slides, slide transcript, and a resources page.