
    Automatic Pronunciation Assessment -- A Review

    Pronunciation assessment and its application in computer-aided pronunciation training (CAPT) have seen impressive progress in recent years. With the rapid growth of language processing and deep learning over the past few years, there is a need for an updated review. In this paper, we review methods employed in pronunciation assessment at both the phonemic and prosodic levels. We categorize the main challenges observed in prominent research trends, and highlight existing limitations and available resources. This is followed by a discussion of the remaining challenges and possible directions for future work.
    Comment: 9 pages, accepted to EMNLP Findings

    English Lexical Stress Recognition Using Recurrent Neural Networks

    Lexical stress is an integral part of English pronunciation. Command of lexical stress affects the speaker's perceived fluency, and it also serves as a cue for recognizing words. Methods that automatically recognize lexical stress in spoken audio can therefore be used to help English learners improve their pronunciation. This thesis evaluated lexical stress recognition methods based on recurrent neural networks. The purpose was to compare two sets of features: a set of prosodic features that makes use of existing speech recognition technologies, and simple spectral features. Using the latter feature set would allow for an end-to-end model, significantly simplifying the overall process. The problem was formulated as locating the primary stress, the most prominently stressed syllable, within an isolated word. Datasets of both native and non-native speech were used in the experiments. The results show that models using the prosodic features outperform models using the spectral features; the difference was particularly stark on the non-native dataset. It is possible that the datasets were too small to enable training end-to-end models. There was considerable variation in performance among different words, and it was also observed that the presence of a secondary stress made it more difficult to detect the primary stress.
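    As a rough illustration of the setup described above (a minimal sketch under assumed dimensions, not the thesis's actual implementation), a bidirectional recurrent network can read one feature vector per syllable, whether prosodic or spectral, score each position, and take the highest-scoring syllable as the primary stress. The feature dimension, layer sizes, and padding scheme below are placeholders.

        import torch
        import torch.nn as nn

        class StressLocator(nn.Module):
            """Scores each syllable of a word; the argmax is the predicted primary stress."""
            def __init__(self, feat_dim: int, hidden: int = 64):
                super().__init__()
                self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
                self.score = nn.Linear(2 * hidden, 1)  # one scalar score per syllable

            def forward(self, feats, mask):
                # feats: (batch, max_syllables, feat_dim); mask marks real (non-padded) syllables
                out, _ = self.rnn(feats)
                logits = self.score(out).squeeze(-1)              # (batch, max_syllables)
                return logits.masked_fill(~mask, float("-inf"))   # padded positions never win

        # Toy usage: 8 words, up to 5 syllables, 16-dimensional per-syllable features
        model = StressLocator(feat_dim=16)
        feats = torch.randn(8, 5, 16)
        mask = torch.ones(8, 5, dtype=torch.bool)
        target = torch.randint(0, 5, (8,))                        # index of the stressed syllable
        loss = nn.functional.cross_entropy(model(feats, mask), target)
        loss.backward()

    An end-to-end variant would feed spectral frames into the recurrent layer directly, which is what would simplify the overall process, although the results above favour the prosodic features.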

    Automatic Screening of Childhood Speech Sound Disorders and Detection of Associated Pronunciation Errors

    Speech disorders in children can affect their fluency and intelligibility. Delay in diagnosis and treatment increases the risk of social impairment and learning disabilities. Given the significant shortage of Speech and Language Pathologists (SLPs), there is increasing interest in Computer-Aided Speech Therapy tools with automatic detection and diagnosis capability. However, the scarcity and unreliable annotation of disordered child speech corpora, along with the high acoustic variation in child speech data, have impeded the development of reliable automatic detection and diagnosis of childhood speech sound disorders. This thesis therefore investigates two types of detection systems that require minimal dependence on annotated mispronounced speech data. First, a novel approach adopting paralinguistic features, which represent the prosodic, spectral, and voice quality characteristics of the speech, was proposed to perform segment- and subject-level classification of Typically Developing (TD) and Speech Sound Disordered (SSD) child speech using a binary Support Vector Machine (SVM) classifier. Because paralinguistic features are both language- and content-independent, they can be extracted from an unannotated speech signal. Second, a novel Mispronunciation Detection and Diagnosis (MDD) approach was introduced to detect the pronunciation errors caused by SSDs and provide low-level diagnostic information that can be used to construct formative feedback and a detailed diagnostic report. Unlike existing MDD methods, where detection and diagnosis are performed at the phoneme level, the proposed method performs MDD at the speech attribute level, namely the manners and places of articulation. The speech attribute features describe the articulators involved and their interactions when making a speech sound, allowing a low-level description of the pronunciation error to be provided. Two novel methods for modelling speech attributes are further proposed in this thesis: a frame-based (phoneme-alignment) method that leverages the Multi-Task Learning (MTL) criterion and trains a separate model for each attribute, and an alignment-free, jointly learnt method based on the Connectionist Temporal Classification (CTC) sequence-to-sequence criterion. The proposed techniques have been evaluated using standard, publicly accessible adult and child speech corpora, while the MDD method has been validated using L2 speech corpora.
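    To make the alignment-free variant concrete, here is a minimal sketch (an illustration under assumed dimensions, not the thesis's model) of a CTC-trained recognizer that maps acoustic frames to a sequence of speech-attribute labels, such as manner or place classes, without phoneme alignments. The input dimension, number of attribute classes, and network size are placeholders.

        import torch
        import torch.nn as nn

        class AttributeCTC(nn.Module):
            """BiLSTM over acoustic frames, projected to attribute classes plus a CTC blank."""
            def __init__(self, feat_dim: int = 40, n_attr: int = 10, hidden: int = 128):
                super().__init__()
                self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
                self.proj = nn.Linear(2 * hidden, n_attr + 1)   # index 0 reserved for the blank

            def forward(self, x):
                out, _ = self.rnn(x)
                return self.proj(out).log_softmax(dim=-1)       # (batch, frames, classes)

        # Toy usage: 4 utterances of 200 frames, each with a 12-label attribute sequence
        model = AttributeCTC()
        log_probs = model(torch.randn(4, 200, 40)).transpose(0, 1)   # CTCLoss expects (T, N, C)
        targets = torch.randint(1, 11, (4, 12))                      # attribute labels in 1..n_attr
        input_lengths = torch.full((4,), 200, dtype=torch.long)
        target_lengths = torch.full((4,), 12, dtype=torch.long)
        loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
        loss.backward()

    Decoding the predicted attribute sequence and comparing it against the canonical attributes of the target phonemes would then indicate which manner or place of articulation was mispronounced, which is the kind of low-level diagnostic information the abstract describes.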

    The Processing of Emotional Sentences by Young and Older Adults: A Visual World Eye-movement Study

    Carminati MN, Knoeferle P. The Processing of Emotional Sentences by Young and Older Adults: A Visual World Eye-movement Study. Presented at Architectures and Mechanisms for Language Processing (AMLaP), Riva del Garda, Italy.