Deep Learning for Automatic Assessment and Feedback of Spoken English
Growing global demand for learning a second language (L2), particularly English, has led to
considerable interest in automatic spoken language assessment, whether for use in
computer-assisted language learning (CALL) tools or for grading candidates for formal qualifications.
This thesis presents research conducted into the automatic assessment of spontaneous
non-native English speech, with a view to providing meaningful feedback to learners. One
of the challenges in automatic spoken language assessment is giving candidates feedback on
particular aspects, or views, of their spoken language proficiency, in addition to the overall
holistic score normally provided. Another is detecting pronunciation and other types of errors
at the word or utterance level and feeding them back to the learner in a useful way.
It is usually difficult to obtain accurate training data with separate scores for different
views and, as examiners are often trained to give holistic grades, single-view scores can
suffer issues of consistency. Conversely, holistic scores are available for various standard
assessment tasks such as Linguaskill. An investigation is thus conducted into whether
assessment scores linked to particular views of the speaker’s ability can be obtained from
systems trained using only holistic scores.
End-to-end neural systems are designed with structures and forms of input tuned to single
views, specifically each of pronunciation, rhythm, intonation and text. By training each
system on large quantities of candidate data, it should be possible to extract
individual-view information. The relationships between the predictions of each system are evaluated to examine
whether they are, in fact, extracting different information about the speaker. Three methods
of combining the systems to predict holistic score are investigated, namely averaging their
predictions and concatenating and attending over their intermediate representations. The
combined graders are compared to each other and to baseline approaches.
The tasks of error detection and error tendency diagnosis become particularly challenging
when the speech in question is spontaneous, especially given the inconsistency of
human annotation of pronunciation errors. An approach to these tasks is
presented by distinguishing between lexical errors, wherein the speaker does not know how a
particular word is pronounced, and accent errors, wherein the candidate’s speech exhibits
consistent patterns of phone substitution, deletion and insertion. Three annotated corpora
of non-native English speech by speakers of multiple L1s are analysed, the consistency of
human annotation investigated and a method presented for detecting individual accent and
lexical errors and diagnosing accent error tendencies at the speaker level.
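As an illustration of the distinction drawn above, the sketch below aligns a canonical phone sequence against the phones a speaker actually produced and counts substitutions, deletions and insertions; the alignment method and data layout are generic assumptions, not the annotation scheme or detector used in the thesis.

```python
from collections import Counter
from difflib import SequenceMatcher

def phone_edit_ops(canonical: list[str], observed: list[str]) -> Counter:
    """Count phone substitutions, deletions and insertions between the
    canonical pronunciation and the phones the speaker actually produced.

    Illustrative sketch: a generic sequence alignment stands in for
    whatever alignment the thesis actually uses.
    """
    ops = Counter()
    matcher = SequenceMatcher(a=canonical, b=observed, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":
            paired = list(zip(canonical[i1:i2], observed[j1:j2]))
            for c, o in paired:
                ops[("sub", c, o)] += 1
            # leftovers of an unequal-length replace are deletions/insertions
            for c in canonical[i1 + len(paired):i2]:
                ops[("del", c)] += 1
            for o in observed[j1 + len(paired):j2]:
                ops[("ins", o)] += 1
        elif tag == "delete":
            for c in canonical[i1:i2]:
                ops[("del", c)] += 1
        elif tag == "insert":
            for o in observed[j1:j2]:
                ops[("ins", o)] += 1
    return ops

# A speaker-level accent error tendency can then be approximated by summing
# these counts over all of that speaker's utterances, e.g.
# tendency = sum((phone_edit_ops(c, o) for c, o in utterances), Counter())
```

Consistent, recurring entries in such a tally would point to accent errors, whereas isolated mispronunciations confined to particular words would point to lexical errors.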
Automatic Pronunciation Assessment -- A Review
Pronunciation assessment and its application in computer-aided pronunciation
training (CAPT) have seen impressive progress in recent years. With the rapid
growth in language processing and deep learning over the past few years, there
is a need for an updated review. In this paper, we review methods employed in
pronunciation assessment for both the phonemic and prosodic aspects. We categorize the main
challenges observed in prominent research trends, and highlight existing
limitations and available resources. This is followed by a discussion of the
remaining challenges and possible directions for future work.
Comment: 9 pages; accepted to EMNLP Findings.
Analysis Of Variation In The Number Of MFCC Features In Contrast To LSTM In The Classification Of English Accent Sounds
Various studies have classified English accents using both traditional and modern classifiers, and prior work on voice classification and voice recognition has generally used MFCC for feature extraction. This study proceeds by importing the dataset, preprocessing the data, extracting MFCC features, training the model, testing its accuracy and displaying a confusion matrix, and then analysing the resulting classification. Across ten tests on the test set, the highest accuracy, 64.96%, was obtained with 17 MFCC features. The tests also yielded some important findings: MFCC coefficient counts from twelve to twenty show overfitting, with the training process repeatedly producing high accuracy while the classification testing process produces low accuracy. Furthermore, increasing the number of MFCC features produces a very large sound feature dimension, so with a large number of features the MFCC method has a weakness in determining the appropriate feature count.
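As a rough illustration of the pipeline described above, the following sketch extracts a configurable number of MFCC features with librosa and builds a small LSTM accent classifier in Keras; the sample rate, frame budget, layer sizes and dataset handling are assumptions, not the study's exact setup.

```python
import numpy as np
import librosa
import tensorflow as tf

def mfcc_features(path: str, n_mfcc: int, max_frames: int = 300) -> np.ndarray:
    """Return a (max_frames, n_mfcc) MFCC matrix, zero-padded or truncated in time.

    n_mfcc is the coefficient count varied in the study (e.g. 12..20);
    the 16 kHz sample rate and frame budget here are assumptions.
    """
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)
    out = np.zeros((max_frames, n_mfcc), dtype=np.float32)
    out[: min(max_frames, len(mfcc))] = mfcc[:max_frames]
    return out

def build_lstm_classifier(n_mfcc: int, n_accents: int) -> tf.keras.Model:
    # Small LSTM over the MFCC sequence; layer sizes are illustrative only.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, n_mfcc)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(n_accents, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Sweeping `n_mfcc` over the range studied (twelve to twenty or beyond) and comparing training against test accuracy is what exposes the overfitting behaviour described in the abstract.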
Transformer-Based Multi-Aspect Multi-Granularity Non-Native English Speaker Pronunciation Assessment
Automatic pronunciation assessment is an important technology to help
self-directed language learners. While pronunciation quality has multiple
aspects including accuracy, fluency, completeness, and prosody, previous
efforts typically only model one aspect (e.g., accuracy) at one granularity
(e.g., at the phoneme-level). In this work, we explore modeling multi-aspect
pronunciation assessment at multiple granularities. Specifically, we train a
Goodness Of Pronunciation feature-based Transformer (GOPT) with multi-task
learning. Experiments show that GOPT achieves the best results on
speechocean762 with a public automatic speech recognition (ASR) acoustic model
trained on Librispeech.
Comment: Accepted at ICASSP 2022. Code at https://github.com/YuanGongND/gopt
Interactive Colab demo at
https://colab.research.google.com/github/YuanGongND/gopt/blob/master/colab/GOPT_GPU.ipynb
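For context, the sketch below computes a textbook-style Goodness Of Pronunciation (GOP) score per canonical phone from an ASR acoustic model's frame-level phone posteriors; this is only the simplest form of the input features a GOP-based system builds on, not the paper's exact feature extraction.

```python
import numpy as np

def gop_scores(frame_posteriors: np.ndarray,
               canonical_phones: list[int],
               segment_bounds: list[tuple[int, int]]) -> np.ndarray:
    """Simplified Goodness Of Pronunciation score per canonical phone.

    frame_posteriors: (frames, num_phones) softmax outputs from an ASR
    acoustic model; segment_bounds gives each phone's (start, end) frame
    range from a forced alignment. Both inputs are assumed, illustrative
    interfaces.
    """
    scores = []
    for phone, (start, end) in zip(canonical_phones, segment_bounds):
        post = frame_posteriors[start:end, phone]
        # average log posterior of the canonical phone over its aligned frames
        scores.append(np.log(post + 1e-8).mean())
    return np.array(scores)
```

A transformer such as GOPT then consumes per-phone feature vectors like these and predicts scores at the phoneme, word and utterance levels for the different aspects via multi-task heads.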
A Hierarchical Context-aware Modeling Approach for Multi-aspect and Multi-granular Pronunciation Assessment
Automatic Pronunciation Assessment (APA) plays a vital role in
Computer-assisted Pronunciation Training (CAPT) when evaluating a second
language (L2) learner's speaking proficiency. However, an apparent downside of
most de facto methods is that they parallelize the modeling process throughout
different speech granularities without accounting for the hierarchical and
local contextual relationships among them. In light of this, a novel
hierarchical approach is proposed in this paper for multi-aspect and
multi-granular APA. Specifically, we first introduce the notion of sup-phonemes
to explore more subtle semantic traits of L2 speakers. Second, a depth-wise
separable convolution layer is exploited to better encapsulate the local
context cues at the sub-word level. Finally, we use a score-restraint attention
pooling mechanism to predict the sentence-level scores and optimize the
component models with a multitask learning (MTL) framework. Extensive
experiments carried out on a publicly-available benchmark dataset, viz.
speechocean762, demonstrate the efficacy of our approach in relation to some
cutting-edge baselines.
Comment: Accepted to Interspeech 202
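As a rough sketch of two of the building blocks mentioned above, the following PyTorch code implements a depth-wise separable 1-D convolution over a token sequence and an attention-pooling head that maps token-level features to a single sentence-level score; the dimensions, kernel size and exact pooling form are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depth-wise separable 1-D convolution over a phone/sub-word sequence.

    A generic building block matching the description above; kernel size
    and channel counts are illustrative assumptions.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, channels) -> convolve over time -> same shape back
        y = self.pointwise(self.depthwise(x.transpose(1, 2)))
        return y.transpose(1, 2)

class AttentionPooling(nn.Module):
    """Attention pooling from token-level features to one sentence-level score."""

    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.Linear(dim, 1)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.attn(x), dim=1)   # (batch, seq_len, 1)
        pooled = (weights * x).sum(dim=1)              # (batch, dim)
        return self.scorer(pooled).squeeze(-1)         # sentence-level score
```

In a multi-task setup, several such pooling heads, one per aspect and granularity, would share the convolutional encoder and be optimized jointly.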