Nonparallel Emotional Speech Conversion
We propose a nonparallel data-driven emotional speech conversion method. It
enables the transfer of emotion-related characteristics of a speech signal
while preserving the speaker's identity and linguistic content. Most existing
approaches require parallel data and time alignment, which is not available in
most real applications. We achieve nonparallel training based on an
unsupervised style transfer technique, which learns a translation model between
two distributions instead of a deterministic one-to-one mapping between paired
examples. The conversion model consists of an encoder and a decoder for each
emotion domain. We assume that the speech signal can be decomposed into an
emotion-invariant content code and an emotion-related style code in latent
space. Emotion conversion is performed by extracting and recombining the
content code of the source speech and the style code of the target emotion. We
tested our method on a nonparallel corpus with four emotions. Both subjective
and objective evaluations show the effectiveness of our approach.
Comment: Published in INTERSPEECH 2019, 5 pages, 6 figures. Simulation
available at http://www.jian-gao.org/emoga
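The decomposition described in the abstract can be illustrated with a toy numpy sketch. The linear "encoders" and "decoders" below are random placeholder matrices standing in for the learned networks; the dimensions, the emotion labels, and helpers such as `convert` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoders": project a frame-level feature vector into an
# emotion-invariant content code and an emotion-related style code.
# All matrices are random placeholders for learned networks.
DIM, C_DIM, S_DIM = 16, 8, 4
W_content = rng.standard_normal((C_DIM, DIM))
W_style = {e: rng.standard_normal((S_DIM, DIM)) for e in ("neutral", "happy")}
W_decode = {e: rng.standard_normal((DIM, C_DIM + S_DIM)) for e in ("neutral", "happy")}

def encode(x, emotion):
    """Split a feature vector into (content, style) codes."""
    return W_content @ x, W_style[emotion] @ x

def decode(content, style, emotion):
    """Reassemble a feature vector in the given emotion domain."""
    return W_decode[emotion] @ np.concatenate([content, style])

def convert(x_src, src_emotion, x_ref, tgt_emotion):
    """Emotion conversion: content code of the source + style code of the target."""
    content, _ = encode(x_src, src_emotion)
    _, style = encode(x_ref, tgt_emotion)
    return decode(content, style, tgt_emotion)

x_neutral = rng.standard_normal(DIM)  # source utterance features
x_happy = rng.standard_normal(DIM)    # reference from the target emotion domain
y = convert(x_neutral, "neutral", x_happy, "happy")
print(y.shape)  # (16,)
```

The point of the sketch is only the data flow: conversion never maps paired utterances one-to-one; it extracts the source's content code and recombines it with a style code taken from the target emotion domain.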
Modern Multivariate Methods for Accurate Dialect Classification
We perform discriminant analysis together with principal component analysis for dialect and accent recognition. Since the data matrix exhibits the high-dimension, low-sample-size property, we calculate the principal components and the score matrix in the dual space. On the transformed score matrix, the linear discriminant model does not fit the data well, while the quadratic discriminant model, although superior to LDA, may fail when a large number of principal components is required. Using the Gaussian radial basis function kernel, we calculate the kernel matrix and perform LDA directly on it. Compared with the LDA-PCA method, this kernel approach reduces the in-sample prediction error rate by more than 20% on average.
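A minimal numpy sketch of the kernel step, under stated assumptions: a two-class toy dataset stands in for the dialect data, each sample is represented by its row of the RBF kernel matrix, and a plain two-class Fisher discriminant is fitted on those kernel features. The value of gamma, the synthetic data, and the small ridge term are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy "dialect" classes: few samples, moderate dimension (HDLSS-like).
X0 = rng.standard_normal((10, 50)) + 2.0
X1 = rng.standard_normal((10, 50)) - 2.0
X = np.vstack([X0, X1])
y = np.array([0] * 10 + [1] * 10)

def rbf_kernel(A, B, gamma=0.01):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Represent each sample by its row of the kernel matrix, then fit a
# plain two-class Fisher discriminant on those kernel features.
K = rbf_kernel(X, X)
m0, m1 = K[y == 0].mean(0), K[y == 1].mean(0)
Sw = np.cov(K[y == 0], rowvar=False) + np.cov(K[y == 1], rowvar=False)
w = np.linalg.solve(Sw + 1e-6 * np.eye(len(K)), m1 - m0)  # ridge for stability
threshold = 0.5 * w @ (m0 + m1)
pred = (K @ w > threshold).astype(int)
print((pred == y).mean())  # in-sample accuracy
```

Working on kernel rows rather than raw features is what lets a linear discriminant capture the nonlinear class boundary that plain LDA on PCA scores misses.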
Computational Language Assessment in patients with speech, language, and communication impairments
Speech, language, and communication symptoms enable the early detection,
diagnosis, treatment planning, and monitoring of neurocognitive disease
progression. Nevertheless, traditional manual neurologic assessment, the speech
and language evaluation standard, is time-consuming and resource-intensive for
clinicians. We argue that Computational Language Assessment (C.L.A.) is an
improvement over conventional manual neurological assessment. Using machine
learning, natural language processing, and signal processing, C.L.A.: i.
provides a neuro-cognitive evaluation of speech, language, and communication
in elderly individuals and those at high risk for dementia; ii. facilitates the diagnosis,
prognosis, and therapy efficacy in at-risk and language-impaired populations;
and iii. allows easier extensibility to assess patients from a wide range of
languages. Also, C.L.A. employs Artificial Intelligence models to inform theory
on the relationship between language symptoms and their neural bases. It
significantly advances our ability to optimize the prevention and treatment of
elderly individuals with communication disorders, allowing them to age
gracefully with social engagement.
Comment: 36 pages, 2 figures, to be submitted
A Review of Deep Learning Techniques for Speech Processing
The field of speech processing has undergone a transformative shift with the
advent of deep learning. The use of multiple processing layers has enabled the
creation of models capable of extracting intricate features from speech data.
This development has paved the way for unparalleled advancements in automatic
speech recognition, text-to-speech synthesis, and
emotion recognition, propelling the performance of these tasks to unprecedented
heights. The power of deep learning techniques has opened up new avenues for
research and innovation in the field of speech processing, with far-reaching
implications for a range of industries and applications. This review paper
provides a comprehensive overview of the key deep learning models and their
applications in speech-processing tasks. We begin by tracing the evolution of
speech processing research, from early approaches, such as MFCC and HMM, to
more recent advances in deep learning architectures, such as CNNs, RNNs,
transformers, conformers, and diffusion models. We categorize the approaches
and compare their strengths and weaknesses for solving speech-processing tasks.
Furthermore, we extensively cover various speech-processing tasks, datasets,
and benchmarks used in the literature and describe how different deep-learning
networks have been utilized to tackle these tasks. Additionally, we discuss the
challenges and future directions of deep learning in speech processing,
including the need for more parameter-efficient, interpretable models and the
potential of deep learning for multimodal speech processing. By examining the
field's evolution, comparing and contrasting different approaches, and
highlighting future directions and challenges, we hope to inspire further
research in this exciting and rapidly advancing field.
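As a concrete reminder of the classical front end the review's history begins with, here is a simplified, numpy-only MFCC extractor (framing, Hamming window, power spectrum, mel filterbank, log, DCT-II). The frame sizes and filter counts are common defaults, not values from the paper, and production code would normally use a library such as librosa.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters evenly spaced on the mel scale."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):            # rising edge of the triangle
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):            # falling edge
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_filters=26, n_ceps=13):
    """Frame -> window -> power spectrum -> mel filterbank -> log -> DCT-II."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    energies = power @ mel_filterbank(n_filters, n_fft, sr).T
    log_e = np.log(energies + 1e-10)
    # DCT-II matrix, keeping the first n_ceps cepstral coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filters)
    return log_e @ dct.T

tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of A4 at 16 kHz
feats = mfcc(tone)
print(feats.shape)  # (frames, 13)
```

Features like these fed the HMM systems the review mentions; the deep architectures it surveys (CNNs, RNNs, transformers, conformers) increasingly learn such representations directly from the waveform or spectrogram.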
Using State-of-the-Art Speech Models to Evaluate Oral Reading Fluency in Ghana
This paper reports on a set of three recent experiments utilizing large-scale
speech models to evaluate the oral reading fluency (ORF) of students in Ghana.
While ORF is a well-established measure of foundational literacy, assessing it
typically requires one-on-one sessions between a student and a trained
evaluator, a process that is time-consuming and costly. Automating the
evaluation of ORF could support better literacy instruction, particularly in
education contexts where formative assessment is uncommon due to large class
sizes and limited resources. To our knowledge, this research is among the first
to examine the use of the most recent versions of large-scale speech models
(Whisper V2 and wav2vec 2.0) for ORF assessment in the Global South.
We find that Whisper V2 produces transcriptions of Ghanaian students reading
aloud with a Word Error Rate of 13.5. This is close to the model's average WER
on adult speech (12.8) and would have been considered state-of-the-art for
children's speech transcription only a few years ago. We also find that when
these transcriptions are used to produce fully automated ORF scores, they
closely align with scores generated by expert human graders, with a correlation
coefficient of 0.96. Importantly, these results were achieved on a
representative dataset (i.e., students with regional accents, recordings taken
in actual classrooms), using a free and publicly available speech model out of
the box (i.e., no fine-tuning). This suggests that using large-scale speech
models to assess ORF may be feasible to implement and scale in lower-resource,
linguistically diverse educational contexts.
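The Word Error Rate figures quoted above are computed in the standard way: word-level edit distance between the model's transcript and a reference transcript, divided by the number of reference words. A minimal pure-Python sketch (the example sentences are invented, not from the study's data):

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref)

# One substitution out of five reference words -> 20% WER.
print(round(100 * wer("the boy reads a book", "the boy read a book"), 1))  # 20.0
```

An automated ORF pipeline of the kind described would then turn such aligned transcripts into words-correct-per-minute scores, which the paper reports correlate with expert human grading at 0.96.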
Current trends in multilingual speech processing
In this paper, we describe recent work at Idiap Research Institute in the domain of multilingual speech processing and provide some insights into emerging challenges for the research community. Multilingual speech processing has been a topic of ongoing interest to the research community for many years and the field is now receiving renewed interest owing to two strong driving forces. Firstly, technical advances in speech recognition and synthesis are posing new challenges and opportunities to researchers. For example, discriminative features are seeing wide application by the speech recognition community, but additional issues arise when using such features in a multilingual setting. Another example is the apparent convergence of speech recognition and speech synthesis technologies in the form of statistical parametric methodologies. This convergence enables the investigation of new approaches to unified modelling for automatic speech recognition and text-to-speech synthesis (TTS) as well as cross-lingual speaker adaptation for TTS. The second driving force is the impetus being provided by both government and industry for technologies to help break down domestic and international language barriers, these also being barriers to the expansion of policy and commerce. Speech-to-speech and speech-to-text translation are thus emerging as key technologies at the heart of which lies multilingual speech processing.