Automatic Speech Recognition for Low-Resource and Morphologically Complex Languages
The application of deep neural networks to acoustic modeling for automatic speech recognition (ASR) has resulted in dramatic decreases in word error rates, allowing this technology to be used in smartphones and personal home assistants for high-resource languages. Developing ASR models of this caliber, however, requires hundreds or thousands of hours of transcribed speech recordings, which presents challenges for most of the world’s languages. In this work, we investigate the applicability of three distinct architectures that have previously been used for ASR in languages with limited training resources. We tested these architectures using publicly available ASR datasets for several typologically and orthographically diverse languages, whose data was produced under a variety of conditions using different speech collection strategies, practices, and equipment. Additionally, we performed data augmentation on this audio, increasing the amount of training data nearly tenfold and synthetically creating a higher-resource training condition. We modified the architectures and their individual components and explored their parameters to find a best-fit combination of features and modeling schemas for a given language’s morphology. Our results point to the importance of considering language-specific and corpus-specific factors and experimenting with multiple approaches when developing ASR systems for resource-constrained languages.
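The data augmentation described in this abstract can be illustrated with a minimal sketch: one common strategy is speed perturbation, where each utterance is resampled at several rates so one recording yields multiple training examples. The interpolation-based resampler and the specific factors below are illustrative assumptions, not the paper's actual pipeline (production systems typically use a proper resampler such as sox or torchaudio).

```python
import numpy as np

def speed_perturb(audio: np.ndarray, factor: float) -> np.ndarray:
    """Resample a mono waveform by `factor` via linear interpolation.

    factor > 1.0 speeds the audio up (fewer samples); factor < 1.0
    slows it down. A crude stand-in for a real resampler.
    """
    n_out = int(round(len(audio) / factor))
    old_idx = np.linspace(0, len(audio) - 1, num=n_out)
    return np.interp(old_idx, np.arange(len(audio)), audio)

# One utterance (a 1 s, 440 Hz tone at 16 kHz) becomes several
# augmented copies at different speeds.
utterance = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
augmented = [speed_perturb(utterance, f) for f in (0.9, 1.0, 1.1)]
```

Combined with other perturbations (volume, noise, pitch), a handful of factors per transform is how a corpus can grow close to tenfold.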
SQuId: Measuring Speech Naturalness in Many Languages
Much of text-to-speech research relies on human evaluation, which incurs
heavy costs and slows down the development process. The problem is particularly
acute in heavily multilingual applications, where recruiting and polling judges
can take weeks. We introduce SQuId (Speech Quality Identification), a
multilingual naturalness prediction model trained on over a million ratings and
tested in 65 locales, the largest effort of this type to date. The main insight
is that training one model on many locales consistently outperforms mono-locale
baselines. We present our task and model, and show that it outperforms a
competitive baseline based on w2v-BERT and VoiceMOS by 50.0%. We then
demonstrate the effectiveness of cross-locale transfer during fine-tuning and
highlight its effect on zero-shot locales, i.e., locales for which there is no
fine-tuning data. Through a series of analyses, we highlight the role of
non-linguistic effects such as sound artifacts in cross-locale transfer.
Finally, we present the effect of our design decisions, e.g., model size,
pre-training diversity, and language rebalancing, with several ablation
experiments. Comment: Accepted at ICASSP 2023, with additional material in the appendix
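The abstract's main insight, that one model trained on many locales beats mono-locale baselines, can be illustrated with a toy sketch: when every locale shares structure and per-locale data is scarce, pooling wins. The synthetic features, data sizes, and linear least-squares predictor below are illustrative assumptions and have nothing to do with SQuId's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: every locale shares the same mapping from acoustic
# features to a naturalness rating, but each locale has few ratings.
n_locales, n_per_locale, n_feats = 10, 5, 8
true_w = rng.normal(size=n_feats)

X = rng.normal(size=(n_locales, n_per_locale, n_feats))
y = X @ true_w + 0.5 * rng.normal(size=(n_locales, n_per_locale))

X_test = rng.normal(size=(200, n_feats))
y_test = X_test @ true_w

def fit(Xtr, ytr):
    w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return w

# Mono-locale: fit on one locale's five ratings only (underdetermined).
w_mono = fit(X[0], y[0])
# Multilingual: pool all locales' ratings into one model.
w_pooled = fit(X.reshape(-1, n_feats), y.reshape(-1))

mse_mono = np.mean((X_test @ w_mono - y_test) ** 2)
mse_pooled = np.mean((X_test @ w_pooled - y_test) ** 2)
```

In this toy setting the pooled model generalizes far better, mirroring the cross-locale transfer effect the abstract reports at scale.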
DPP-TTS: Diversifying prosodic features of speech via determinantal point processes
With the rapid advancement in deep generative models, recent neural
Text-To-Speech (TTS) models have succeeded in synthesizing human-like speech.
There have been some efforts to generate speech with various prosody beyond
monotonous prosody patterns. However, previous works have several limitations.
First, typical TTS models depend on the scaled sampling temperature for
boosting the diversity of prosody. Speech samples generated at high sampling
temperatures often lack perceptual prosodic diversity, which can adversely
affect the naturalness of the speech. Second, the diversity among samples is
neglected since the sampling procedure often focuses on a single speech sample
rather than multiple ones. In this paper, we propose DPP-TTS: a text-to-speech
model based on Determinantal Point Processes (DPPs) with a prosody diversifying
module. Our TTS model is capable of generating speech samples that
simultaneously consider perceptual diversity in each sample and among multiple
samples. We demonstrate that DPP-TTS generates speech samples with more
diversified prosody than the baselines in a side-by-side comparison test that
also accounts for the naturalness of the speech. Comment: EMNLP 202
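The core idea of a DPP, selecting a set of items that are individually good yet mutually dissimilar, can be sketched with greedy MAP inference on a similarity kernel. The toy "prosody embeddings" and the plain log-determinant greedy rule below are illustrative assumptions, not DPP-TTS's actual sampling module.

```python
import numpy as np

def greedy_dpp(kernel: np.ndarray, k: int) -> list[int]:
    """Greedy MAP inference for a DPP: at each step, add the item that
    yields the largest log-determinant of the selected kernel submatrix,
    favoring items that are high-quality yet dissimilar to those already
    picked (similar items shrink the determinant toward zero)."""
    selected: list[int] = []
    for _ in range(k):
        best_item, best_logdet = -1, -np.inf
        for i in range(len(kernel)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(kernel[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best_item, best_logdet = i, logdet
        selected.append(best_item)
    return selected

# Toy "prosody embeddings": items 0, 1, 3 are near-duplicates; 2 differs.
emb = np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0], [0.98, 0.02]])
K = emb @ emb.T + 1e-6 * np.eye(4)   # PSD similarity kernel
picked = greedy_dpp(K, 2)            # selects a diverse pair
```

A temperature-scaled sampler would happily return two of the near-duplicates; the determinant objective is what enforces diversity among the returned samples.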
Automatic Speech Recognition for Low-resource Languages and Accents Using Multilingual and Crosslingual Information
This thesis explores methods to rapidly bootstrap automatic speech recognition systems for languages that lack resources for speech and language processing. We focus on approaches that use data from multiple languages to improve performance for those languages at different levels, such as feature extraction, acoustic modeling, and language modeling. On the application side, this thesis also includes research on non-native and code-switching speech.
Source side pre-ordering using recurrent neural networks for English-Myanmar machine translation
Word reordering remains one of the challenging problems for machine translation when translating between language pairs with different word orders, e.g. English and Myanmar. Without reordering between these languages, a source sentence may be translated directly with a similar word order, and the translation may not be meaningful. Myanmar is a subject-object-verb (SOV) language, so effective reordering is essential for translation. In this paper, we applied a pre-ordering approach using recurrent neural networks to pre-order the words of the source English sentence into the target Myanmar word order. This neural pre-ordering model is automatically derived from parallel word-aligned data with syntactic and lexical features based on dependency parse trees of the source sentences. It can generate arbitrary permutations, possibly non-local within the sentence, and can be incorporated into English-Myanmar machine translation. We exploited the model to reorder English sentences into Myanmar-like word order as a preprocessing stage for machine translation, obtaining translation quality improvements comparable to a baseline rule-based pre-ordering approach on the Asian Language Treebank (ALT) corpus.
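The way a reference permutation is typically derived from word-aligned parallel data, the training signal this abstract's neural pre-orderer learns from, can be sketched as follows: sort source words by the average target position they align to. The heuristic, the example sentence, and the alignment indices are illustrative assumptions, not the paper's exact procedure.

```python
from collections import defaultdict

def preorder_reference(n_src: int, alignments: list) -> list:
    """Derive a target-order permutation of source positions from word
    alignments given as (src_idx, tgt_idx) pairs: sort source words by
    the average target position they align to; unaligned words fall
    back to their own index, which also serves as a tie-breaker."""
    aligned = defaultdict(list)
    for s, t in alignments:
        aligned[s].append(t)

    def key(s):
        ts = aligned.get(s)
        return ((sum(ts) / len(ts)) if ts else float(s), s)

    return sorted(range(n_src), key=key)

# English "she ate an apple" with hypothetical alignments to an SOV
# target: the verb "ate" aligns to the final target position.
perm = preorder_reference(4, [(0, 0), (1, 3), (2, 1), (3, 2)])
tokens = ["she", "ate", "an", "apple"]
reordered = [tokens[i] for i in perm]   # SOV-like: verb moved last
```

A model trained to predict such permutations from dependency-tree features can then reorder unseen source sentences before translation.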