Towards Zero-shot Learning for Automatic Phonemic Transcription
Automatic phonemic transcription tools are useful for low-resource language
documentation. However, due to the lack of training sets, only a tiny fraction
of languages have phonemic transcription tools. Fortunately, multilingual
acoustic modeling provides a solution given limited audio training data. A more
challenging problem is to build phonemic transcribers for languages with zero
training data. The difficulty of this task is that phoneme inventories often
differ between the training languages and the target language, making it
infeasible to recognize unseen phonemes. In this work, we address this problem
by adopting the idea of zero-shot learning. Our model is able to recognize
unseen phonemes in the target language without any training data. In our model,
we decompose phonemes into corresponding articulatory attributes such as vowel
and consonant. Instead of predicting phonemes directly, we first predict
distributions over articulatory attributes, and then compute phoneme
distributions with a customized acoustic model. We evaluate our model by
training it using 13 languages and testing it using 7 unseen languages. We find
that it achieves a phoneme error rate 7.7% better on average than a standard
multilingual model.
Comment: AAAI 2020
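The attribute-to-phoneme composition described above lends itself to a short illustration. Below is a minimal sketch, assuming a toy attribute inventory and hypothetical binary signatures (the paper's actual feature set and acoustic model are not reproduced here): scores for a possibly unseen phoneme inventory are composed from per-frame attribute posteriors.

```python
# Minimal sketch of zero-shot phoneme scoring via articulatory attributes.
# The attribute inventory and signatures are illustrative, not the paper's.
import numpy as np

ATTRIBUTES = ["vowel", "consonant", "voiced", "nasal", "high"]

# Hypothetical signatures: 1 if the phoneme carries the attribute.
SIGNATURES = {
    "a": np.array([1, 0, 1, 0, 0]),
    "i": np.array([1, 0, 1, 0, 1]),
    "m": np.array([0, 1, 1, 1, 0]),
    "k": np.array([0, 1, 0, 0, 0]),
}

def phoneme_scores(attr_probs: np.ndarray) -> dict:
    """Score each phoneme from one frame's attribute distribution.

    attr_probs[j] is P(attribute j | acoustic frame). A phoneme's score is
    the product of its required attribute probabilities and the complement
    of the excluded ones (a naive independence assumption).
    """
    scores = {}
    for ph, sig in SIGNATURES.items():
        p = np.where(sig == 1, attr_probs, 1.0 - attr_probs)
        scores[ph] = float(np.prod(p))
    total = sum(scores.values())
    # Normalize into a distribution over the (possibly unseen) inventory.
    return {ph: s / total for ph, s in scores.items()}

frame = np.array([0.9, 0.1, 0.8, 0.05, 0.7])  # attribute posteriors
print(phoneme_scores(frame))
```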
Universal Automatic Phonetic Transcription into the International Phonetic Alphabet
This paper presents a state-of-the-art model for transcribing speech in any
language into the International Phonetic Alphabet (IPA). Transcription of
spoken languages into IPA is an essential yet time-consuming process in
language documentation, and even partially automating this process has the
potential to drastically speed up the documentation of endangered languages.
Like the previous best speech-to-IPA model (Wav2Vec2Phoneme), our model is
based on wav2vec 2.0 and is fine-tuned to predict IPA from audio input. We use
training data from seven languages from CommonVoice 11.0, transcribed into IPA
semi-automatically. Although this training dataset is much smaller than
Wav2Vec2Phoneme's, its higher quality lets our model achieve comparable or
better results. Furthermore, we show that the quality of our universal
speech-to-IPA model is close to that of human annotators.
Comment: 5 pages, 7 tables
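As a rough illustration of the fine-tuned wav2vec 2.0 approach, the sketch below runs greedy CTC decoding with the publicly released Wav2Vec2Phoneme checkpoint (facebook/wav2vec2-lv-60-espeak-cv-ft) as a stand-in; the paper's own model and IPA vocabulary are not assumed here.

```python
# Hedged sketch: phoneme transcription with a wav2vec 2.0 CTC model.
# The public Wav2Vec2Phoneme checkpoint is a stand-in for the paper's model.
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

MODEL_ID = "facebook/wav2vec2-lv-60-espeak-cv-ft"  # phoneme-level CTC model
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

def transcribe_phonemes(waveform: torch.Tensor,
                        sample_rate: int = 16_000) -> str:
    """Greedy CTC decode of a mono 16 kHz waveform into phoneme symbols."""
    inputs = processor(waveform.numpy(), sampling_rate=sample_rate,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]

# One second of silence, just to exercise the pipeline end to end.
print(transcribe_phonemes(torch.zeros(16_000)))
```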
Open-vocabulary keyword spotting in any language through multilingual contrastive speech-phoneme pretraining
In this paper, we introduce a massively multilingual speech corpus with
fine-grained phonemic transcriptions, encompassing more than 115 languages from
diverse language families. Based on this multilingual dataset, we propose
CLAP-IPA, a multilingual phoneme-speech contrastive embedding model capable of
open-vocabulary matching between speech signals and phonemically transcribed
keywords or arbitrary phrases. The proposed model has been tested on two
fieldwork speech corpora in 97 unseen languages, exhibiting strong
generalizability across languages. Comparison with a text-based model shows
that using phonemes as modeling units enables much better cross-linguistic
generalization than orthographic text.
Comment: Preprint; work in progress
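The matching step can be pictured with a minimal dual-encoder sketch. Both encoders below are stand-ins (random unit vectors), so only the cosine-similarity retrieval mechanics are shown; the trained CLAP-IPA encoders are assumed, not reproduced.

```python
# Minimal sketch of open-vocabulary keyword spotting with a CLAP-IPA-style
# dual encoder. The encoders are placeholders; the real model embeds speech
# and IPA keyword strings into one shared space.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

def encode_speech(waveform: np.ndarray) -> np.ndarray:
    """Stand-in speech encoder: any fixed mapping to a DIM-d unit vector."""
    v = rng.standard_normal(DIM)  # placeholder, not a trained model
    return v / np.linalg.norm(v)

def encode_keyword(ipa: str) -> np.ndarray:
    """Stand-in phoneme-sequence encoder over an IPA string."""
    v = rng.standard_normal(DIM)  # placeholder, not a trained model
    return v / np.linalg.norm(v)

def spot(waveform: np.ndarray, keywords: list[str],
         threshold: float = 0.5) -> list[tuple[str, float]]:
    """Return keywords whose embedding is cosine-similar to the speech clip."""
    s = encode_speech(waveform)
    hits = [(kw, float(s @ encode_keyword(kw))) for kw in keywords]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

print(spot(np.zeros(16_000), ["tʃai", "mɔku", "pʰala"]))
```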
Universal Phone Recognition with a Multilingual Allophone System
Multilingual models can improve language processing, particularly for low
resource situations, by sharing parameters across languages. Multilingual
acoustic models, however, generally ignore the difference between phonemes
(sounds that can support lexical contrasts in a particular language) and their
corresponding phones (the sounds that are actually spoken, which are language
independent). This can lead to performance degradation when combining a variety
of training languages, as identically annotated phonemes can actually
correspond to several different underlying phonetic realizations. In this work,
we propose a joint model of both language-independent phone and
language-dependent phoneme distributions. In multilingual ASR experiments over
11 languages, we find that this model improves testing performance by 2%
phoneme error rate absolute in low-resource conditions. Additionally, because
we are explicitly modeling language-independent phones, we can build a
(nearly-)universal phone recognizer that, when combined with PHOIBLE, a large,
manually curated database of phone inventories, can be customized into 2,000
language-dependent recognizers. Experiments on two low-resource indigenous
languages, Inuktitut and Tusom, show that our recognizer achieves phone
accuracy improvements of more than 17%, moving a step closer to speech
recognition for all languages in the world.
Comment: ICASSP 2020
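A toy sketch of the allophone idea follows: language-independent phone logits are pooled (here with max) into language-dependent phoneme logits via a per-language allophone map. The inventories and the choice of max-pooling are illustrative assumptions, not the paper's exact layer.

```python
# Sketch of an allophone layer: the network predicts universal phone logits,
# and each language's phoneme logit is pooled over that phoneme's allophones.
import numpy as np

PHONES = ["p", "pʰ", "b", "t", "tʰ", "d"]  # shared universal phone set

# Toy allophone map for one language: phoneme -> realizable phones.
ALLOPHONES = {
    "/p/": ["p", "pʰ"],  # e.g. aspiration variants of one phoneme
    "/b/": ["b"],
    "/t/": ["t", "tʰ"],
    "/d/": ["d"],
}

def phoneme_logits(phone_logits: np.ndarray) -> dict:
    """Map universal phone logits to language-dependent phoneme logits."""
    idx = {p: i for i, p in enumerate(PHONES)}
    return {
        phoneme: float(max(phone_logits[idx[p]] for p in phones))
        for phoneme, phones in ALLOPHONES.items()
    }

frame_logits = np.array([2.1, 0.3, -1.0, 0.5, 1.8, -0.2])
print(phoneme_logits(frame_logits))
```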
Deciphering Speech: a Zero-Resource Approach to Cross-Lingual Transfer in ASR
We present a method for cross-lingual training of an ASR system using absolutely
no transcribed training data from the target language, and with no phonetic
knowledge of the language in question. Our approach uses a novel application of
a decipherment algorithm, which operates given only unpaired speech and text
data from the target language. We apply this decipherment to phone sequences
generated by a universal phone recogniser trained on out-of-language speech
corpora, which we follow with flat-start semi-supervised training to obtain an
acoustic model for the new language. To the best of our knowledge, this is the
first practical approach to zero-resource cross-lingual ASR which does not rely
on any hand-crafted phonetic information. We carry out experiments on read
speech from the GlobalPhone corpus, and show that it is possible to learn a
decipherment model on just 20 minutes of data from the target language. When
used to generate pseudo-labels for semi-supervised training, we obtain WERs
that range from 32.5% to just 1.9% absolute worse than the equivalent fully
supervised models trained on the same data.
Comment: Submitted to Interspeech 2022
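The unpaired setup can be illustrated with a deliberately naive stand-in for the decipherment step: mapping recognized phone symbols to target-language symbols by unigram frequency rank. This is only a toy baseline; the paper's decipherment algorithm is substantially richer.

```python
# Toy stand-in for decipherment over unpaired speech and text: align the
# recognized-phone alphabet to the target alphabet by frequency rank.
from collections import Counter

def frequency_decipher(recognized: list[str], target_text: str) -> dict:
    """Map recognized phone symbols to target symbols by frequency rank."""
    src = [s for s, _ in Counter(recognized).most_common()]
    tgt = [t for t, _ in Counter(target_text.replace(" ", "")).most_common()]
    return dict(zip(src, tgt))  # truncates to the shorter alphabet

# Unpaired inputs: phone strings from a universal recognizer, plus raw text.
phones = list("abbacab")   # placeholder recognizer output
text = "noon on nonon"     # placeholder target-language text
mapping = frequency_decipher(phones, text)
pseudo_label = "".join(mapping.get(p, "?") for p in phones)
print(mapping, pseudo_label)
```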
Configurable privacy-preserving automatic speech recognition
Voice assistive technologies have given rise to far-reaching privacy and security concerns. In this paper, we investigate whether modular automatic speech recognition (ASR) can improve privacy in voice assistive systems by combining independently trained separation, recognition, and discretization modules to design configurable privacy-preserving ASR systems. We evaluate privacy concerns and the effects of applying various state-of-the-art techniques at each stage of the system, and report results using task-specific metrics (i.e. WER, ABX, and accuracy). We show that overlapping speech inputs to ASR systems present further privacy concerns, and how these may be mitigated using speech separation and optimization techniques. Our discretization module is shown to minimize paralinguistic privacy leakage from ASR acoustic models to levels commensurate with random guessing. We show that voice privacy can be configurable, and argue that this presents new opportunities for privacy-preserving applications incorporating ASR.
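One way to picture the modular design is a pipeline that composes independently trained stages behind a single interface. The stage ordering and the stub components below are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of a modular privacy-preserving ASR pipeline: separation,
# discretization, and recognition stages composed behind one interface.
from typing import Callable
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]

class ModularASR:
    def __init__(self, separate: Stage, discretize: Stage,
                 recognize: Callable[[np.ndarray], str]):
        self.separate = separate      # e.g. a speech separation model
        self.discretize = discretize  # e.g. unit discretization intended to
                                      # strip paralinguistic cues
        self.recognize = recognize    # the ASR acoustic model + decoder

    def transcribe(self, mixture: np.ndarray) -> str:
        target = self.separate(mixture)
        units = self.discretize(target)
        return self.recognize(units)

# Stub configuration just to exercise the interface.
asr = ModularASR(
    separate=lambda x: x,
    discretize=lambda x: np.sign(x),
    recognize=lambda u: "<transcript placeholder>",
)
print(asr.transcribe(np.zeros(16_000)))
```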
Cross-Lingual Transfer Learning Approach to Pronunciation Error Detection via Latent Phonetic Representation
Extensive research has been conducted on CALL systems for pronunciation error detection to automate language improvement through self-evaluation. However, many previous approaches have relied on HMM or neural network hybrid models which, although proven effective, often require phonetically labelled L2 speech data that is expensive and often scarce. This paper discusses a "zero-shot" transfer learning approach to detecting phonetic errors in L2 English speech by native Japanese speakers using solely unaligned, phonetically labelled native-language speech. The proposed method introduces a simple base architecture which utilizes the XLSR-Wav2Vec2.0 model pre-trained on unlabelled multilingual speech. Phoneme mapping for each language is determined based on differences in the articulation of similar phonemes. The method achieves a Phonetic Error Rate of 0.214 on erroneous L2 speech after fine-tuning on 70 hours of speech with low-resource automated phonetic labelling, and additionally models phonemes of the speaker's native language effectively without the need for L2 speech fine-tuning.
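The reported metric, Phonetic Error Rate, is standardly computed as Levenshtein edit distance over phoneme sequences normalized by reference length; a self-contained sketch:

```python
# Phonetic Error Rate as normalized Levenshtein distance over phonemes.
def phonetic_error_rate(ref: list[str], hyp: list[str]) -> float:
    """Edit distance (sub/ins/del) between phoneme sequences / len(ref)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# A substituted initial phoneme in a 4-phoneme word gives PER 0.25.
print(phonetic_error_rate(["θ", "ɪ", "ŋ", "k"], ["s", "ɪ", "ŋ", "k"]))
```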
Automated speech tools for helping communities process restricted-access corpora for language revival efforts
Many archival recordings of speech from endangered languages remain unannotated and inaccessible to community members and language learning programs. One bottleneck is the time-intensive nature of annotation. An even narrower bottleneck occurs for recordings with access constraints, such as language that must be vetted or filtered by authorised community members before annotation can begin. We propose a privacy-preserving workflow to widen both bottlenecks for recordings where speech in the endangered language is intermixed with a more widely used language such as English for meta-linguistic commentary and questions (e.g. What is the word for 'tree'?). We integrate voice activity detection (VAD), spoken language identification (SLI), and automatic speech recognition (ASR) to transcribe the metalinguistic content, which an authorised person can quickly scan to triage recordings that can be annotated by people with lower levels of access. We report work in progress on processing 136 hours of archival audio containing a mix of English and Muruwari. Our collaborative work with the Muruwari custodian of the archival materials shows that this workflow reduces metalanguage transcription time by 20% even with minimal amounts of annotated training data: 10 utterances per language for SLI, and for ASR at most 39 minutes, possibly as little as 39 seconds.
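The triage workflow can be sketched as a chain of VAD, SLI, and ASR components in which only metalanguage segments are transcribed. All three components below are stubs, and the language code and helper names are placeholders for illustration.

```python
# Hedged sketch of the VAD -> SLI -> ASR triage workflow: transcribe only
# metalanguage (e.g. English) segments so an authorised reviewer can scan them.
from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np

@dataclass
class Segment:
    start: float
    end: float
    audio: np.ndarray

def triage(recording: np.ndarray,
           vad: Callable[[np.ndarray], List[Segment]],
           sli: Callable[[Segment], str],
           asr: Callable[[Segment], str],
           metalanguage: str = "eng") -> List[Tuple[float, float, str]]:
    """Transcribe metalanguage segments; skip restricted-language speech."""
    out = []
    for seg in vad(recording):
        if sli(seg) == metalanguage:
            out.append((seg.start, seg.end, asr(seg)))
    return out

# Stub components just to exercise the control flow.
stub_vad = lambda x: [Segment(0.0, 2.0, x[:32_000])]
print(triage(np.zeros(64_000), stub_vad, lambda s: "eng",
             lambda s: "what is the word for 'tree'?"))
```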