Fine-tuning on Clean Data for End-to-End Speech Translation: FBK @ IWSLT 2018
This paper describes FBK's submission to the end-to-end English-German speech
translation task at IWSLT 2018. Our system relies on a state-of-the-art model
based on LSTMs and CNNs, where the CNNs reduce the temporal
dimension of the audio input, which is in general much larger than that of
machine translation input. Our model was trained only on the audio-to-text parallel
data released for the task, and fine-tuned on cleaned subsets of the original
training corpus. The addition of weight normalization and label smoothing
improved the baseline system by 1.0 BLEU point on our validation set. The final
submission also featured checkpoint averaging within a training run and
ensemble decoding of models trained during multiple runs. On test data, our
best single model obtained a BLEU score of 9.7, while the ensemble obtained a
BLEU score of 10.24.
Comment: 6 pages, 2 figures, system description at the 15th International Workshop on Spoken Language Translation (IWSLT) 2018
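The checkpoint averaging mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the FBK system's code: each checkpoint is represented as a dict mapping parameter names to lists of floats (stand-ins for tensors), and the averaged model takes the element-wise mean across checkpoints from one training run.

```python
def average_checkpoints(checkpoints):
    """Element-wise mean of several parameter dictionaries.

    `checkpoints` is a list of dicts with identical keys; each value is a
    list of floats standing in for a parameter tensor.
    """
    n = len(checkpoints)
    averaged = {}
    for name in checkpoints[0]:
        # Zip the corresponding parameter across all checkpoints and
        # average each coordinate.
        averaged[name] = [
            sum(vals) / n for vals in zip(*(c[name] for c in checkpoints))
        ]
    return averaged

# Two toy checkpoints from the same run (illustrative values only).
ckpts = [
    {"w": [1.0, 2.0], "b": [0.0]},
    {"w": [3.0, 4.0], "b": [2.0]},
]
print(average_checkpoints(ckpts))  # {'w': [2.0, 3.0], 'b': [1.0]}
```

In practice this is applied to the last few saved checkpoints of a run; ensemble decoding, by contrast, keeps the models separate and combines their output distributions at inference time.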
Automatic Transcription of Northern Prinmi Oral Art: Approaches and Challenges to Automatic Speech Recognition for Language Documentation
One significant issue facing language documentation efforts is the transcription bottleneck: each documented recording must be transcribed and annotated, and these tasks are extremely labor intensive (Ćavar et al., 2016). Researchers have sought to accelerate these tasks with partial automation via forced alignment, natural language processing, and automatic speech recognition (ASR) (Neubig et al., 2020). Neural network approaches, especially transformer-based ones, have enabled large advances in ASR over the last decade. Models like XLSR-53 promise improved performance on under-resourced languages by leveraging massive datasets from many different languages (Conneau et al., 2020). This project extends these efforts to a novel context, applying XLSR-53 to Northern Prinmi, a Tibeto-Burman Qiangic language spoken in Southwest China (Daudey & Pincuo, 2020).
Specifically, this thesis aims to answer two questions. First, is the XLSR-53 ASR model useful for first-pass transcription of oral art recordings from Northern Prinmi, an under-resourced tonal language? Second, does preprocessing target transcripts to combine grapheme clusters (multi-character representations of lexical tones, and characters with modifying diacritics) into more phonologically salient units improve the model's predictions? Results indicate that, with substantial adaptations, XLSR-53 will be useful for this task, and that preprocessing to combine grapheme clusters does improve model performance.
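The grapheme-cluster preprocessing described above, grouping a base character with the combining marks (diacritics, tone marks) that follow it, could look roughly like this. This is a hedged sketch, not the thesis's actual pipeline; it uses the standard-library `unicodedata` module to detect combining characters.

```python
import unicodedata

def merge_grapheme_clusters(text):
    """Group each base character with its trailing combining marks
    into a single token, so diacritics and tone marks are not split
    from the character they modify."""
    clusters = []
    for ch in text:
        # A nonzero combining class means `ch` is a combining mark;
        # attach it to the preceding base character.
        if clusters and unicodedata.combining(ch):
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return clusters

# "á" encoded as base 'a' plus combining acute accent (U+0301):
print(merge_grapheme_clusters("ba\u0301"))  # ['b', 'á']
```

The intuition is that the ASR model then predicts one phonologically salient unit per cluster instead of separately predicting a base letter and its tone or diacritic mark.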
Investigating Language Impact in Bilingual Approaches for Computational Language Documentation
For endangered languages, data collection campaigns must accommodate the
challenge that many of these languages come from oral traditions, and producing
transcriptions is costly. It is therefore essential to translate the recordings into a
widely spoken language to ensure their interpretability. In this
paper we investigate how the choice of translation language affects the
posterior documentation work and potential automatic approaches which will work
on top of the produced bilingual corpus. For answering this question, we use
the MaSS multilingual speech corpus (Boito et al., 2020) for creating 56
bilingual pairs that we apply to the task of low-resource unsupervised word
segmentation and alignment. Our results highlight that the choice of language
for translation influences the word segmentation performance, and that
different lexicons are learned by using different aligned translations. Lastly,
this paper proposes a hybrid approach for bilingual word segmentation,
combining boundary clues extracted from a non-parametric Bayesian model
(Goldwater et al., 2009a) with the attentional word segmentation neural model
from Godard et al. (2018). Our results suggest that incorporating these clues
into the neural models' input representation increases their translation and
alignment quality, especially for challenging language pairs.
Comment: Accepted to the 1st Joint SLTU and CCURL Workshop
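Word segmentation quality of the kind compared above is commonly scored with a boundary F1, matching predicted word-boundary positions against gold boundaries. The sketch below is illustrative only (it is not the paper's evaluation code), with boundaries represented as sets of inter-symbol indices.

```python
def boundary_f1(gold, pred):
    """F1 over word-boundary positions.

    `gold` and `pred` are sets of integer indices marking boundaries
    between symbols in an unsegmented sequence.
    """
    tp = len(gold & pred)  # boundaries predicted in the right place
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two of three boundaries recovered, one spurious: P = R = 2/3.
print(boundary_f1({2, 5, 9}, {2, 5, 7}))  # 0.666...
```

A metric like this makes the paper's finding concrete: the same speech side segmented against different translation languages can yield different boundary scores, and hence different learned lexicons.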