7,624 research outputs found
Bridging the Gap between Pre-Training and Fine-Tuning for End-to-End Speech Translation
End-to-end speech translation, a hot topic in recent years, aims to translate
a segment of audio into a specific language with an end-to-end model.
Conventional approaches employ multi-task learning and pre-training methods for
this task, but they suffer from the huge gap between pre-training and
fine-tuning. To address these issues, we propose a Tandem Connectionist
Encoding Network (TCEN) which bridges the gap by reusing all subnets in
fine-tuning, keeping the roles of subnets consistent, and pre-training the
attention module. Furthermore, we propose two simple but effective methods to
guarantee the speech encoder outputs and the MT encoder inputs are consistent
in terms of semantic representation and sequence length. Experimental results
show that our model outperforms baselines by 2.2 BLEU on a large benchmark
dataset. Comment: AAAI 2020
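As an aside for readers implementing this idea: keeping the speech encoder
outputs and the MT encoder inputs consistent in sequence length is commonly
done by collapsing the frame sequence with an auxiliary CTC head. Below is a
minimal sketch of such a shrinking step, assuming greedy CTC predictions; the
function name and the frame-averaging strategy are illustrative, not the
paper's released code.

```python
import torch

def ctc_shrink(hidden, ctc_logits, blank_id=0):
    """Shrink a speech encoder's output so its length is closer to a text
    sequence: drop frames predicted as CTC blank and average consecutive
    frames that share the same non-blank prediction.

    hidden:     (T, D) encoder states for one utterance
    ctc_logits: (T, V) per-frame logits from an auxiliary CTC/ASR head
    Returns a (T', D) tensor with T' <= T.
    """
    preds = ctc_logits.argmax(dim=-1)               # (T,) greedy CTC labels
    segments, current, prev = [], [], None
    for t, p in enumerate(preds.tolist()):
        if p == blank_id:                           # blank ends the current unit
            prev = None
            continue
        if p != prev and current:                   # new label -> close the segment
            segments.append(torch.stack(current).mean(dim=0))
            current = []
        current.append(hidden[t])
        prev = p
    if current:
        segments.append(torch.stack(current).mean(dim=0))
    if not segments:                                # all-blank edge case
        return hidden.mean(dim=0, keepdim=True)
    return torch.stack(segments)                    # (T', D)
```

The shrunken sequence can then be fed to an MT encoder whose pre-training
assumed roughly word-length inputs.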
Cold Fusion: Training Seq2Seq Models Together with Language Models
Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks
which involve generating natural language sentences such as machine
translation, image captioning and speech recognition. Performance has further
been improved by leveraging unlabeled data, often in the form of a language
model. In this work, we present the Cold Fusion method, which leverages a
pre-trained language model during training, and show its effectiveness on the
speech recognition task. We show that Seq2Seq models with Cold Fusion are able
to better utilize language information, enjoying (i) faster convergence and
better generalization, and (ii) almost complete transfer to a new domain while
using less than 10% of the labeled training data.
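For the mechanics: Cold Fusion gates a projection of the language model's
logits into the decoder state just before the output layer. The sketch below
follows that formulation; the layer sizes and the ReLU choice are assumptions
for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ColdFusionLayer(nn.Module):
    """Fuse a (frozen) language model's logits into a Seq2Seq decoder state
    through a learned fine-grained gate, following the Cold Fusion
    formulation. Dimensions here are illustrative."""

    def __init__(self, dec_dim, lm_vocab, hidden_dim):
        super().__init__()
        self.lm_proj = nn.Linear(lm_vocab, hidden_dim)            # h_LM = DNN(l_LM)
        self.gate = nn.Linear(dec_dim + hidden_dim, hidden_dim)   # g = sigma(W[s; h_LM] + b)
        self.out = nn.Linear(dec_dim + hidden_dim, dec_dim)       # fused state for the softmax

    def forward(self, dec_state, lm_logits):
        h_lm = self.lm_proj(lm_logits)                            # project LM logits
        g = torch.sigmoid(self.gate(torch.cat([dec_state, h_lm], dim=-1)))
        fused = torch.cat([dec_state, g * h_lm], dim=-1)          # s_CF = [s; g * h_LM]
        return torch.relu(self.out(fused))                        # fed to the output layer
```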
Modality Adaption or Regularization? A Case Study on End-to-End Speech Translation
Pre-training and fine-tuning is a paradigm for alleviating the data scarcity
problem in end-to-end speech translation (E2E ST). The commonplace "modality
gap" between speech and text data often leads to inconsistent inputs between
pre-training and fine-tuning. However, we observe that this gap occurs in the
early stages of fine-tuning, but does not have a major impact on the final
performance. On the other hand, we find that there is another gap, which we
call the "capacity gap": high-resource tasks (such as ASR and MT) always
require a large model to fit; when that model is reused for a low-resource task
(E2E ST), it yields sub-optimal performance due to over-fitting. In a
case study, we find that regularization plays a more important role than the
well-designed modality adaptation method, achieving 29.0 BLEU for En-De and
40.3 BLEU for En-Fr on the MuST-C dataset. Code and models are available at
https://github.com/hannlp/TAB. Comment: ACL 2023 Main Conference
ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation
Joint speech-language training is challenging due to the large demand for
training data and GPU consumption, as well as the modality gap between speech
and language. We present ComSL, a speech-language model built atop a composite
architecture of public pretrained speech-only and language-only models and
optimized data-efficiently for spoken language tasks. Particularly, we propose
to incorporate cross-modality learning into transfer learning and conduct them
simultaneously for downstream tasks in a multi-task learning manner. Our
approach has demonstrated effectiveness in end-to-end speech-to-text
translation tasks, achieving a new state-of-the-art average BLEU score of 31.5
on the multilingual speech to English text translation task for 21 languages,
as measured on the public CoVoST2 evaluation set.
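The "cross-modality learning conducted simultaneously with downstream tasks"
described above amounts to a weighted multi-task objective. The sketch below
is one plausible instantiation, assuming an auxiliary MT loss and a cosine
alignment term between pooled speech and text representations; the actual
ComSL losses, tasks, and weights may differ.

```python
import torch
import torch.nn.functional as F

def composite_training_loss(st_logits, st_targets,
                            mt_logits, mt_targets,
                            speech_repr, text_repr,
                            weights=(1.0, 0.5, 0.2)):
    """Illustrative multi-task objective: speech-to-text translation loss plus
    an auxiliary MT loss and a cross-modality term that pulls mean-pooled
    speech and text encoder representations together. Weights are assumed."""
    w_st, w_mt, w_xm = weights
    st_loss = F.cross_entropy(st_logits.transpose(1, 2), st_targets)  # speech -> target text
    mt_loss = F.cross_entropy(mt_logits.transpose(1, 2), mt_targets)  # transcript -> target text
    xm_loss = 1.0 - F.cosine_similarity(speech_repr.mean(dim=1),
                                        text_repr.mean(dim=1), dim=-1).mean()
    return w_st * st_loss + w_mt * mt_loss + w_xm * xm_loss
```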
WACO: Word-Aligned Contrastive Learning for Speech Translation
End-to-end Speech Translation (E2E ST) aims to directly translate source
speech into target text. Existing ST methods perform poorly when only an
extremely small amount of speech-text data is available for training. We
observe that an ST
model's performance closely correlates with its embedding similarity between
speech and source transcript. In this paper, we propose Word-Aligned
COntrastive learning (WACO), a simple and effective method for extremely
low-resource speech-to-text translation. Our key idea is bridging word-level
representations for both speech and text modalities via contrastive learning.
We evaluate WACO and other methods on the MuST-C dataset, a widely used ST
benchmark, and on a low-resource direction Maltese-English from IWSLT 2023. Our
experiments demonstrate that WACO outperforms the best baseline by 9+ BLEU
points with only 1 hour of parallel ST data. Code is available at
https://github.com/owaski/WACO. Comment: ACL 2023 Poster
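WACO's word-level bridging can be written as an InfoNCE-style contrastive
loss over word-aligned pools of speech frames and the matching text
embeddings. A minimal sketch, assuming forced-alignment word spans are
available; the helper name and temperature value are illustrative, not the
released code.

```python
import torch
import torch.nn.functional as F

def word_aligned_contrastive_loss(speech_feats, word_spans, word_embs,
                                  temperature=0.1):
    """Pool the speech frames of each word and pull the result toward that
    word's text embedding while pushing it away from the other words (InfoNCE).

    speech_feats: (T, D) frame-level speech encoder outputs for one utterance
    word_spans:   list of (start, end) frame indices, one span per word
    word_embs:    (W, D) text-side embeddings of the same words
    """
    pooled = torch.stack([speech_feats[s:e].mean(dim=0) for s, e in word_spans])
    pooled = F.normalize(pooled, dim=-1)
    word_embs = F.normalize(word_embs, dim=-1)
    logits = pooled @ word_embs.t() / temperature     # (W, W) similarity matrix
    targets = torch.arange(pooled.size(0))            # the matching word is the positive
    return F.cross_entropy(logits, targets)
```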