Learning to Translate in Real-time with Neural Machine Translation
Translating in real-time, a.k.a. simultaneous translation, outputs target
words before the input sentence has ended, which is a challenging problem
for conventional machine translation methods. We propose a neural
machine translation (NMT) framework for simultaneous translation in which an
agent learns to make decisions on when to translate from the interaction with a
pre-trained NMT environment. To trade off quality and delay, we extensively
explore various targets for delay and design a method for beam-search
applicable in the simultaneous MT setting. Experiments against state-of-the-art
baselines on two language pairs demonstrate the efficacy of the proposed
framework both quantitatively and qualitatively.
Comment: 10 pages, camera-ready
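The agent-environment interaction described above can be pictured as a rollout in which a policy repeatedly chooses a READ action (consume one more source token) or a WRITE action (emit one more target token) against a pretrained translator. The sketch below is only an illustration of that protocol under our own assumptions: the fixed rule and the toy uppercase "translator" stand in for the paper's learned agent and NMT environment.

```python
def run_agent(source, policy, translate):
    """Roll out READ/WRITE decisions against a pretrained translator.
    `policy(n_read, n_written, src_len)` returns "READ" or "WRITE"."""
    n_read, out = 0, []
    while True:
        action = policy(n_read, len(out), len(source))
        if action == "READ" and n_read < len(source):
            n_read += 1                      # consume one more source token
        else:
            # WRITE: translate given only the prefix read so far
            out.append(translate(source[:n_read], len(out)))
            if len(out) == len(source):      # toy stop condition
                break
    return out

# A trivial stand-in policy: stay one source token ahead of the writes.
policy = lambda r, w, n: "READ" if r < min(w + 1, n) else "WRITE"
toy_mt = lambda prefix, i: prefix[i].upper()
print(run_agent(["bonjour", "le", "monde"], policy, toy_mt))
# ['BONJOUR', 'LE', 'MONDE']
```

In the paper the policy is learned by interacting with the NMT environment under quality and delay rewards; the fixed rule here merely shows the interaction loop itself.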
Deep attractor network for single-microphone speaker separation
Despite the overwhelming success of deep learning in various speech
processing tasks, the problem of separating simultaneous speakers in a mixture
remains challenging. Two major difficulties in such systems are the arbitrary
source permutation and unknown number of sources in the mixture. We propose a
novel deep learning framework for single-channel speech separation that creates
attractor points in a high-dimensional embedding space of the acoustic signals,
which pull together the time-frequency bins corresponding to each source.
Attractor points in this study are created by finding the centroids of the
sources in the embedding space, which are subsequently used to determine the
similarity of each bin in the mixture to each source. The network is then
trained to minimize the reconstruction error of each source by optimizing the
embeddings. The proposed model is different from prior works in that it
implements an end-to-end training, and it does not depend on the number of
sources in the mixture. Two strategies are explored at test time, K-means
and fixed attractor points, where the latter requires no post-processing and
can be implemented in real-time. We evaluate our system on the Wall Street
Journal dataset and show a 5.49% improvement over the previous
state-of-the-art methods.
Comment: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
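The test-time K-means strategy in the abstract above can be sketched as follows: each time-frequency bin carries an embedding, K-means centroids play the role of attractors, and the softmax similarity of each bin to each attractor yields a soft mask per source. The shapes, initialization, and the synthetic "embeddings" below are illustrative assumptions, not the paper's actual network outputs.

```python
import numpy as np

def kmeans(emb, k, iters=20):
    # deterministic init for the sketch: spread initial centers across the data
    centers = emb[np.linspace(0, len(emb) - 1, k).astype(int)]
    for _ in range(iters):
        assign = ((emb[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.stack([emb[assign == j].mean(0) if (assign == j).any()
                            else centers[j] for j in range(k)])
    return centers

def attractor_masks(emb, attractors):
    sim = emb @ attractors.T                    # (bins, sources) similarity
    e = np.exp(sim - sim.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)          # soft masks; rows sum to 1

# two well-separated clusters standing in for the bins of two speakers
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 0.1, (50, 3)) + [1, 0, 0],
                 rng.normal(0, 0.1, (50, 3)) + [0, 1, 0]])
masks = attractor_masks(emb, kmeans(emb, 2))
print(masks.shape)  # (100, 2)
```

The fixed-attractor strategy mentioned in the abstract would skip the clustering step entirely and reuse attractor points estimated during training, which is why it needs no post-processing.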
Towards Stream Translation: Adaptive Computation Time for Simultaneous Machine Translation
Simultaneous machine translation systems rely on a policy to schedule read and write operations in order to begin translating a source sentence before it is complete. In this paper, we demonstrate the use of Adaptive Computation Time (ACT) as an adaptive, learned policy for simultaneous machine translation using the transformer model and as a more numerically stable alternative to Monotonic Infinite Lookback Attention (MILk). We achieve state-of-the-art results in terms of latency-quality trade-offs. We also propose a method to use our model on unsegmented input, i.e. without sentence boundaries, simulating the condition of translating output from automatic speech recognition. We present the first benchmark results on this task.
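The ACT halting mechanism this abstract builds on can be sketched in isolation: a unit keeps "pondering" (here, that would mean reading more input) while its accumulated halting probability stays below 1 - epsilon, and the final step receives the leftover remainder weight. This is a minimal illustration assuming the per-step halting probabilities are already computed; the function name and the toy probabilities are ours, not the paper's.

```python
def act_halt(halt_probs, eps=0.01):
    """Return per-step weights and the number of steps taken."""
    total, weights = 0.0, []
    for p in halt_probs:
        if total + p >= 1 - eps:
            weights.append(1 - total)   # remainder R = 1 - sum of earlier p
            return weights, len(weights)
        weights.append(p)
        total += p
    weights.append(1 - total)           # ran out of steps: use the remainder
    return weights, len(weights)

w, n = act_halt([0.2, 0.3, 0.6])
print(n, w)  # 3 [0.2, 0.3, 0.5]
```

Because the weights always sum to one, downstream states can be combined as a convex mixture of the per-step states, which is what makes the number of computation steps effectively learnable.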
ON-TRAC Consortium for End-to-End and Simultaneous Speech Translation Challenge Tasks at IWSLT 2020
This paper describes the ON-TRAC Consortium translation systems developed for
two challenge tracks featured in the Evaluation Campaign of IWSLT 2020, offline
speech translation and simultaneous speech translation. ON-TRAC Consortium is
composed of researchers from three French academic laboratories: LIA (Avignon
Université), LIG (Université Grenoble Alpes), and LIUM (Le Mans
Université). Attention-based encoder-decoder models, trained end-to-end, were
used for our submissions to the offline speech translation track. Our
contributions focused on data augmentation and ensembling of multiple models.
In the simultaneous speech translation track, we build on Transformer-based
wait-k models for the text-to-text subtask. For speech-to-text simultaneous
translation, we attach a wait-k MT system to a hybrid ASR system. We propose an
algorithm to control the latency of the ASR+MT cascade and achieve a good
latency-quality trade-off on both subtasks.
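The wait-k models these submissions build on follow a fixed read/write schedule: the first target token is written after k source tokens have been read, after which reads and writes alternate until the source is exhausted. A minimal sketch of that schedule (the function name is ours):

```python
def wait_k_schedule(k, src_len, tgt_len):
    """g(t): number of source tokens read before writing target token t
    (1-indexed), capped once the whole source has been consumed."""
    return [min(k + t - 1, src_len) for t in range(1, tgt_len + 1)]

# With k=3 on a 6-token source, the first write waits for 3 source
# tokens, and the schedule saturates once the source is exhausted.
print(wait_k_schedule(3, 6, 6))  # [3, 4, 5, 6, 6, 6]
```

In a cascade like the one described above, the effective latency also includes the ASR component's delay in producing those source tokens, which is what the proposed control algorithm has to account for.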
End-to-End Simultaneous Speech Translation with Differentiable Segmentation
End-to-end simultaneous speech translation (SimulST) outputs translation
while receiving the streaming speech inputs (a.k.a. streaming speech
translation), and hence needs to segment the speech inputs and translate
based on the currently received speech. However, segmenting the speech inputs at
unfavorable moments can disrupt the acoustic integrity and adversely affect the
performance of the translation model. Therefore, learning to segment the speech
inputs at those moments that are beneficial for the translation model to
produce high-quality translation is the key to SimulST. Existing SimulST
methods, whether using fixed-length segmentation or an external segmentation
model, always separate segmentation from the underlying translation model;
this gap results in segmentation outcomes that are not necessarily
beneficial to the translation process. In this paper, we propose
Differentiable Segmentation (DiSeg) for SimulST to directly learn segmentation
from the underlying translation model. DiSeg makes hard segmentation
differentiable through the proposed expectation training, enabling it to be
jointly trained with the translation model and thereby learn
translation-beneficial segmentation. Experimental results demonstrate that
DiSeg achieves state-of-the-art performance and exhibits superior segmentation
capability.
Comment: Accepted at ACL 2023 Findings
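One way to picture the expectation idea behind differentiable segmentation: instead of hard 0/1 boundary decisions, each speech frame gets a boundary probability, so quantities that depend on the segmentation, such as the number of segments, become differentiable expectations that can be trained jointly with the translator. The loss and all names below are illustrative assumptions, not DiSeg's actual formulation.

```python
def expected_segments(boundary_probs):
    # E[#segments] = sum of per-frame boundary probabilities
    return sum(boundary_probs)

def count_loss(boundary_probs, target_segments):
    # penalize deviation of the expected segment count from a target;
    # gradients would flow back to every boundary probability
    diff = expected_segments(boundary_probs) - target_segments
    return diff * diff

probs = [0.9, 0.1, 0.1, 0.8, 0.1]  # frame-level boundary probabilities
print(round(expected_segments(probs), 1))  # 2.0
```

Because every frame contributes smoothly to the expectation, a hard segmenter emerges at inference time simply by thresholding the learned probabilities.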