24 research outputs found
CUSIDE: Chunking, Simulating Future Context and Decoding for Streaming ASR
History and future contextual information are known to be important for accurate acoustic modeling. However, acquiring future context adds latency to streaming ASR. In this paper, we propose a new framework for streaming speech recognition: Chunking, Simulating Future Context and Decoding (CUSIDE). A new simulation module is introduced to recursively simulate future contextual frames without waiting for the future context. The simulation module is jointly trained with the ASR model using a self-supervised loss; the ASR model is optimized with the usual ASR loss, e.g., CTC-CRF as used in our experiments. Experiments show that, compared to using real future frames as right context, using simulated future context can drastically reduce latency while maintaining recognition accuracy. With CUSIDE, we obtain new state-of-the-art streaming ASR results on the AISHELL-1 dataset.
Comment: submitted to INTERSPEECH 202
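To make the idea concrete, below is a minimal sketch (not the authors' code) of how a future-context simulator might work: a small recurrent module predicts a handful of future feature frames from the current chunk and is trained with a self-supervised regression loss against the real future frames, which at inference stand in for the right context. The module name, frame counts, and the L1 loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextSimulator(nn.Module):
    """Predicts a few future feature frames from the current chunk (illustrative)."""
    def __init__(self, feat_dim: int = 80, hidden: int = 256, sim_frames: int = 10):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, feat_dim * sim_frames)
        self.sim_frames = sim_frames
        self.feat_dim = feat_dim

    def forward(self, chunk: torch.Tensor) -> torch.Tensor:
        # chunk: (batch, chunk_len, feat_dim) -> simulated future: (batch, sim_frames, feat_dim)
        _, h = self.rnn(chunk)                    # h: (num_layers, batch, hidden)
        future = self.proj(h[-1])                 # (batch, feat_dim * sim_frames)
        return future.view(-1, self.sim_frames, self.feat_dim)

def simulation_loss(simulated: torch.Tensor, real_future: torch.Tensor) -> torch.Tensor:
    # Self-supervised regression against the real future frames (available only during training);
    # at inference the simulated frames are appended to the chunk as right context instead.
    return nn.functional.l1_loss(simulated, real_future)
```

In a joint training setup of this kind, the simulation loss would simply be added, with some weight, to the ASR loss; at inference only the simulator's forward pass runs, so no real future frames need to be awaited.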
End-to-End Simultaneous Speech Translation
Speech translation is the task of translating speech in one language to text or speech in another language, while simultaneous translation aims at lower translation latency by starting the translation before the speaker finishes a sentence. The combination of the two, simultaneous speech translation, can be applied in low latency scenarios such as live video caption translation and real-time interpretation.
This thesis focuses on an end-to-end, or direct, approach to simultaneous speech translation. We first define the task of simultaneous speech translation, including its challenges and evaluation metrics. We then progressively introduce our contributions to tackling these challenges. First, we propose a novel simultaneous translation policy, monotonic multihead attention, for transformer models on text-to-text translation. Second, we investigate the issues and potential solutions that arise when adapting text-to-text simultaneous policies to end-to-end speech-to-text translation models. Third, we introduce the augmented memory transformer encoder for simultaneous speech-to-text translation models to improve computational efficiency. Fourth, we explore direct simultaneous speech translation with a variational monotonic multihead attention policy, based on recent speech-to-unit models. Finally, we provide some directions for potential future research.
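Since the thesis centers on simultaneous policies, the sketch below illustrates the generic READ/WRITE loop that any such policy, including monotonic multihead attention, has to drive: at each step the agent either reads another source segment or writes a target token. The function names, string-based actions, and the wait-k example are illustrative assumptions, not the thesis implementation.

```python
from typing import Callable, List

READ, WRITE = "read", "write"

def simultaneous_decode(source_stream: List[str],
                        policy: Callable[[List[str], List[str]], str],
                        decode_step: Callable[[List[str], List[str]], str],
                        eos: str = "</s>",
                        max_len: int = 200) -> List[str]:
    source: List[str] = []   # source segments read so far
    target: List[str] = []   # target tokens written so far
    pointer = 0
    while len(target) < max_len:
        action = policy(source, target)
        if action == READ and pointer < len(source_stream):
            source.append(source_stream[pointer])   # READ: consume one more source segment
            pointer += 1
        else:
            token = decode_step(source, target)      # WRITE: commit one target token
            if token == eos:
                break
            target.append(token)
    return target

# The simplest instance is a fixed wait-k policy: READ until k segments are ahead of
# the output, then alternate READ and WRITE. Learned policies such as monotonic
# multihead attention make this decision from the model's attention state instead.
def wait_k_policy(k: int) -> Callable[[List[str], List[str]], str]:
    return lambda src, tgt: READ if len(src) < len(tgt) + k else WRITE
```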
Self-Attention Transducers for End-to-End Speech Recognition
Recurrent neural network transducers (RNN-T) have been successfully applied to end-to-end speech recognition. However, the recurrent structure makes them difficult to parallelize. In this paper, we propose a self-attention transducer (SA-T) for speech recognition. RNNs are replaced with self-attention blocks, which are powerful at modeling long-term dependencies inside sequences and can be efficiently parallelized. Furthermore, a path-aware regularization is proposed to help the SA-T learn alignments and improve performance. Additionally, a chunk-flow mechanism is utilized to achieve online decoding. All experiments are conducted on the Mandarin Chinese dataset AISHELL-1. The results demonstrate that our proposed approach achieves a 21.3% relative reduction in character error rate compared with the baseline RNN-T. In addition, the SA-T with the chunk-flow mechanism can perform online decoding with only slight performance degradation.
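As a rough illustration of chunk-based online decoding, the sketch below builds a chunk-wise self-attention mask in the spirit of a chunk-flow mechanism: each frame attends only to frames in its own chunk and a fixed number of past chunks, never to the future. The exact masking scheme in the paper may differ; the function name and default sizes are assumptions.

```python
import torch

def chunk_flow_mask(seq_len: int, chunk_size: int, left_chunks: int = 1) -> torch.Tensor:
    """Boolean mask where True means attention is allowed (illustrative)."""
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    for t in range(seq_len):
        cur_chunk = t // chunk_size
        start = max(0, (cur_chunk - left_chunks) * chunk_size)   # limited past context
        end = min((cur_chunk + 1) * chunk_size, seq_len)         # end of the current chunk
        mask[t, start:end] = True                                # no access to future chunks
    return mask

# Example: 8 frames, chunks of 4 -> frames 0-3 attend within chunk 0 only,
# frames 4-7 attend to chunks 0 and 1.
print(chunk_flow_mask(8, 4).int())
```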
Fast-U2++: Fast and Accurate End-to-End Speech Recognition in Joint CTC/Attention Frames
Recently, the unified streaming and non-streaming two-pass (U2/U2++) end-to-end model for speech recognition has shown great performance in terms of streaming capability, accuracy, and latency. In this paper, we present Fast-U2++, an enhanced version of U2++ that further reduces partial latency. The core idea of Fast-U2++ is to output partial results from the bottom layers of its encoder with a small chunk, while using a large chunk in the top layers of its encoder to compensate for the performance degradation caused by the small chunk. Moreover, we use a knowledge distillation method to reduce the token emission latency. We present extensive experiments on the AISHELL-1 dataset. Experiments and ablation studies show that, compared to U2++, Fast-U2++ reduces model latency from 320 ms to 80 ms and achieves a character error rate (CER) of 5.06% with a streaming setup.
Comment: 5 pages, 3 figures
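The dual-chunk idea can be sketched as follows (a hedged illustration, not the released Fast-U2++ code): the lower encoder layers run with a small chunk mask so partial results can be emitted quickly, while the upper layers use a large chunk mask over the accumulated frames to recover accuracy. Layer counts, chunk sizes, and the use of vanilla TransformerEncoderLayer blocks are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualChunkEncoder(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4, bottom_layers: int = 6,
                 top_layers: int = 6, small_chunk: int = 4, large_chunk: int = 16):
        super().__init__()
        def make():
            return nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.bottom = nn.ModuleList([make() for _ in range(bottom_layers)])
        self.top = nn.ModuleList([make() for _ in range(top_layers)])
        self.small_chunk, self.large_chunk = small_chunk, large_chunk

    @staticmethod
    def _chunk_mask(seq_len: int, chunk: int) -> torch.Tensor:
        # Additive float mask: 0 = allowed, -inf = blocked. A frame may attend to
        # its own chunk and all earlier chunks, never to future chunks.
        idx = torch.arange(seq_len)
        blocked = (idx.unsqueeze(1) // chunk) < (idx.unsqueeze(0) // chunk)
        return torch.zeros(seq_len, seq_len).masked_fill(blocked, float("-inf"))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        small = self._chunk_mask(x.size(1), self.small_chunk)
        large = self._chunk_mask(x.size(1), self.large_chunk)
        for layer in self.bottom:
            x = layer(x, src_mask=small)   # small chunk: quick partial results
        for layer in self.top:
            x = layer(x, src_mask=large)   # large chunk: more context for accuracy
        return x
```

The design choice mirrored here is that latency is set by the small chunk used for the first partial outputs, while accuracy is largely recovered by the wider receptive field of the upper layers.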