Reducing the gap between streaming and non-streaming Transducer-based ASR by adaptive two-stage knowledge distillation
The transducer is one of the mainstream frameworks for streaming speech
recognition. There is a performance gap between streaming and non-streaming
transducer models because the streaming model has access to only limited
context. An effective way to reduce this gap is to make the hidden and output
distributions of the two models consistent, which can be achieved by
hierarchical knowledge distillation. However, it is difficult to enforce both
kinds of consistency simultaneously, because learning the output distribution
depends on the hidden one. In this paper, we propose an adaptive two-stage
knowledge distillation method consisting of hidden layer learning and output
layer learning. In the first stage, the streaming model learns the
full-context hidden representation by applying a mean squared error loss. In
the second stage, we design an adaptive smoothing method based on a power
transformation to learn a stable output distribution. The proposed method
achieves a 19% relative reduction in word error rate and a faster response
for the first token compared with the original streaming model on the
LibriSpeech corpus.
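As a rough sketch of the two distillation objectives (not the authors'
implementation), the stage losses might be written as follows in PyTorch. The
tensor names, the fixed exponent `gamma`, and the KL-based output loss are
illustrative assumptions; in particular, the paper adapts the smoothness
rather than fixing it.

```python
import torch
import torch.nn.functional as F


def hidden_distill_loss(h_student: torch.Tensor,
                        h_teacher: torch.Tensor) -> torch.Tensor:
    """Stage 1: pull the streaming model's hidden states toward the
    full-context (non-streaming) teacher's with a mean squared error loss."""
    return F.mse_loss(h_student, h_teacher.detach())


def output_distill_loss(z_student: torch.Tensor,
                        z_teacher: torch.Tensor,
                        gamma: float = 0.5) -> torch.Tensor:
    """Stage 2: distill the output distribution after smoothing the teacher's
    probabilities with a power transformation p**gamma (renormalized).
    A fixed gamma is used here for simplicity; the paper adapts it."""
    p_teacher = F.softmax(z_teacher.detach(), dim=-1)
    p_smooth = p_teacher.pow(gamma)
    p_smooth = p_smooth / p_smooth.sum(dim=-1, keepdim=True)
    log_p_student = F.log_softmax(z_student, dim=-1)
    # KL divergence between the smoothed teacher target and the student.
    return F.kl_div(log_p_student, p_smooth, reduction="batchmean")
```

With gamma below 1, the power transformation flattens overly sharp teacher
distributions, which is one plausible reading of how a smoothed, stable
target helps the streaming student in the second stage.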