Improving End-to-End Speech Recognition with Policy Learning
Connectionist temporal classification (CTC) is widely used for maximum
likelihood learning in end-to-end speech recognition models. However, there is
usually a disparity between the negative log-likelihood objective and the performance
metric used in speech recognition, e.g., word error rate (WER). This results in
a mismatch between the objective function and metric during training. We show
that the above problem can be mitigated by jointly training with maximum
likelihood and policy gradient. In particular, with policy learning we are able
to directly optimize the (otherwise non-differentiable) performance metric.
We show that joint training improves relative performance by 4% to 13% for our
end-to-end model as compared to the same model learned through maximum
likelihood. The model achieves 5.53% WER on the Wall Street Journal dataset, and
5.42% and 14.70% WER on the Librispeech test-clean and test-other sets, respectively
- …
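
The abstract only names the technique at a high level, so the following is a minimal sketch of the general idea of joint CTC + policy-gradient training, not the authors' exact recipe. It assumes a PyTorch acoustic model that emits per-frame log-probabilities; the names sample_fn, lam, and the use of the third-party editdistance package for the WER reward are assumptions introduced here for illustration.

import torch
import torch.nn.functional as F
import editdistance  # third-party package; used here to approximate WER


def wer(hyp_words, ref_words):
    """Word error rate: word-level edit distance over reference length."""
    return editdistance.eval(hyp_words, ref_words) / max(len(ref_words), 1)


def joint_loss(log_probs, input_lens, targets, target_lens, ref_texts,
               sample_fn, lam=0.9, blank=0):
    """Combine the CTC negative log-likelihood with a REINFORCE-style term
    whose reward is the negative WER of a sampled transcription.

    log_probs: (T, B, V) log-softmax outputs of the acoustic model.
    sample_fn: hypothetical callable that draws one transcription per
               utterance and returns (list of word lists, per-sample
               log-probability tensor of shape (B,)).
    lam:       mixing weight between the ML and policy-gradient terms.
    """
    # Maximum-likelihood (CTC) term.
    ctc = F.ctc_loss(log_probs, targets, input_lens, target_lens, blank=blank)

    # Policy-gradient term: WER itself is non-differentiable, but the
    # REINFORCE estimator only needs the log-probability of the sampled
    # hypothesis, scaled by its reward.
    hyps, sample_logp = sample_fn(log_probs)
    rewards = torch.tensor(
        [-wer(h, r.split()) for h, r in zip(hyps, ref_texts)],
        device=log_probs.device,
    )
    baseline = rewards.mean()  # simple variance-reduction baseline
    pg = -((rewards - baseline) * sample_logp).mean()

    return lam * ctc + (1.0 - lam) * pg

In this sketch the mixing weight lam controls how much the training signal comes from maximum likelihood versus the metric-driven policy-gradient term; the paper's reported gains come from training the two objectives jointly rather than with CTC alone.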