Automatic speech recognition models are often adapted to improve their
accuracy in a new domain. A potential drawback of model adaptation to new
domains is catastrophic forgetting, where the Word Error Rate on the original
domain is significantly degraded. This paper addresses the setting in which
we want to simultaneously adapt automatic speech recognition models to a new
domain and limit the degradation of accuracy on the original domain, without
access to the original training dataset. We propose several techniques, such as
a limited training strategy and regularized adapter modules for the Transducer
encoder, prediction, and joiner networks. We apply these methods to the Google
Speech Commands dataset and to the UK and Ireland English Dialect speech
dataset, and obtain strong results on the new target domain while limiting the
degradation on the original domain.

Comment: To appear in Proc. SLT 2022, Jan 09-12, 2023, Doha, Qatar
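The adapter modules mentioned above are commonly implemented as residual bottleneck layers inserted into a frozen network. A minimal NumPy sketch, assuming a standard down-project / nonlinearity / up-project design with zero-initialized up-projection (all names and dimensions here are illustrative, not the paper's exact configuration):

```python
import numpy as np

def adapter(x, w_down, w_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add.

    With w_up initialized to zero, the adapter starts as the identity,
    so the frozen base model's behavior is unchanged at initialization.
    Regularizing the adapter weights toward zero during adaptation is one
    way to limit drift from the original domain.
    """
    h = np.maximum(x @ w_down, 0.0)  # ReLU bottleneck activation
    return x + h @ w_up              # residual connection

# Illustrative dimensions: base model width 8, bottleneck width 2.
d_model, d_bottleneck = 8, 2
rng = np.random.default_rng(0)
w_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
w_up = np.zeros((d_bottleneck, d_model))  # zero init -> identity mapping

x = rng.standard_normal((4, d_model))     # a batch of 4 feature vectors
y = adapter(x, w_down, w_up)
print(np.allclose(x, y))  # True: zero-initialized adapter is the identity
```

Only the small adapter matrices are trained on the new domain, which keeps most parameters at their original values and so bounds how far the adapted model can move from the source-domain model.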