In this paper, we explore an improved framework to train a monaural neural
enhancement model for robust speech recognition. The designed training
framework extends the existing mixture invariant training criterion to exploit
both unpaired clean speech and real noisy data. We find that unpaired clean
speech is crucial for improving the quality of speech separated from real noisy
speech. The proposed method also remixes processed and unprocessed signals to
alleviate processing artifacts. Experiments on the
single-channel CHiME-3 real test sets show that the proposed method yields
significant speech recognition improvements over enhancement systems trained
either on mismatched simulated data in a supervised fashion or on matched real
data in an unsupervised fashion. The proposed system achieves relative WER
reductions of 16% to 39% over the unprocessed signal with end-to-end and hybrid
acoustic models, without retraining on distorted data.

Comment: Accepted to INTERSPEECH 202
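As a rough illustration of the mixture invariant training (MixIT) criterion that the framework extends, the sketch below computes a MixIT-style loss for one training example: the model's estimated sources are assigned, over all binary partitions, to the two reference mixtures that were summed to form the input, and the best-matching assignment defines the loss. The function name, MSE objective, and NumPy formulation are illustrative assumptions, not the paper's implementation.

```python
import itertools
import numpy as np

def mixit_loss(est_sources, mix1, mix2):
    """MixIT-style loss, minimal sketch (illustrative, not the paper's code).

    est_sources: array of shape (S, T), estimated sources produced by the
                 model from the mixture of mixtures mix1 + mix2.
    mix1, mix2:  arrays of shape (T,), the two reference mixtures.

    Searches all 2**S assignments of sources to the two mixtures and
    returns the smallest total mean-squared error.
    """
    S = est_sources.shape[0]
    best = np.inf
    # Each estimated source is assigned to exactly one of the two mixtures.
    for assign in itertools.product([0, 1], repeat=S):
        a = np.asarray(assign)
        remix1 = est_sources[a == 0].sum(axis=0)  # sources assigned to mix1
        remix2 = est_sources[a == 1].sum(axis=0)  # sources assigned to mix2
        err = np.mean((remix1 - mix1) ** 2) + np.mean((remix2 - mix2) ** 2)
        best = min(best, err)
    return best
```

Because the assignment search only requires the two mixture waveforms, not isolated source references, the criterion can be trained directly on real noisy recordings; the paper's extension additionally exploits unpaired clean speech alongside such data.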