Transformer-based end-to-end modelling approaches with multiple stream inputs
have achieved great success in various automatic speech recognition (ASR)
tasks. An important issue with such approaches is that the
intermediate features derived from each stream may have similar
representations and thus lack feature diversity, such as
descriptions of speaker characteristics. To address this issue, this
paper proposes a novel multi-level acoustic feature extraction framework that
can be easily combined with Transformer based ASR models. The framework
consists of two input streams: a shallow stream with high-resolution
spectrograms and a deep stream with low-resolution spectrograms. The shallow
stream is used to acquire traditional shallow features that are beneficial for
the classification of phones or words, while the deep stream is used to obtain
utterance-level speaker-invariant deep features for improving the feature
diversity. A feature-correlation-based fusion strategy aggregates
both features across the frequency and time domains, and the fused features
are then fed into the Transformer encoder-decoder module. By using the proposed multi-level acoustic
feature extraction framework, state-of-the-art word error rates of 21.7% and
2.5% were obtained on the HKUST Mandarin telephone and LibriSpeech speech
recognition tasks, respectively.

Comment: Accepted by Interspeech 202
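The abstract does not specify the exact fusion computation; as a minimal illustrative sketch (all names and the sigmoid-gated form are assumptions, not the paper's method), a correlation-based fusion of frame-level shallow features with an utterance-level deep feature might look like:

```python
import numpy as np

def correlation_fusion(shallow, deep):
    """Sketch of correlation-gated fusion (hypothetical, not the paper's exact method).

    shallow: (T, D) frame-level shallow features from the high-resolution stream.
    deep:    (D,)   utterance-level deep feature from the low-resolution stream.
    Returns: (T, D) fused features to feed into a Transformer encoder.
    """
    # Cosine similarity between each frame and the utterance-level feature.
    s = shallow / (np.linalg.norm(shallow, axis=1, keepdims=True) + 1e-8)
    d = deep / (np.linalg.norm(deep) + 1e-8)
    corr = s @ d                       # (T,) per-frame correlation

    # Use the correlation as a sigmoid gate controlling how much of the
    # deep feature is mixed into each frame.
    gate = 1.0 / (1.0 + np.exp(-corr))  # (T,)
    return shallow + gate[:, None] * deep[None, :]
```

With a zero deep feature the gated term vanishes, so the fusion reduces to the shallow stream alone; in general each frame receives a frame-dependent share of the utterance-level representation.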