Enhancing automatic speech recognition (ASR) performance by leveraging
additional multimodal information has shown promising results in previous
studies. However, most of these works have primarily focused on utilizing
visual cues derived from human lip motions. In fact, context-dependent visual
and linguistic cues can also benefit speech recognition in many scenarios. In this paper, we first
propose ViLaS (Vision and Language into Automatic Speech Recognition), a novel
multimodal ASR model based on the continuous integrate-and-fire (CIF)
mechanism, which can integrate visual and textual context simultaneously or
separately to facilitate speech recognition. Next, we introduce an effective
training strategy that improves performance in modality-incomplete test scenarios.
Then, to explore the effects of integrating vision and language, we create
VSDial, a multimodal ASR dataset with multimodal context cues in both Chinese
and English versions. Finally, empirical results are reported on the public
Flickr8K and self-constructed VSDial datasets. We explore various cross-modal
fusion schemes, analyze fine-grained cross-modal alignment on VSDial, and
provide insights into the effects of integrating multimodal information on
speech recognition.
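
For illustration only, the following is a minimal PyTorch sketch of the general idea of fusing token-level acoustic embeddings (such as those produced by a CIF module) with optional visual and textual context via cross-attention, supporting both joint and separate integration as well as modality-incomplete inputs. The module names, dimensions, and fusion design are assumptions for exposition, not the authors' implementation.

```python
# Minimal, illustrative sketch (not the authors' implementation) of fusing
# CIF-style token-level acoustic embeddings with visual and textual context
# via cross-attention. All module names and dimensions are assumptions.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuse acoustic token embeddings with optional visual/textual context."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # One cross-attention block per context modality (hypothetical design).
        self.visual_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, acoustic, visual=None, textual=None):
        # acoustic: (B, T_tok, d) token-level embeddings, e.g. CIF outputs.
        # visual:   (B, T_img, d) image-region features, or None.
        # textual:  (B, T_txt, d) context-sentence embeddings, or None.
        fused = acoustic
        if visual is not None:  # integrate visual context when available
            v, _ = self.visual_attn(fused, visual, visual)
            fused = fused + v
        if textual is not None:  # integrate textual context when available
            t, _ = self.text_attn(fused, textual, textual)
            fused = fused + t
        # Modality-incomplete inputs simply skip the corresponding branch,
        # mirroring the "simultaneously or separately" integration idea.
        return self.norm(fused)


if __name__ == "__main__":
    B, d = 2, 256
    fusion = CrossModalFusion(d_model=d)
    acoustic = torch.randn(B, 10, d)          # e.g. CIF-fired token embeddings
    visual = torch.randn(B, 49, d)            # e.g. 7x7 image-patch features
    textual = torch.randn(B, 12, d)           # e.g. dialogue-context embeddings
    out = fusion(acoustic, visual, textual)   # both contexts present
    out_audio_only = fusion(acoustic)         # modality-incomplete case
    print(out.shape, out_audio_only.shape)
```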