Exploring RWKV for Memory Efficient and Low Latency Streaming ASR
Recently, self-attention-based transformers and conformers have been
introduced as alternatives to RNNs for ASR acoustic modeling. Nevertheless, the
full-sequence attention mechanism is non-streamable and computationally
expensive, thus requiring modifications, such as chunking and caching, for
efficient streaming ASR. In this paper, we propose to apply RWKV, a variant of
linear attention transformer, to streaming ASR. RWKV combines the superior
performance of transformers and the inference efficiency of RNNs, which is
well-suited for streaming ASR scenarios where the budget for latency and memory
is restricted. Experiments on varying scales (100h - 10000h) demonstrate that
RWKV-Transducer and RWKV-Boundary-Aware-Transducer achieve comparable to or
even better accuracy compared with chunk conformer transducer, with minimal
latency and inference memory cost.Comment: submitted to ICASSP 202
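To make the streaming advantage concrete, here is a minimal NumPy sketch of an RWKV-style WKV recurrence. It is a simplified illustration, not the paper's implementation: it omits RWKV's per-token bonus term and the numerical-stability rescaling used in real kernels, and the function name and shapes are assumptions.

```python
import numpy as np

def wkv_recurrence(keys, values, w):
    """Simplified RWKV-style WKV recurrence (illustrative sketch).

    keys, values: (T, d) arrays; w: positive per-channel decay, shape (d,).
    Omits RWKV's per-token bonus term and the stability rescaling used
    in real implementations.
    """
    d = keys.shape[1]
    num = np.zeros(d)  # running sum of exp(-(t-i)*w + k_i) * v_i
    den = np.zeros(d)  # running sum of exp(-(t-i)*w + k_i)
    outs = []
    for k_t, v_t in zip(keys, values):
        num = np.exp(-w) * num + np.exp(k_t) * v_t
        den = np.exp(-w) * den + np.exp(k_t)
        outs.append(num / (den + 1e-8))  # state is O(d): streamable
    return np.stack(outs)

T, d = 8, 4
rng = np.random.default_rng(0)
out = wkv_recurrence(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
                     w=np.full(d, 0.5))
print(out.shape)  # (8, 4)
```

Because each frame updates only this fixed-size state, per-step latency and memory stay constant as the utterance grows, unlike full-sequence attention, whose cost grows with everything seen so far.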
StreaMulT: Streaming Multimodal Transformer for Heterogeneous and Arbitrary Long Sequential Data
The increasing complexity of Industry 4.0 systems brings new challenges
regarding predictive maintenance tasks such as fault detection and diagnosis. A
corresponding and realistic setting includes multi-source data streams from
different modalities, such as sensor measurement time series, machine images,
textual maintenance reports, etc. These heterogeneous multimodal streams also
differ in their acquisition frequency, may embed temporally unaligned
information and can be arbitrarily long, depending on the considered system and
task. Whereas multimodal fusion has been widely studied in the static setting, to the best of our knowledge no previous work considers arbitrarily long multimodal streams together with related tasks such as prediction across time. In this paper, we therefore formalize this new paradigm of heterogeneous multimodal learning in a streaming setting. To
tackle this challenge, we propose StreaMulT, a Streaming Multimodal Transformer
relying on cross-modal attention and on a memory bank to process arbitrarily
long input sequences at training time and run in a streaming way at inference.
StreaMulT improves the state-of-the-art metrics on the CMU-MOSEI dataset for the Multimodal Sentiment Analysis task, while handling much longer inputs than other multimodal models. The experiments also highlight the importance of the textual embedding layer, calling into question recent improvements on Multimodal Sentiment Analysis benchmarks.
Comment: 11 pages, 6 figures, 3 tables
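As a rough illustration of the memory-bank idea, the sketch below pairs cross-modal scaled dot-product attention with a bounded FIFO memory of past chunk representations, so per-chunk cost stays constant however long the stream runs. The class and function names, the FIFO eviction rule, and all shapes are assumptions for illustration; the paper's actual memory update and cross-modal wiring may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(q, k, v):
    """Scaled dot-product attention: queries from one modality attend
    over keys/values from another (or from the memory bank)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

class MemoryBank:
    """Hypothetical bounded FIFO bank of past chunk representations,
    so inference only ever attends over at most `capacity` entries."""
    def __init__(self, capacity, d):
        self.capacity, self.bank = capacity, np.zeros((0, d))

    def read(self):
        return self.bank

    def write(self, chunk):
        self.bank = np.concatenate([self.bank, chunk])[-self.capacity:]

# Stream two modalities chunk by chunk; the bank keeps cost bounded.
rng = np.random.default_rng(0)
d, bank = 16, MemoryBank(capacity=64, d=16)
for _ in range(10):                    # arbitrarily long stream
    text = rng.normal(size=(5, d))     # e.g. report tokens (stand-ins)
    sensor = rng.normal(size=(20, d))  # e.g. sensor frames (stand-ins)
    context = np.concatenate([sensor, bank.read()])
    fused = cross_modal_attention(text, context, context)
    bank.write(fused)                  # store fused chunk summary
print(bank.read().shape)               # at most (64, 16)
```

The design point this sketch captures is that attention never spans the full history: each chunk sees only the current inputs plus a fixed-size summary of the past, which is what allows training on arbitrarily long sequences and streaming at inference.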
Multiscale Attention via Wavelet Neural Operators for Vision Transformers
Transformers have achieved widespread success in computer vision. At their heart lies the Self-Attention (SA) mechanism, an inductive bias that associates each token in the input with every other token through a weighted basis. Standard SA has complexity quadratic in the sequence length, which limits its applicability to the long sequences that arise in high-resolution vision. Recently, inspired by operator learning for PDEs, Adaptive
Fourier Neural Operators (AFNO) were introduced for high resolution attention
based on global convolution that is efficiently implemented via FFT. However,
the AFNO global filter cannot adequately represent the small- and moderate-scale structures that commonly appear in natural images. To capture these coarse-to-fine scale structures, we introduce Multiscale Wavelet Attention (MWA), built on wavelet neural operators, which incurs linear complexity in the sequence length. We replace the attention in ViT with MWA, and our experiments on CIFAR and ImageNet classification demonstrate significant improvements over alternative Fourier-based attentions such as AFNO and the Global Filter Network (GFN).
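To illustrate the kind of operation involved, here is a one-dimensional sketch of wavelet-domain token mixing using PyWavelets: decompose the token sequence with a fast wavelet transform, scale each sub-band with a per-scale filter (learned in practice; random stand-ins here), and reconstruct. The function name, shapes, and the choice of Haar wavelet are assumptions; the actual MWA operator works on 2-D ViT feature maps and is more involved.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_token_mixing(x, filters, wavelet="haar", level=2):
    """Illustrative 1-D wavelet-domain token mixing (not the exact MWA op).

    x: (T, d) token sequence. Decompose along the token axis, apply a
    per-sub-band, per-channel gain, then reconstruct. The fast wavelet
    transform costs O(T), so the mixing is linear in sequence length,
    unlike quadratic self-attention.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level, axis=0)
    mixed = [c * f for c, f in zip(coeffs, filters)]
    return pywt.waverec(mixed, wavelet, axis=0)

T, d, level = 16, 8, 2
rng = np.random.default_rng(0)
x = rng.normal(size=(T, d))
# One gain per sub-band and channel: a level-2 "haar" decomposition
# yields one approximation band plus two detail bands.
n_bands = len(pywt.wavedec(x, "haar", level=level, axis=0))
filters = [rng.normal(size=(1, d)) for _ in range(n_bands)]
y = wavelet_token_mixing(x, filters)
print(y.shape)  # (16, 8)
```

Filtering several wavelet scales independently is what lets such an operator act on small- and moderate-scale content that a single global Fourier filter, which mixes all spatial scales at once, tends to smooth over.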