Transformers have achieved widespread success in computer vision. At their
heart lies the Self-Attention (SA) mechanism, an inductive bias that
associates each token in the input with every other token through a weighted
combination. The standard SA mechanism has quadratic complexity in the
sequence length, which limits its applicability to the long sequences that
arise in high-resolution vision. Recently, inspired by operator learning for
PDEs, Adaptive Fourier Neural Operators (AFNO) were introduced for
high-resolution attention based on global convolution, implemented efficiently
via the FFT. However, the global filtering of AFNO cannot adequately represent
the small- and moderate-scale
structures that commonly appear in natural images. To leverage these
coarse-to-fine scale structures, we introduce Multiscale Wavelet Attention
(MWA), built on wavelet neural operators, which incurs linear complexity in
the sequence size. We replace attention in ViT with MWA, and our experiments
on CIFAR and ImageNet classification demonstrate significant improvements over
alternative Fourier-based attention mechanisms such as AFNO and the Global
Filter Network (GFN).
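
As a rough, illustrative sketch (not the implementation described above), the snippet below contrasts the two token-mixing ideas mentioned here: Fourier-domain global filtering in the spirit of GFN/AFNO, and per-scale filtering of a wavelet decomposition in the spirit of MWA. The complex filter w, the per-level scale_weights, and the use of NumPy/PyWavelets are illustrative assumptions, not the authors' code.

```python
# Minimal sketch contrasting Fourier-domain and wavelet-domain token mixing,
# assuming tokens are laid out on an H x W grid with C channels.
import numpy as np
import pywt  # PyWavelets

def fourier_mixing(x, w):
    """Global-filter-style mixing (GFN/AFNO spirit): multiply the 2-D FFT of
    the token grid pointwise by a (here random, normally learnable) complex
    filter w.  x: (H, W, C) real tokens, w: (H, W, C) complex filter."""
    X = np.fft.fft2(x, axes=(0, 1))            # O(N log N) in the token count
    return np.fft.ifft2(X * w, axes=(0, 1)).real

def wavelet_mixing(x, scale_weights, wavelet="db2", levels=2):
    """Multiscale-wavelet-style mixing (MWA spirit): decompose the token grid
    with a discrete wavelet transform, rescale each level's sub-bands, and
    invert.  scale_weights is a list of per-level gains (hypothetical
    stand-ins for learnable parameters)."""
    out = np.empty_like(x)
    for c in range(x.shape[-1]):               # per-channel for clarity
        coeffs = pywt.wavedec2(x[..., c], wavelet, level=levels)
        coeffs = [coeffs[0] * scale_weights[0]] + [
            tuple(d * scale_weights[i + 1] for d in detail)
            for i, detail in enumerate(coeffs[1:])
        ]
        out[..., c] = pywt.waverec2(coeffs, wavelet)[: x.shape[0], : x.shape[1]]
    return out

H, W, C = 16, 16, 8
x = np.random.randn(H, W, C)
w = np.random.randn(H, W, C) + 1j * np.random.randn(H, W, C)
y_fourier = fourier_mixing(x, w)
y_wavelet = wavelet_mixing(x, [1.0, 0.5, 0.25])
print(y_fourier.shape, y_wavelet.shape)        # both (16, 16, 8)
```

The intended contrast is that the Fourier filter acts on global frequency modes only, whereas the wavelet version assigns separate weights to coarse and fine sub-bands, which is the coarse-to-fine behavior motivating MWA.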