Multi-manifold Attention for Vision Transformers
Vision Transformers are very popular nowadays due to their state-of-the-art
performance in several computer vision tasks, such as image classification and
action recognition. Although their performance has been greatly enhanced
through highly descriptive patch embeddings and hierarchical structures, there
is still limited research on utilizing additional data representations so as to
refine the self-attention map of a Transformer. To address this problem, a novel
attention mechanism, called multi-manifold multi-head attention, is proposed in
this work to substitute the vanilla self-attention of a Transformer. The
proposed mechanism models the input space in three distinct manifolds, namely
Euclidean, Symmetric Positive Definite and Grassmann, thus leveraging different
statistical and geometrical properties of the input for the computation of a
highly descriptive attention map. In this way, the proposed attention mechanism
can guide a Vision Transformer to become more attentive towards important
appearance, color and texture features of an image, leading to improved
classification and segmentation results, as shown by the experimental results
on well-known datasets.

Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
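
The abstract describes the core idea at a high level: attention scores are computed from three different representations of the same patch tokens (Euclidean, Symmetric Positive Definite and Grassmann) and fused into a single attention map. The following is a minimal, illustrative sketch of that idea, not the authors' implementation: the SPD branch is simplified to rank-1 covariance-like descriptors, the Grassmann branch to one-dimensional subspaces, and every class and function name below is hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


def euclidean_scores(q, k):
    """Standard scaled dot-product similarity in Euclidean space."""
    return q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5


def spd_scores(q, k, eps=1e-5):
    """Similarity between per-token SPD-like descriptors.

    Each token embedding x is mapped to a rank-1 matrix x x^T + eps*I, and
    descriptors are compared through an inner product on the flattened
    matrices (a crude simplification of true SPD geometry)."""
    def to_spd_feat(x):
        # (B, H, N, d) -> (B, H, N, d*d): flattened outer products
        outer = x.unsqueeze(-1) * x.unsqueeze(-2)
        eye = eps * torch.eye(x.shape[-1], device=x.device)
        return (outer + eye).flatten(-2)
    qs, ks = to_spd_feat(q), to_spd_feat(k)
    return qs @ ks.transpose(-2, -1) / qs.shape[-1] ** 0.5


def grassmann_scores(q, k):
    """Squared cosine similarity between normalized token directions, a crude
    stand-in for principal-angle similarity between 1-D subspaces on the
    Grassmann manifold."""
    qn = F.normalize(q, dim=-1)
    kn = F.normalize(k, dim=-1)
    return (qn @ kn.transpose(-2, -1)) ** 2


class MultiManifoldAttention(nn.Module):
    """Drop-in replacement for vanilla multi-head self-attention that mixes
    attention scores computed on three manifold-inspired representations."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable mixing weights over the three manifold branches.
        self.branch_weights = nn.Parameter(torch.zeros(3))

    def forward(self, x):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, H, N, head_dim)

        scores = torch.stack([
            euclidean_scores(q, k),
            spd_scores(q, k),
            grassmann_scores(q, k),
        ])  # (3, B, H, N, N)
        w = torch.softmax(self.branch_weights, dim=0).view(3, 1, 1, 1, 1)
        attn = torch.softmax((w * scores).sum(0), dim=-1)

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 197, 384)  # e.g. ViT-S patch tokens
    print(MultiManifoldAttention(384, num_heads=6)(x).shape)  # (2, 197, 384)

In this sketch the three branch scores are mixed with learnable softmax weights before a single softmax over tokens; the paper's actual manifold computations and fusion scheme may differ.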