Advancements in computer vision research have established the transformer
architecture as the state of the art in computer vision tasks. One known
drawback of the transformer architecture is its high parameter count, which can
lead to a more complex and less efficient model. This paper aims to reduce the
number of parameters and, in turn, make the transformer more efficient. We
present the Sparse Transformer (SparTa) block, a modified transformer block
that adds a sparse token converter to reduce the number of tokens used. We use
the SparTa block inside the Swin-T architecture (SparseSwin) to leverage Swin's
ability to downsample its input and reduce the number of initial tokens to be
computed.
to be calculated. The proposed SparseSwin model outperforms other state of the
art models in image classification with an accuracy of 86.96%, 97.43%, and
85.35% on the ImageNet100, CIFAR10, and CIFAR100 datasets respectively. Despite
its fewer parameters, the result highlights the potential of a transformer
architecture using a sparse token converter with a limited number of tokens to
optimize the use of the transformer and improve its performance
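
To make the token-reduction idea concrete, below is a minimal PyTorch sketch of a block that shrinks a long token sequence to a small, fixed set of tokens before attention. It assumes the converter is a learned linear map across the token dimension; the class names (SparseTokenConverter, SparTaBlock), the use of nn.TransformerEncoderLayer, and all dimensions (3136 input tokens, 49 sparse tokens, embedding size 96) are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch only; the actual SparTa block may differ in detail.
import torch
import torch.nn as nn

class SparseTokenConverter(nn.Module):
    """Reduces an input sequence of N tokens to a fixed, smaller set of
    tokens via a learned linear map over the token dimension (an assumed
    mechanism for this sketch)."""
    def __init__(self, num_input_tokens: int, num_sparse_tokens: int):
        super().__init__()
        # Learnable mixing matrix of shape (num_sparse_tokens, num_input_tokens)
        self.reduce = nn.Linear(num_input_tokens, num_sparse_tokens, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, dim) -> transpose so the Linear mixes across tokens
        x = x.transpose(1, 2)        # (batch, dim, N)
        x = self.reduce(x)           # (batch, dim, num_sparse_tokens)
        return x.transpose(1, 2)     # (batch, num_sparse_tokens, dim)

class SparTaBlock(nn.Module):
    """A standard transformer encoder layer preceded by the sparse token
    converter, so self-attention runs over far fewer tokens."""
    def __init__(self, dim: int, num_input_tokens: int,
                 num_sparse_tokens: int, num_heads: int = 8):
        super().__init__()
        self.converter = SparseTokenConverter(num_input_tokens,
                                              num_sparse_tokens)
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(self.converter(x))

# Example: 3136 tokens from an early Swin stage reduced to 49 sparse tokens.
tokens = torch.randn(2, 3136, 96)
out = SparTaBlock(dim=96, num_input_tokens=3136, num_sparse_tokens=49)(tokens)
print(out.shape)  # torch.Size([2, 49, 96])
```

Since attention cost grows quadratically with sequence length, shrinking 3136 tokens to 49 in this sketch cuts the attention computation by roughly three orders of magnitude, which is the efficiency lever the abstract describes.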