Vision Transformers have achieved great success in computer vision,
delivering exceptional performance across various tasks. However, their
inherent reliance on sequential input forces images to be manually partitioned
into patch sequences, which disrupts the image's intrinsic structural and
semantic continuity. To address this, we propose a novel Pattern Transformer
(Patternformer) that adaptively converts images into pattern sequences for
Transformer input. Specifically, we employ a Convolutional Neural Network to
extract various patterns from the input image, with each channel representing a
unique pattern that is fed into the subsequent Transformer as a visual token.
By enabling the network to optimize these patterns, each pattern concentrates
on its local region of interest, thereby preserving its intrinsic structural
and semantic information. Employing only the vanilla ResNet and Transformer, we
achieve state-of-the-art performance on CIFAR-10 and CIFAR-100 and
competitive results on ImageNet.
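The channel-to-token conversion described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (not the paper's actual implementation): a small CNN stands in for the ResNet backbone, and the class name `PatternTokenizer`, the projection layer, and all dimensions are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class PatternTokenizer(nn.Module):
    """Hypothetical sketch: a CNN produces C feature channels; each
    channel is flattened and projected to one visual token embedding."""

    def __init__(self, num_patterns=64, embed_dim=128, img_size=32):
        super().__init__()
        # Stand-in for the ResNet backbone used in the paper.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, num_patterns, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Project each flattened channel (H*W values) to the token dimension.
        self.proj = nn.Linear(img_size * img_size, embed_dim)

    def forward(self, x):
        feats = self.cnn(x)        # (B, C, H, W): C learned patterns
        tokens = feats.flatten(2)  # (B, C, H*W): one vector per channel
        return self.proj(tokens)   # (B, C, embed_dim): C visual tokens

tokenizer = PatternTokenizer()
out = tokenizer(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 64, 128])
```

The resulting `(B, C, embed_dim)` sequence can then be consumed by a standard Transformer encoder, with each of the C channels playing the role that a fixed image patch plays in a vanilla ViT.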