Transformer networks have recently demonstrated superior performance on many
computer vision tasks. In a multi-view 3D reconstruction algorithm following
this paradigm, self-attention must process a large number of intricate image
tokens carrying massive amounts of information when many views are given as
input. This glut of information makes the model extremely difficult to learn.
To alleviate the problem, recent methods either compress the number of tokens
representing each view or discard the attention operations between tokens from
different views. Both strategies, however, degrade performance.
Therefore, we propose long-range grouping attention (LGA) based on the
divide-and-conquer principle. Tokens from all views are divided into groups
that undergo separate attention operations. Because the tokens in each group
are sampled from all views, they provide a macro representation of the view
they come from, and the diversity among different groups guarantees rich
feature learning.
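As a concrete illustration, the following is a minimal PyTorch sketch of how such grouping attention could be realized, assuming a strided sampling rule that assigns every G-th token of each view to the same group; the class name and all hyper-parameters are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn


class LongRangeGroupingAttention(nn.Module):
    """Sketch of long-range grouping attention (LGA).

    Tokens from all views are partitioned into groups by strided sampling,
    so every group contains tokens drawn from every view, and standard
    multi-head attention is applied within each group independently.
    """

    def __init__(self, dim: int, num_heads: int = 8, num_groups: int = 4):
        super().__init__()
        self.num_groups = num_groups
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, V, N, C) -- batch, views, tokens per view, channels
        B, V, N, C = x.shape
        G = self.num_groups
        assert N % G == 0, "tokens per view must be divisible by the group count"

        # Strided sampling: group g gathers tokens g, g+G, g+2G, ... from every
        # view, so each group sees a sparse, long-range slice of all views.
        x = x.view(B, V, N // G, G, C)
        x = x.permute(0, 3, 1, 2, 4).contiguous()   # (B, G, V, N/G, C)
        groups = x.view(B * G, V * (N // G), C)     # one token sequence per group

        out, _ = self.attn(groups, groups, groups)  # attention inside each group

        # Scatter tokens back to their original per-view positions.
        out = out.view(B, G, V, N // G, C).permute(0, 2, 3, 1, 4)
        return out.reshape(B, V, N, C)


# Example: 2 samples, 8 views, 196 tokens per view, 256 channels.
lga = LongRangeGroupingAttention(dim=256, num_heads=8, num_groups=4)
fused = lga(torch.randn(2, 8, 196, 256))   # shape preserved: (2, 8, 196, 256)
```

Under these assumed settings, each attention operation covers a 392-token slice drawn from all 8 views instead of the full 1,568-token sequence, which is the sense in which the divide-and-conquer grouping keeps cross-view attention tractable.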
An effective and efficient encoder can thus be established that connects
inter-view features using LGA and extracts intra-view features using the
standard self-attention layer. Moreover, a novel progressive upsampling
decoder is also designed for voxel generation at a relatively high resolution.
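For intuition, here is a minimal sketch of what such a progressive upsampling voxel decoder could look like, assuming a 4x4x4 coarse token grid refined by a stack of transposed 3D convolutions into a 32^3 occupancy volume; the stage count, channel widths, and resolutions are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn


class ProgressiveUpsamplingDecoder(nn.Module):
    """Sketch of a progressive upsampling voxel decoder.

    A coarse 4x4x4 feature grid is upsampled stage by stage with transposed
    3D convolutions until a 32^3 occupancy volume is produced.
    """

    def __init__(self, in_dim: int = 256):
        super().__init__()
        self.stages = nn.Sequential(
            nn.ConvTranspose3d(in_dim, 128, 4, stride=2, padding=1),  # 4^3 -> 8^3
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),      # 8^3 -> 16^3
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),       # 16^3 -> 32^3
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(32, 1, kernel_size=1)  # per-voxel occupancy logit

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, 64, in_dim) -- 64 fused tokens viewed as a 4x4x4 grid
        B, T, C = feat.shape
        vol = feat.transpose(1, 2).reshape(B, C, 4, 4, 4)
        return torch.sigmoid(self.head(self.stages(vol)))  # (B, 1, 32, 32, 32)


decoder = ProgressiveUpsamplingDecoder(in_dim=256)
voxels = decoder(torch.randn(2, 64, 256))   # (2, 1, 32, 32, 32) occupancy grid
```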
Building on the above, we construct a powerful transformer-based network
called LRGT. Experimental results on ShapeNet verify that our method achieves
state-of-the-art accuracy in multi-view reconstruction. Code will be available at
https://github.com/LiyingCV/Long-Range-Grouping-Transformer.