In recent years, many video tasks have achieved breakthroughs by utilizing vision transformers and decoupling spatial and temporal feature extraction. Although multi-view 3D reconstruction also takes multiple images as input, it cannot directly inherit this success because the associations between unstructured views are entirely ambiguous: there is no usable prior relationship analogous to the temporal coherence of video.
To solve this problem, we propose a novel transformer network for Unstructured Multiple Images (UMIFormer). It exploits transformer blocks for decoupled intra-view encoding and specially designed blocks for token rectification, which mine the correlations between similar tokens from different views to achieve decoupled inter-view encoding. Afterward, all tokens acquired from the various branches are compressed into a fixed-size compact representation that, by leveraging the similarities between tokens, still preserves rich information for reconstruction.
We empirically demonstrate on ShapeNet that our decoupled learning method adapts well to unstructured multiple images. The experiments also verify that our model outperforms existing SOTA methods by a large
margin. Code will be available at https://github.com/GaryZhu1996/UMIFormer.

Comment: Accepted by ICCV 2023
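
To give a concrete feel for the inter-view token rectification described above, the snippet below is a minimal sketch of similarity-based cross-view token aggregation: each token is refined using its most similar tokens gathered from all views. This is an illustrative assumption, not the authors' implementation; the function name, the top-k weighting rule, and the residual update are all hypothetical choices.

```python
import torch
import torch.nn.functional as F


def rectify_tokens_by_cross_view_similarity(tokens, top_k=4):
    """Sketch of cross-view token rectification (hypothetical).

    tokens: (V, N, C) tensor -- V views, N tokens per view, C channels.
    Each token is updated with a similarity-weighted average of its
    top-k most similar tokens drawn from the pool of all views.
    """
    V, N, C = tokens.shape
    flat = tokens.reshape(V * N, C)              # pool tokens from all views
    normed = F.normalize(flat, dim=-1)
    sim = normed @ normed.t()                    # (V*N, V*N) cosine similarities
    sim.fill_diagonal_(float("-inf"))            # exclude self-matches
    topk_vals, topk_idx = sim.topk(top_k, dim=-1)
    weights = topk_vals.softmax(dim=-1)          # normalize the k similarities
    neighbors = flat[topk_idx]                   # (V*N, k, C) similar tokens
    rectified = (weights.unsqueeze(-1) * neighbors).sum(dim=1)
    # Residual update keeps the original intra-view information.
    return (flat + rectified).reshape(V, N, C)


if __name__ == "__main__":
    views = torch.randn(5, 196, 768)             # e.g. 5 views of ViT patch tokens
    out = rectify_tokens_by_cross_view_similarity(views)
    print(out.shape)                             # torch.Size([5, 196, 768])
```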