Learning Transformation Synchronization
Reconstructing the 3D model of a physical object typically requires us to
align the depth scans obtained from different camera poses into the same
coordinate system. Solutions to this global alignment problem usually proceed
in two steps. The first step estimates relative transformations between pairs
of scans using an off-the-shelf technique. Because each pair of scans shares
only limited information, the resulting relative transformations are generally
noisy. The second step then jointly optimizes the relative transformations
among all input depth scans. A natural constraint used in this step is the
cycle-consistency constraint, which allows us to prune incorrect relative
transformations by detecting inconsistent cycles, since composing the relative
transformations along any cycle should return the identity. The performance of such
approaches, however, heavily relies on the quality of the input relative
transformations. Instead of merely using the relative transformations as the
input to perform transformation synchronization, we propose to use a neural
network to learn the weights associated with each relative transformation. Our
approach alternates between transformation synchronization using weighted
relative transformations and predicting new weights of the input relative
transformations using a neural network. We demonstrate the usefulness of this
approach across a wide range of datasets.
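
As a concrete illustration of the alternating scheme described above, the sketch
below alternates a weighted synchronization step with an edge re-weighting step,
restricted to rotations for brevity. The synchronization step uses a standard
spectral relaxation as a stand-in for the paper's solver, the convention
R_ij ≈ R_i R_j^T is an assumption, and predict_weights is a hand-crafted
residual-based proxy for the learned weighting network; none of the function
names or details below come from the paper itself.

import numpy as np

def synchronize_rotations(relative_rots, weights, n):
    """Weighted rotation synchronization via a standard spectral relaxation:
    stack the weighted relative rotations into a block matrix and read the
    absolute rotations off its leading eigenvectors (up to a global gauge)."""
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        M[3*i:3*i+3, 3*i:3*i+3] = np.eye(3)
    for (i, j), R_ij in relative_rots.items():
        M[3*i:3*i+3, 3*j:3*j+3] = weights[(i, j)] * R_ij
        M[3*j:3*j+3, 3*i:3*i+3] = weights[(i, j)] * R_ij.T
    _, vecs = np.linalg.eigh(M)
    V = vecs[:, -3:]                      # top-3 eigenvectors
    abs_rots = []
    for i in range(n):
        # Project each 3x3 block back onto SO(3).
        U, _, Vt = np.linalg.svd(V[3*i:3*i+3, :])
        R = U @ Vt
        if np.linalg.det(R) < 0:
            R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
        abs_rots.append(R)
    return abs_rots

def predict_weights(relative_rots, abs_rots):
    """Stand-in for the learned weighting network: down-weight edges whose
    relative rotation disagrees with the current absolute estimates
    (a hand-crafted proxy; the paper learns this mapping with a neural net)."""
    weights = {}
    for (i, j), R_ij in relative_rots.items():
        residual = np.linalg.norm(R_ij - abs_rots[i] @ abs_rots[j].T)
        weights[(i, j)] = np.exp(-residual)
    return weights

def alternating_synchronization(relative_rots, n, num_iters=5):
    """Alternate between weighted synchronization and re-weighting the edges."""
    weights = {edge: 1.0 for edge in relative_rots}   # start from uniform weights
    abs_rots = None
    for _ in range(num_iters):
        abs_rots = synchronize_rotations(relative_rots, weights, n)
        weights = predict_weights(relative_rots, abs_rots)
    return abs_rots

In this toy version the weights are recomputed from residuals of the current
estimate; the paper replaces that hand-crafted rule with a trained network,
which is what allows unreliable relative transformations to be suppressed even
when simple residual heuristics fail.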