Subspace Procrustes Analysis
Postprint (author's final draft)
Convolutional neural network architecture for geometric matching
We address the problem of determining correspondences between two images in
agreement with a geometric model such as an affine or thin-plate spline
transformation, and estimating its parameters. The contributions of this work
are three-fold. First, we propose a convolutional neural network architecture
for geometric matching. The architecture is based on three main components that
mimic the standard steps of feature extraction, matching and simultaneous
inlier detection and model parameter estimation, while being trainable
end-to-end. Second, we demonstrate that the network parameters can be trained
from synthetically generated imagery without the need for manual annotation and
that our matching layer significantly improves generalization to previously
unseen images. Finally, we show that the same model can perform both
instance-level and category-level matching giving state-of-the-art results on
the challenging Proposal Flow dataset.
Comment: In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017)
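The geometric models the abstract mentions can be as simple as a 2-D affine transform; given putative point correspondences, its six parameters have a closed-form linear least-squares solution. A minimal NumPy sketch of that classical baseline (illustrative only, not the paper's trained regression network; the function name is ours):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine fit: dst ~= src @ A.T + t.
    src, dst: (N, 2) arrays of corresponding points, N >= 3."""
    n = src.shape[0]
    # Design matrix with homogeneous coordinate [x, y, 1] per point.
    X = np.hstack([src, np.ones((n, 1))])             # (N, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    A = params[:2].T  # 2x2 linear part
    t = params[2]     # translation
    return A, t

# Recover a known transform from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.random((10, 2))
A_true = np.array([[1.2, 0.1], [-0.2, 0.9]])
t_true = np.array([0.5, -0.3])
dst = src @ A_true.T + t_true
A, t = estimate_affine(src, dst)
```

The paper's contribution is precisely to make this estimation robust and end-to-end trainable when the correspondences themselves are uncertain; the linear solve above assumes clean, outlier-free matches.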
Calipso: Physics-based Image and Video Editing through CAD Model Proxies
We present Calipso, an interactive method for editing images and videos in a
physically-coherent manner. Our main idea is to realize physics-based
manipulations by running a full physics simulation on proxy geometries given by
non-rigidly aligned CAD models. Running these simulations allows us to apply
new, unseen forces to move or deform selected objects, change physical
parameters such as mass or elasticity, or even add entire new objects that
interact with the rest of the underlying scene. In Calipso, the user makes
edits directly in 3D; these edits are processed by the simulation and then
transferred to the target 2D content using shape-to-image correspondences in a
photo-realistic rendering process. To align the CAD models, we introduce an
efficient CAD-to-image alignment procedure that jointly minimizes for rigid and
non-rigid alignment while preserving the high-level structure of the input
shape. Moreover, the user can choose to exploit image flow to estimate scene
motion, producing coherent physical behavior with ambient dynamics. We
demonstrate Calipso's physics-based editing on a wide range of examples
producing a wide variety of physical behaviors while preserving geometric and
visual consistency.
Comment: 11 pages
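For the rigid component of such a CAD-to-image alignment, the classic orthogonal-Procrustes (Kabsch) solution recovers the optimal rotation and translation between corresponding 3-D point sets in closed form. A hedged sketch of that standard building block (our own helper, not Calipso's actual joint rigid/non-rigid solver):

```python
import numpy as np

def rigid_align(P, Q):
    """Best rigid transform (R, t) minimizing sum ||R @ p_i + t - q_i||^2.
    P, Q: (N, 3) corresponding 3-D points. Classic Kabsch/Procrustes."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known rotation/translation from noiseless correspondences.
rng = np.random.default_rng(1)
P = rng.random((12, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
```

Calipso's alignment additionally handles non-rigid deformation while preserving the shape's high-level structure; the sketch above covers only the rigid sub-problem under the assumption of known correspondences.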
Multiframe Scene Flow with Piecewise Rigid Motion
We introduce a novel multiframe scene flow approach that jointly optimizes
the consistency of the patch appearances and their local rigid motions from
RGB-D image sequences. In contrast to the competing methods, we take advantage
of an oversegmentation of the reference frame and robust optimization
techniques. We formulate scene flow recovery as a global non-linear least
squares problem which is iteratively solved by a damped Gauss-Newton approach.
As a result, we obtain a qualitatively new level of accuracy in RGB-D based
scene flow estimation which can potentially run in real-time. Our method can
handle challenging cases with rigid, piecewise rigid, articulated and moderate
non-rigid motion, and does not rely on prior knowledge about the types of
motions and deformations. Extensive experiments on synthetic and real data show
that our method outperforms the state of the art.
Comment: International Conference on 3D Vision (3DV), Qingdao, China, October 2017
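The damped Gauss-Newton scheme the abstract refers to is the standard way to stabilize a non-linear least-squares solve: each step solves the normal equations augmented with a damping term. A toy sketch on a scalar curve fit (our own simplified illustration, not the paper's scene-flow objective):

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, lam=1e-3, iters=50):
    """Damped Gauss-Newton: solve (J^T J + lam*I) dx = -J^T r per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Fit y = exp(a * t) to data generated with a = 0.7.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
res = lambda x: np.exp(x[0] * t) - y
jac = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
a = damped_gauss_newton(res, jac, [0.0])
```

In the paper's setting, the residuals couple patch-appearance consistency with local rigid-motion terms over the oversegmented reference frame; the damping plays the same stabilizing role as in this scalar example.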
Deformable GANs for Pose-based Human Image Generation
In this paper we address the problem of generating person images conditioned
on a given pose. Specifically, given an image of a person and a target pose, we
synthesize a new image of that person in the novel pose. In order to deal with
pixel-to-pixel misalignments caused by the pose differences, we introduce
deformable skip connections in the generator of our Generative Adversarial
Network. Moreover, a nearest-neighbour loss is proposed instead of the common
L1 and L2 losses in order to match the details of the generated image with the
target image. We test our approach using photos of persons in different poses,
and we compare our method with previous work in this area, showing
state-of-the-art results on two benchmarks. Our method can be applied to the
wider field of deformable object generation, provided that the pose of the
articulated object can be extracted using a keypoint detector.
Comment: CVPR 2018 version
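The intuition behind the nearest-neighbour loss can be sketched by comparing each generated pixel against its best match within a small neighbourhood of the target, which tolerates the small misalignments that inflate a plain L1 loss. A minimal single-channel NumPy sketch (our own approximation; the paper's actual loss operates on convolutional feature patches):

```python
import numpy as np

def nn_loss(gen, tgt, k=1):
    """Nearest-neighbour loss sketch: each generated pixel is scored against
    the closest target pixel in a (2k+1)x(2k+1) window, so a small spatial
    shift costs nothing, unlike a per-pixel L1 loss.
    gen, tgt: (H, W) float arrays (single channel for simplicity)."""
    H, W = gen.shape
    pad = np.pad(tgt, k, mode='edge')
    # All shifted views of the target within the search window.
    shifts = [pad[dy:dy + H, dx:dx + W]
              for dy in range(2 * k + 1) for dx in range(2 * k + 1)]
    diffs = np.abs(np.stack(shifts) - gen)  # (S, H, W)
    return diffs.min(axis=0).mean()

# A one-pixel horizontal shift: plain L1 penalizes it, the NN loss does not.
tgt = np.arange(25, dtype=float).reshape(5, 5)
gen = tgt.copy()
gen[:, 1:] = tgt[:, :-1]
loss_nn = nn_loss(gen, tgt, k=1)
loss_l1 = np.abs(gen - tgt).mean()
```

This shift tolerance is the point of the design: the deformable skip connections handle large pose-induced misalignments, while the nearest-neighbour loss absorbs the small residual ones.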