SFNet: Learning Object-aware Semantic Correspondence
We address the problem of semantic correspondence, that is, establishing a
dense flow field between images depicting different instances of the same
object or scene category. We propose to use images annotated with binary
foreground masks and subjected to synthetic geometric deformations to train a
convolutional neural network (CNN) for this task. Using these masks as part of
the supervisory signal offers a good compromise between semantic flow methods,
where the amount of training data is limited by the cost of manually selecting
point correspondences, and semantic alignment ones, where the regression of a
single global geometric transformation between images may be sensitive to
image-specific details such as background clutter. We propose a new CNN
architecture, dubbed SFNet, which implements this idea. It leverages a new and
differentiable version of the argmax function for end-to-end training, with a
loss that combines mask and flow consistency with smoothness terms.
Experimental results demonstrate the effectiveness of our approach, which
significantly outperforms the state of the art on standard benchmarks.
Comment: CVPR 2019 oral paper
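The differentiable argmax mentioned in the abstract is commonly realized as a "soft-argmax": a softmax-weighted average of index positions, which is smooth and admits gradients, unlike the hard argmax. The sketch below is a minimal 1D illustration of that general idea, not the SFNet implementation; the function name and the `beta` temperature parameter are assumptions for the example.

```python
import numpy as np

def soft_argmax_1d(scores, beta=10.0):
    """Differentiable approximation of argmax (illustrative sketch).

    Computes a softmax over `scores` (sharpened by `beta`) and returns
    the expected index position under that distribution. As beta grows,
    the result approaches the hard argmax; unlike argmax, it is smooth
    in `scores` and thus usable for end-to-end training.
    """
    scores = np.asarray(scores, dtype=np.float64)
    # Subtract the max for numerical stability before exponentiating.
    weights = np.exp(beta * (scores - scores.max()))
    weights /= weights.sum()
    positions = np.arange(len(scores))
    return float(np.dot(weights, positions))
```

For a 2D correspondence map, the same trick is applied independently along each spatial axis to obtain sub-pixel match coordinates.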
Real-time Monocular Object SLAM
We present a real-time object-based SLAM system that leverages the largest
object database to date. Our approach comprises two main components: 1) a
monocular SLAM algorithm that exploits object rigidity constraints to improve
the map and find its real scale, and 2) a novel object recognition algorithm
based on bags of binary words, which provides live detections with a database
of 500 3D objects. The two components work together and benefit each other: the
SLAM algorithm accumulates information from the observations of the objects,
anchors object features to special map landmarks and sets constraints on the
optimization. At the same time, objects partially or fully located within the
map are used as a prior to guide the recognition algorithm, achieving higher
recall. We evaluate our proposal in five real environments, showing
improvements in map accuracy and efficiency with respect to other
state-of-the-art techniques.
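Recognition from "bags of binary words" rests on comparing binary feature descriptors (e.g. ORB/BRIEF) by Hamming distance, which reduces to XOR plus a population count. The following sketch shows that core matching step under simplified assumptions (brute-force nearest neighbour, descriptors packed as `uint8` arrays); it is an illustration of the general technique, not the paper's vocabulary-tree recognition algorithm, and `max_dist` is an assumed threshold.

```python
import numpy as np

def hamming_distance(a, b):
    """Hamming distance between two binary descriptors packed as uint8."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_descriptors(query, database, max_dist=30):
    """Brute-force nearest-neighbour matching of binary descriptors.

    For each query descriptor, finds the database descriptor with the
    smallest Hamming distance and keeps the pair if that distance is
    at most `max_dist`. Returns a list of (query_idx, db_idx) pairs.
    """
    matches = []
    for i, q in enumerate(query):
        dists = [hamming_distance(q, d) for d in database]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches
```

A bag-of-words index replaces the brute-force loop with a hierarchical vocabulary so that live detection against a 500-object database stays real-time.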
Skeleton Driven Non-rigid Motion Tracking and 3D Reconstruction
This paper presents a method which can track and 3D reconstruct the non-rigid
surface motion of human performance using a moving RGB-D camera. 3D
reconstruction of marker-less human performance is a challenging problem due to
the large range of articulated motions and considerable non-rigid deformations.
Current approaches use local optimization for tracking. These methods need many
iterations to converge and may get stuck in local minima during sudden
articulated movements. We propose a puppet model-based tracking approach using
skeleton prior, which provides a better initialization for tracking articulated
movements. The proposed approach uses an aligned puppet model to estimate
correct correspondences for human performance capture. We also contribute a
synthetic dataset which provides ground truth locations for frame-by-frame
geometry and skeleton joints of human subjects. Experimental results show that
our approach is more robust when faced with sudden articulated motions, and
provides better 3D reconstruction compared to the existing state-of-the-art
approaches.
Comment: Accepted in DICTA 201