Hand Keypoint Detection in Single Images using Multiview Bootstrapping
We present an approach that uses a multi-camera system to train fine-grained
detectors for keypoints that are prone to occlusion, such as the joints of a
hand. We call this procedure multiview bootstrapping: first, an initial
keypoint detector is used to produce noisy labels in multiple views of the
hand. The noisy detections are then triangulated in 3D using multiview geometry
or marked as outliers. Finally, the reprojected triangulations are used as new
labeled training data to improve the detector. We repeat this process,
generating more labeled data in each iteration. We analytically derive a result
relating the minimum number of views needed to achieve target true and false
positive rates for a given detector. The method is used to train a hand keypoint
detector for single images. The resulting keypoint detector runs in realtime on
RGB images and has accuracy comparable to methods that use depth sensors. The
single view detector, triangulated over multiple views, enables 3D markerless
hand motion capture with complex object interactions.
Comment: CVPR 2017
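As a rough illustration of the bootstrapping loop this abstract describes, the Python sketch below triangulates per-view detections and keeps only geometrically consistent reprojections as new labels. The `Detector` callable, the camera projection matrices, and the inlier threshold are assumptions for illustration; the paper's actual RANSAC-based triangulation and retraining pipeline are not reproduced here.

```python
# Minimal sketch of one multiview-bootstrapping round, under the assumptions
# stated above. Uses plain all-view DLT triangulation plus a reprojection-error
# check instead of the paper's RANSAC over view subsets.
import numpy as np

def triangulate(P_list, pts2d):
    """Linear (DLT) triangulation of one keypoint from several views.

    P_list: list of 3x4 camera projection matrices.
    pts2d:  (V, 2) array of the keypoint's 2D detections, one per view.
    """
    A = []
    for P, (x, y) in zip(P_list, pts2d):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]

def reproject(P, X):
    """Project a 3D point X with camera P, returning pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def bootstrap_round(detector, images, P_list, inlier_thresh=5.0):
    """Detect in all views, triangulate, keep geometrically consistent labels."""
    new_labels = []  # (view index, keypoint id, reprojected 2D label)
    detections = [detector(img) for img in images]  # (K, 2) array per view
    num_keypoints = detections[0].shape[0]
    for k in range(num_keypoints):
        pts = np.stack([d[k] for d in detections])
        X = triangulate(P_list, pts)
        # Views whose detection reprojects far from X are marked as outliers.
        errs = [np.linalg.norm(reproject(P, X) - pt)
                for P, pt in zip(P_list, pts)]
        inliers = [v for v, e in enumerate(errs) if e < inlier_thresh]
        if len(inliers) >= 3:  # require multiview consensus
            for v in inliers:
                new_labels.append((v, k, reproject(P_list[v], X)))
    return new_labels
```

Each round's `new_labels` would be appended to the training set before retraining the detector, and the loop repeated.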
Learning Analysis-by-Synthesis for 6D Pose Estimation in RGB-D Images
Analysis-by-synthesis has been a successful approach for many tasks in
computer vision, such as 6D pose estimation of an object in an RGB-D image
which is the topic of this work. The idea is to compare the observation with
the output of a forward process, such as a rendered image of the object of
interest in a particular pose. Due to occlusion or complicated sensor noise, it
can be difficult to perform this comparison in a meaningful way. We propose an
approach that "learns to compare", while taking these difficulties into
account. This is done by describing the posterior density of a particular
object pose with a convolutional neural network (CNN) that compares an observed
and rendered image. The network is trained with the maximum likelihood
paradigm. We observe empirically that the CNN does not specialize to the
geometry or appearance of specific objects, and it can be used with objects of
vastly different shapes and appearances, and in different backgrounds. Compared
to the state of the art, we demonstrate a significant improvement on two
datasets which together include eleven objects, cluttered backgrounds, and
heavy occlusion.
Comment: 16 pages, 8 figures
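A minimal PyTorch sketch of the "learns to compare" idea: a small CNN takes an observed RGB-D patch and a rendered patch of a pose hypothesis and emits a scalar score, interpretable as an unnormalized log-posterior for that pose. The architecture, channel layout, and patch size here are illustrative assumptions, not the network described in the paper.

```python
# Sketch of a comparison CNN for analysis-by-synthesis pose scoring,
# under the assumptions stated above.
import torch
import torch.nn as nn

class CompareNet(nn.Module):
    def __init__(self, in_channels=8):  # 4 (observed RGB-D) + 4 (rendered)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(64, 1)  # scalar energy per pose hypothesis

    def forward(self, observed, rendered):
        # Channel-wise concatenation lets early layers compare the two inputs.
        x = torch.cat([observed, rendered], dim=1)
        return self.score(self.features(x).flatten(1))

# Scoring a batch of pose hypotheses: render each hypothesis, compare it
# with the observed patch, and keep the highest-scoring pose.
net = CompareNet()
observed = torch.rand(10, 4, 64, 64)  # same observed patch per hypothesis
rendered = torch.rand(10, 4, 64, 64)  # one rendering per pose hypothesis
best_hypothesis = net(observed, rendered).argmax()
```

Because the network only ever sees pairs of patches rather than object identities, a comparison function of this shape can plausibly generalize across object geometries, which is the empirical observation the abstract reports.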
3D Object Discovery and Modeling Using Single RGB-D Images Containing Multiple Object Instances
Unsupervised object modeling is important in robotics, especially for
handling a large set of objects. We present a method for unsupervised 3D object
discovery, reconstruction, and localization that exploits multiple instances of
an identical object contained in a single RGB-D image. The proposed method does
not rely on segmentation, scene knowledge, or user input, and thus is easily
scalable. Our method aims to find recurrent patterns in a single RGB-D image by
utilizing appearance and geometry of the salient regions. We extract keypoints
and match them in pairs based on their descriptors. We then group mutually
matching keypoints into triplets, applying several geometric criteria to
minimize false matches. The relative poses of the matched triplets are computed
and clustered to discover sets of triplet pairs with similar relative poses.
Triplets belonging to the same set are likely to belong to the same object and
are used to construct an initial object model. Detecting the remaining
instances with this initial model using RANSAC allows the model to be further
expanded and refined. The automatically generated object models are both compact and
descriptive. We show quantitative and qualitative results on RGB-D images with
various objects including some from the Amazon Picking Challenge. We also
demonstrate the use of our method in an object picking scenario with a robotic
arm.
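The pairing-and-triplet stage lends itself to a short sketch. The fragment below pairs keypoints by descriptor distance and estimates the relative rigid pose between two matched 3D triplets with the Kabsch algorithm; the pose clustering and the RANSAC-based model expansion are omitted, and all names are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of descriptor pairing and triplet-to-triplet relative pose,
# under the assumptions stated above.
import numpy as np

def match_pairs(descriptors, max_dist=0.3):
    """Pair keypoints whose descriptors are close in Euclidean distance."""
    pairs = []
    for i in range(len(descriptors)):
        for j in range(i + 1, len(descriptors)):
            if np.linalg.norm(descriptors[i] - descriptors[j]) < max_dist:
                pairs.append((i, j))
    return pairs

def relative_pose(src, dst):
    """Rigid transform (R, t) aligning triplet `src` onto `dst` (Kabsch).

    src, dst: (3, 3) arrays of corresponding 3D points, one row per point.
    """
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t
```

Poses computed for many triplet pairs would then be clustered; triplets whose relative poses fall in the same cluster are hypothesized to lie on distinct instances of the same object, seeding the initial model the abstract describes.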