MoreFusion: Multi-object Reasoning for 6D Pose Estimation from Volumetric Fusion
Robots and other smart devices need efficient object-based scene
representations from their on-board vision systems to reason about contact,
physics and occlusion. Precise models of recognized objects will play an important
role alongside non-parametric reconstructions of unrecognized structures. We
present a system which can estimate the accurate poses of multiple known
objects in contact and occlusion from real-time, embodied multi-view vision.
Our approach makes 3D object pose proposals from single RGB-D views,
accumulates pose estimates and non-parametric occupancy information from
multiple views as the camera moves, and performs joint optimization to estimate
consistent, non-intersecting poses for multiple objects in contact.
We verify the accuracy and robustness of our approach experimentally on two
object datasets: YCB-Video and our own challenging Cluttered YCB-Video. We
demonstrate a real-time robotics application in which a robot arm precisely and
methodically disassembles complicated piles of objects, using only on-board RGB-D
vision.

Comment: 10 pages, 10 figures, IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020
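
As a rough, hypothetical illustration of the accumulate-then-jointly-optimize idea described above, the Python sketch below fuses per-view translation estimates by confidence weighting and then nudges object centres apart whenever simple bounding spheres overlap. The data layout (per_view_estimates, conf, radii) and the sphere-based collision proxy are assumptions made for illustration only; the system itself accumulates full 6D pose hypotheses and volumetric occupancy, and optimizes the poses jointly against that occupancy information.

import numpy as np

def fuse_translations(per_view_estimates):
    # per_view_estimates: {object_id: [{"t": (3,) array, "conf": float}, ...]}
    # Confidence-weighted average of the translation estimates from each view.
    fused = {}
    for obj_id, views in per_view_estimates.items():
        t = np.stack([v["t"] for v in views])      # (N, 3) translations
        w = np.array([v["conf"] for v in views])   # (N,) confidences
        fused[obj_id] = (w[:, None] * t).sum(0) / w.sum()
    return fused

def resolve_intersections(centres, radii, iters=200, step=0.05):
    # Toy stand-in for the occupancy-aware joint optimization: push object
    # centres apart whenever their bounding spheres overlap, so the final
    # configuration is non-intersecting.
    c = {k: v.copy() for k, v in centres.items()}
    ids = list(c)
    for _ in range(iters):
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                d = c[a] - c[b]
                dist = float(np.linalg.norm(d)) + 1e-9
                overlap = radii[a] + radii[b] - dist
                if overlap > 0:
                    push = step * overlap * d / dist
                    c[a] += push
                    c[b] -= push
    return c

The actual optimization must also handle rotations and keep objects consistent with the free and occupied space observed across views, which the sphere heuristic above only gestures at.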
XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera
We present a real-time approach for multi-person 3D motion capture at over 30
fps using a single RGB camera. It operates successfully in generic scenes which
may contain occlusions by objects and by other people. Our method operates in
successive stages. The first stage is a convolutional neural network (CNN) that
estimates 2D and 3D pose features along with identity assignments for all
visible joints of all individuals. We contribute a new architecture for this
CNN, called SelecSLS Net, that uses novel selective long and short range skip
connections to improve the information flow, allowing for a drastically faster
network without compromising accuracy. In the second stage, a fully connected
neural network turns the possibly partial (on account of occlusion) 2D pose and
3D pose features for each subject into a complete 3D pose estimate per
individual. The third stage applies space-time skeletal model fitting to the
predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose,
and enforce temporal coherence. Our method returns the full skeletal pose in
joint angles for each subject. This is a further key distinction from previous
work, which does not produce joint-angle results for a coherent skeleton in real
time for multi-person scenes. The proposed system runs on consumer hardware at
a previously unseen speed of more than 30 fps given 512x320 images as input
while achieving state-of-the-art accuracy, which we will demonstrate on a range
of challenging real-world scenes.

Comment: To appear in ACM Transactions on Graphics (SIGGRAPH) 2020
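
To make the staged data flow concrete, the following is a minimal, hypothetical PyTorch sketch of stages two and three: an MLP that completes a per-person 3D pose from possibly partial 2D/3D joint features (missing joints zeroed out and flagged by a visibility mask), followed by exponential smoothing as a crude stand-in for the space-time skeletal model fitting. The joint count, layer sizes, and all names are assumptions for illustration; the stage-one SelecSLS Net CNN and the joint-angle skeleton fitting are not reproduced here.

import torch
import torch.nn as nn

NUM_JOINTS = 21  # assumed joint count, for illustration only

class PoseLiftingMLP(nn.Module):
    # Stage-2 sketch: complete a per-person 3D pose from partial 2D/3D features.
    def __init__(self, hidden=1024):
        super().__init__()
        in_dim = NUM_JOINTS * (2 + 3 + 1)  # 2D + 3D coordinates + visibility flag per joint
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_JOINTS * 3),
        )

    def forward(self, pose2d, pose3d, visible):
        # pose2d: (B, J, 2), pose3d: (B, J, 3), visible: (B, J) in {0, 1}
        x = torch.cat([pose2d.flatten(1), pose3d.flatten(1), visible.float()], dim=1)
        return self.net(x).view(-1, NUM_JOINTS, 3)

def temporal_smooth(per_frame_poses, alpha=0.8):
    # Stage-3 stand-in: exponential smoothing over frames instead of fitting
    # a kinematic skeleton in joint-angle space with temporal constraints.
    smoothed, prev = [], None
    for pose in per_frame_poses:
        prev = pose if prev is None else alpha * prev + (1 - alpha) * pose
        smoothed.append(prev)
    return smoothed

# Example: lift zeroed (fully occluded) features for a batch of four people.
lifted = PoseLiftingMLP()(torch.zeros(4, NUM_JOINTS, 2),
                          torch.zeros(4, NUM_JOINTS, 3),
                          torch.ones(4, NUM_JOINTS))  # -> (4, 21, 3)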