End-to-end Recovery of Human Shape and Pose
We describe Human Mesh Recovery (HMR), an end-to-end framework for
reconstructing a full 3D mesh of a human body from a single RGB image. In
contrast to most current methods that compute 2D or 3D joint locations, we
produce a richer and more useful mesh representation that is parameterized by
shape and 3D joint angles. The main objective is to minimize the reprojection
loss of keypoints, which allows our model to be trained using in-the-wild images
that have only ground-truth 2D annotations. However, the reprojection loss
alone leaves the model highly under-constrained. In this work we address this
problem by introducing an adversary trained to tell whether a human body
parameter is real or not using a large database of 3D human meshes. We show
that HMR can be trained with or without paired 2D-to-3D supervision.
We do not rely on intermediate 2D keypoint detections and infer 3D pose and
shape parameters directly from image pixels. Our model runs in real-time given
a bounding box containing the person. We demonstrate our approach on various
images in-the-wild and out-perform previous optimization based methods that
output 3D meshes and show competitive results on tasks such as 3D joint
location estimation and part segmentation.
Comment: CVPR 2018, Project page with code: https://akanazawa.github.io/hmr
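The keypoint reprojection loss that anchors HMR's training can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the weak-perspective camera model and the function names (`project_weak_perspective`, `reprojection_loss`) are assumptions for the sake of the example, and a per-joint visibility mask handles the occluded annotations common in in-the-wild datasets.

```python
import numpy as np

def project_weak_perspective(joints_3d, scale, trans):
    """Project 3D joints to 2D with a weak-perspective camera.

    joints_3d : (K, 3) array of 3D joint positions
    scale     : scalar camera scale s
    trans     : (2,) image-plane translation t
    """
    return scale * joints_3d[:, :2] + trans

def reprojection_loss(joints_3d, keypoints_2d, visibility, scale, trans):
    """L1 distance between projected joints and annotated 2D keypoints,
    counting only visible annotations (in-the-wild images often have
    occluded or unlabelled joints)."""
    proj = project_weak_perspective(joints_3d, scale, trans)
    residual = np.abs(proj - keypoints_2d).sum(axis=1)  # per-joint L1
    return (visibility * residual).sum() / max(visibility.sum(), 1)
```

Because this loss says nothing about whether the recovered shape and joint angles are plausible, the abstract's adversary acts as the missing prior, discriminating predicted body parameters from samples in a large mesh database.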
V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map
Most of the existing deep learning-based methods for 3D hand and human pose
estimation from a single depth map are based on a common framework that takes a
2D depth map and directly regresses the 3D coordinates of keypoints, such as
hand or human body joints, via 2D convolutional neural networks (CNNs). The
first weakness of this approach is the presence of perspective distortion in
the 2D depth map. While the depth map is intrinsically 3D data, many previous
methods treat depth maps as 2D images that can distort the shape of the actual
object through projection from 3D to 2D space. This compels the network to
perform perspective distortion-invariant estimation. The second weakness of the
conventional approach is that directly regressing 3D coordinates from a 2D
image is a highly non-linear mapping, which causes difficulty in the learning
procedure. To overcome these weaknesses, we first cast the 3D hand and human
pose estimation problem from a single depth map into a voxel-to-voxel
prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood
for each keypoint. We design our model as a 3D CNN that provides accurate
estimates while running in real-time. Our system outperforms previous methods
in almost all publicly available 3D hand and human pose estimation datasets and
placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge.
The code is available at https://github.com/mks0601/V2V-PoseNet_RELEASE.
Comment: HANDS 2017 Challenge Frame-based 3D Hand Pose Estimation Winner (ICCV
2017), Published at CVPR 201
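The voxel-to-voxel idea has two halves: lift the depth points into a 3D occupancy grid, and read each keypoint out of a predicted per-voxel likelihood volume. The sketch below illustrates only those two conversions, not the 3D CNN in between; the grid size, bounds, and function names are illustrative assumptions.

```python
import numpy as np

def voxelize(points, grid_size=32, bounds=(-1.0, 1.0)):
    """Convert a point cloud (reprojected from a depth map) into a
    binary occupancy grid, so the network sees undistorted 3D data
    rather than a perspective-distorted 2D depth image."""
    lo, hi = bounds
    idx = ((points - lo) / (hi - lo) * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

def keypoint_from_likelihood(likelihood, bounds=(-1.0, 1.0)):
    """Recover a 3D keypoint as the centre of the argmax voxel of a
    per-voxel likelihood volume, mapped back to metric coordinates."""
    g = likelihood.shape[0]
    i, j, k = np.unravel_index(np.argmax(likelihood), likelihood.shape)
    lo, hi = bounds
    step = (hi - lo) / g
    return lo + (np.array([i, j, k]) + 0.5) * step
```

Estimating a likelihood per voxel replaces the highly non-linear image-to-coordinate regression the abstract criticises with a prediction that lives in the same voxel space as the input.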
Learning to Refine Human Pose Estimation
Multi-person pose estimation in images and videos is an important yet
challenging task with many applications. Despite the large improvements in
human pose estimation enabled by the development of convolutional neural
networks, there still exist many difficult cases where even the
state-of-the-art models fail to correctly localize all body joints. This
motivates the need for an additional refinement step that addresses these
challenging cases and can be easily applied on top of any existing method. In
this work, we introduce a pose refinement network (PoseRefiner) which takes as
input both the image and a given pose estimate and learns to directly predict a
refined pose by jointly reasoning about the input-output space. In order for
the network to learn to refine incorrect body joint predictions, we employ a
novel data augmentation scheme for training, where we model "hard" human pose
cases. We evaluate our approach on four popular large-scale pose estimation
benchmarks: MPII Single- and Multi-Person Pose Estimation, PoseTrack
Pose Estimation, and PoseTrack Pose Tracking, and report systematic improvement
over the state of the art.
Comment: To appear in CVPRW (2018). Workshop: Visual Understanding of Humans
in Crowd Scene and the 2nd Look Into Person Challenge (VUHCS-LIP)
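The key training trick is the augmentation that manufactures "hard" poses: corrupt a ground-truth pose with the failure modes a real estimator exhibits, then train the refiner to map the corrupted pose back to the clean one. The sketch below is an assumed illustration of such a scheme (the failure modes, probabilities, and the name `corrupt_pose` are not from the paper).

```python
import numpy as np

def corrupt_pose(pose, rng, jitter_px=10.0, swap_prob=0.1, drop_prob=0.1):
    """Simulate typical pose-estimator failure modes on a ground-truth
    pose (K, 2): Gaussian localisation jitter, joint swaps (e.g. left
    and right limbs confused), and dropped detections zeroed out. The
    corrupted pose is the refiner's input; the clean pose its target."""
    noisy = pose + rng.normal(scale=jitter_px, size=pose.shape)
    for a in range(len(pose)):
        if rng.random() < swap_prob:        # swap joint a with a random joint
            b = rng.integers(len(pose))
            noisy[[a, b]] = noisy[[b, a]]
    mask = rng.random(len(pose)) < drop_prob  # missed detections
    noisy[mask] = 0.0
    return noisy
```

Because the refiner only needs an image and a candidate pose, it can sit on top of any existing estimator, which is exactly the plug-in property the abstract emphasises.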
3-D Hand Pose Estimation from Kinect's Point Cloud Using Appearance Matching
We present a novel appearance-based approach for pose estimation of a human
hand using the point clouds provided by the low-cost Microsoft Kinect sensor.
Both the free-hand case, in which the hand is isolated from the surrounding
environment, and the hand-object case, in which the different types of
interactions are classified, have been considered. The hand-object case is
clearly the more challenging of the two, as it must deal with multiple tracks.
approach proposed here belongs to the class of partial pose estimation where
the estimated pose in a frame is used for the initialization of the next one.
The pose estimation is obtained by applying a modified version of the Iterative
Closest Point (ICP) algorithm to synthetic models to obtain the rigid
transformation that aligns each model with respect to the input data. The
proposed framework uses a "pure" point cloud as provided by the Kinect sensor
without any other information such as RGB values or normal vector components.
For this reason, the proposed method can also be applied to data obtained from
other types of depth sensors or RGB-D cameras.
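The core alignment step the abstract describes, fitting a synthetic model to the input cloud with a variant of ICP, reduces to alternating nearest-neighbour matching with a closed-form rigid-transform solve. Below is a minimal generic ICP sketch, not the paper's modified version; the brute-force matching and the Kabsch/SVD solver are standard textbook choices.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution for the rotation R and translation t that
    best align src onto dst (both (N, 3), points in correspondence)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # flip the smallest axis to avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Basic ICP: match each source point to its nearest target point
    (brute force here; a k-d tree in practice), solve for the rigid
    transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Like any ICP-based partial pose estimator, this only converges from a good initial guess, which is why the framework seeds each frame with the pose estimated in the previous one.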
Predicting Out-of-View Feature Points for Model-Based Camera Pose Estimation
In this work we present a novel framework that uses deep learning to predict
object feature points that are out-of-view in the input image. This system was
developed with the application of model-based tracking in mind, particularly in
the case of autonomous inspection robots, where only partial views of the
object are available. Out-of-view prediction is enabled by applying scaling to
the feature point labels during network training. This is combined with a
recurrent neural network architecture designed to provide the final prediction
layers with rich feature information from across the spatial extent of the
input image. To show the versatility of these out-of-view predictions, we
describe how to integrate them in both a particle filter tracker and an
optimisation-based tracker. To evaluate our work we compared our framework with
one that predicts only points inside the image. We show that as the amount of
the object in view decreases, being able to predict outside the image bounds
adds robustness to the final pose estimation.
Comment: Submitted to IROS 201
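The label-scaling trick that enables out-of-view prediction can be illustrated simply: compress normalised label coordinates toward the image centre so that points beyond the borders still fall inside the range the network regresses. The exact transform and the factor of 0.5 below are illustrative assumptions, not the paper's specification.

```python
def scale_labels(points, width, height, scale=0.5):
    """Map pixel-space feature-point labels, possibly outside the image,
    into a [0, 1]-centred range a network can regress. With scale=0.5,
    points up to half an image beyond each border still land in [0, 1]."""
    out = []
    for x, y in points:
        u = (x / width - 0.5) * scale + 0.5
        v = (y / height - 0.5) * scale + 0.5
        out.append((u, v))
    return out

def unscale_labels(points, width, height, scale=0.5):
    """Invert scale_labels back to pixel coordinates, recovering
    out-of-view point locations from the network's predictions."""
    return [(((u - 0.5) / scale + 0.5) * width,
             ((v - 0.5) / scale + 0.5) * height) for u, v in points]
```

Predictions produced in the scaled space are mapped back to pixels with the inverse transform before being fed to the particle-filter or optimisation-based tracker.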