Simple Baselines for Human Pose Estimation and Tracking
There has been significant progress on pose estimation and increasing
interest in pose tracking in recent years. At the same time, overall
algorithm and system complexity has increased as well, making algorithm
analysis and comparison more difficult. This work provides simple and
effective baseline methods that are helpful for inspiring and evaluating new
ideas in the field. State-of-the-art results are achieved on challenging
benchmarks. The code will be available at
https://github.com/leoxiaobin/pose.pytorch.
Comment: Accepted by ECCV 2018
Multi-Domain Pose Network for Multi-Person Pose Estimation and Tracking
Multi-person human pose estimation and tracking in the wild is important and
challenging. For training a powerful model, large-scale training data are
crucial. While there are several datasets for human pose estimation, best
practices for multi-dataset training have not been investigated. In this
paper, we present a simple network called Multi-Domain Pose Network (MDPN) to
address this problem. By treating the task as multi-domain learning, our
method learns a better representation for pose prediction. Together with
prediction-head fine-tuning and multi-branch combination, it shows significant
improvement over baselines and achieves the best performance on the PoseTrack
ECCV 2018 Challenge without any datasets other than MPII and COCO.
Comment: Extended abstract for the ECCV 2018 PoseTrack Workshop
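The multi-domain idea the abstract describes — one shared representation with a separate prediction head per dataset — can be sketched roughly as follows. The dense stand-in for the CNN backbone, the feature size, and all names are illustrative assumptions, not details from the paper; only the per-dataset keypoint counts (16 for MPII, 17 for COCO) come from the datasets themselves.

```python
import numpy as np

# Illustrative sketch of multi-domain learning for pose estimation:
# a shared backbone feeds one prediction head per dataset ("domain"),
# since MPII (16 keypoints) and COCO (17 keypoints) define different
# keypoint sets. The dense "backbone" is a stand-in for a CNN.

rng = np.random.default_rng(0)

FEAT_DIM = 64
DOMAIN_KEYPOINTS = {"mpii": 16, "coco": 17}  # keypoints per dataset

W_backbone = rng.normal(size=(128, FEAT_DIM))
heads = {name: rng.normal(size=(FEAT_DIM, k))
         for name, k in DOMAIN_KEYPOINTS.items()}

def forward(x, domain):
    """Shared representation, then the domain-specific head."""
    feat = np.maximum(x @ W_backbone, 0.0)   # ReLU features
    return feat @ heads[domain]              # per-keypoint scores

x = rng.normal(size=(2, 128))                # a batch of 2 "images"
print(forward(x, "mpii").shape)              # (2, 16)
print(forward(x, "coco").shape)              # (2, 17)
```

Only the backbone weights are shared across domains, which is what lets training on both datasets improve the common representation.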
PoseTrack: A Benchmark for Human Pose Estimation and Tracking
Human poses and motions are important cues for analysis of videos with people
and there is strong evidence that representations based on body pose are highly
effective for a variety of tasks such as activity recognition, content
retrieval and social signal processing. In this work, we aim to further advance
the state of the art by establishing "PoseTrack", a new large-scale benchmark
for video-based human pose estimation and articulated tracking, and bringing
together the community of researchers working on visual human analysis. The
benchmark encompasses three competition tracks focusing on i) single-frame
multi-person pose estimation, ii) multi-person pose estimation in videos, and
iii) multi-person articulated tracking. To facilitate the benchmark and
challenge we collect, annotate and release a new large-scale benchmark dataset
that features videos with multiple people labeled with person tracks and
articulated pose. A centralized evaluation server is provided to allow
participants to evaluate on a held-out test set. We envision that the proposed
benchmark will stimulate productive research both by providing a large and
representative training dataset as well as providing a platform to objectively
evaluate and compare the proposed methods. The benchmark is freely accessible
at https://posetrack.net.
Comment: www.posetrack.net
Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs
We address the problem of making human motion capture in the wild more
practical by using a small set of inertial sensors attached to the body. Since
the problem is heavily under-constrained, previous methods either use a large
number of sensors, which is intrusive, or they require additional video input.
We take a different approach and constrain the problem by: (i) making use of a
realistic statistical body model that includes anthropometric constraints and
(ii) using a joint optimization framework to fit the model to orientation and
acceleration measurements over multiple frames. The resulting tracker Sparse
Inertial Poser (SIP) enables 3D human pose estimation using only 6 sensors
(attached to the wrists, lower legs, back and head) and works for arbitrary
human motions. Experiments on the recently released TNT15 dataset show that,
using the same number of sensors, SIP achieves higher accuracy than the dataset
baseline without using any video data. We further demonstrate the effectiveness
of SIP on newly recorded challenging motions in outdoor scenarios such as
climbing or jumping over a wall.
Comment: 12 pages, Accepted at Eurographics 2017
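The multi-frame joint optimization the abstract relies on can be illustrated with a 1-D toy problem: recover a trajectory whose second differences match measured accelerations, with a weak quadratic prior standing in for the body model's regularization. This is only a sketch of the fitting idea under simplified assumptions; SIP itself fits a statistical 3D body model to IMU orientations and accelerations, not scalar positions.

```python
import numpy as np

# Toy 1-D sketch of multi-frame fitting (illustrative only; SIP fits a
# statistical body model to IMU orientation + acceleration data).
# Acceleration alone leaves the linear trend of a trajectory
# unconstrained, so a weak prior is needed to make the fit unique --
# analogous to the under-constrained setting the abstract describes.

T = 50
t = np.linspace(0.0, 2.0 * np.pi, T)
true_x = np.sin(t)                              # ground-truth trajectory

# Second-difference operator: (D @ x)[i] approximates acceleration.
D = np.zeros((T - 2, T))
for i in range(T - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

rng = np.random.default_rng(0)
a_meas = D @ true_x + 1e-3 * rng.normal(size=T - 2)  # noisy "IMU" input

lam = 1e-6                                      # prior weight (assumed)
A = D.T @ D + lam * np.eye(T)                   # normal equations
x_hat = np.linalg.solve(A, D.T @ a_meas)        # joint fit over all frames

residual = np.mean(np.abs(D @ x_hat - a_meas))
print(f"mean acceleration residual: {residual:.2e}")
```

Solving over all frames at once is what distinguishes this from per-frame filtering: each measurement constrains the whole trajectory through the coupled linear system.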
Context-aware Human Motion Prediction
The problem of predicting human motion given a sequence of past observations
is at the core of many applications in robotics and computer vision. Current
state-of-the-art methods formulate this problem as a sequence-to-sequence
task, in which a history of 3D skeletons feeds a Recurrent Neural Network
(RNN) that predicts future movements, typically on the order of 1 to 2
seconds. However, one aspect that has so far been overlooked is the fact that
human motion is
inherently driven by interactions with objects and/or other humans in the
environment. In this paper, we explore this scenario using a novel
context-aware motion prediction architecture. We use a semantic-graph model
where the nodes parameterize the human and objects in the scene and the edges
their mutual interactions. These interactions are iteratively learned through a
graph attention layer, fed with the past observations, which now include both
object and human body motions. Once this semantic graph is learned, we inject
it into a standard RNN to predict future movements of the humans and objects.
We consider two variants of our architecture, either freezing the contextual
interactions in the future or updating them. A thorough evaluation on the
"Whole-Body Human Motion Database" shows that in both cases our context-aware
networks clearly outperform baselines in which the context information is not
considered.
Comment: Accepted at CVPR 2020
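The pipeline the abstract outlines — attention over a semantic graph of human and object nodes, whose aggregated output is fed to a standard RNN — can be sketched in miniature as below. All weights, shapes, and names are illustrative assumptions, not the paper's actual layer.

```python
import numpy as np

# Minimal sketch of one graph-attention aggregation over scene nodes
# followed by a single RNN step, in the spirit of the architecture the
# abstract describes. Everything here is a stand-in for illustration.

rng = np.random.default_rng(1)
DIM = 8
nodes = rng.normal(size=(3, DIM))      # node 0: human; nodes 1-2: objects

Wq, Wk, Wv = (rng.normal(size=(DIM, DIM)) for _ in range(3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Attention of the human node over all scene nodes: the attention
# weights play the role of the learned edge interactions.
q = nodes[0] @ Wq
scores = (nodes @ Wk) @ q / np.sqrt(DIM)
alpha = softmax(scores)                # interaction weights, sum to 1
context = alpha @ (nodes @ Wv)         # aggregated context feature

# One vanilla-RNN step consuming the context, standing in for the
# "inject it into a standard RNN" stage of the pipeline.
Wh, Wx = rng.normal(size=(DIM, DIM)), rng.normal(size=(DIM, DIM))
h = np.tanh(np.zeros(DIM) @ Wh + context @ Wx)
print(alpha.shape, h.shape)
```

Because the context vector mixes object and human node features, the downstream RNN's predictions depend on the scene, which is the core of the context-aware claim.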