Single camera pose estimation using Bayesian filtering and Kinect motion priors
Traditional approaches to upper body pose estimation using monocular vision
rely on complex body models and a large variety of geometric constraints. We
argue that this is inelegant, as it imposes a large processing burden, and
instead incorporate these constraints through priors obtained directly from
training data. A prior distribution over human poses is used to bias estimates
towards likely configurations. This distribution is obtained offline by fitting a
Gaussian mixture model to a large dataset of recorded human body poses, tracked
using a Kinect sensor. We combine this prior information with a random walk
transition model to obtain an upper body model, suitable for use within a
recursive Bayesian filtering framework. Our model can be viewed as a mixture of
discrete Ornstein-Uhlenbeck processes, in that states behave as random walks,
but drift towards a set of typically observed poses. This model is combined
with measurements of the human head and hand positions, using recursive
Bayesian estimation to incorporate temporal information. Measurements are
obtained using face detection and a simple skin colour hand detector, trained
using the detected face. The suggested model is designed with analytical
tractability in mind and we show that the pose tracking can be
Rao-Blackwellised using the mixture Kalman filter, allowing for computational
efficiency while still incorporating bio-mechanical properties of the upper
body. In addition, the use of the proposed upper body model allows reliable
three-dimensional pose estimates to be obtained indirectly for a number of
joints that are often difficult to detect using traditional object recognition
strategies. Comparisons with Kinect sensor results and the state of the art in
2D pose estimation highlight the efficacy of the proposed approach.
Comment: 25 pages, technical report, related to the Burke and Lasenby AMDO 2014 conference paper. Code sample: https://github.com/mgb45/SignerBodyPose Video: https://www.youtube.com/watch?v=dJMTSo7-uF
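The combination of an offline pose prior and a drift-towards-prior transition can be sketched compactly. The snippet below is a minimal illustration, not the authors' released code: it fits a Gaussian mixture to recorded Kinect poses and takes one Ornstein-Uhlenbeck-style transition step. The file name, component count, decay rate and noise level are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training set: N upper-body poses recorded with a Kinect,
# each flattened to a D-dimensional joint-position vector.
poses = np.load("kinect_poses.npy")  # shape (N, D); placeholder file name

# Offline step: fit a Gaussian mixture model as the pose prior.
gmm = GaussianMixture(n_components=8, covariance_type="full").fit(poses)

def transition(x, rho=0.9, noise_std=0.01, rng=np.random):
    """One step of a random walk that drifts towards likely poses.

    The state decays towards the mean of the most responsible mixture
    component, mimicking a discrete Ornstein-Uhlenbeck process.
    """
    k = gmm.predict(x[None, :])[0]  # most likely mixture component
    mu = gmm.means_[k]
    return rho * x + (1.0 - rho) * mu + noise_std * rng.standard_normal(x.shape)
```

Because the prior is a Gaussian mixture and the transition is linear-Gaussian per component, conditioning on the component index leaves a bank of Kalman filters, which is what makes the Rao-Blackwellised mixture Kalman filter applicable.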
Key-Pose Prediction in Cyclic Human Motion
In this paper we study the problem of estimating inner-cyclic time intervals
within repetitive motion sequences of top-class swimmers in a swimming channel.
Interval limits are given by temporal occurrences of key-poses, i.e.
distinctive postures of the body. A key-pose is defined by means of only one or
two specific features of the complete posture. It is often difficult to detect
such subtle features directly. We therefore propose the following method: Given
that we observe the swimmer from the side, we build a pictorial structure of
poselets to robustly identify random support poses within the regular motion of
a swimmer. We formulate a maximum likelihood model which predicts a key-pose
given the occurrences of multiple support poses within one stroke. This
maximum-likelihood model can be extended with prior knowledge about the temporal location of
a key-pose in order to improve the prediction recall. We experimentally show
that our models reliably and robustly detect key-poses with a high precision
and that their performance can be improved by extending the framework with
additional camera views.
Comment: Accepted at WACV 2015, 8 pages, 3 figures
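The prediction step lends itself to a short worked sketch. Assuming, purely for illustration, that each support pose i is detected at frame t_i and carries a learned Gaussian offset (mu_i, var_i) to the key-pose, the maximum-likelihood key-pose frame is the precision-weighted mean of the individual votes:

```python
import numpy as np

# Hypothetical learned statistics: for each support pose i, the temporal
# offset to the key-pose is modelled as Gaussian with mean mu[i] and
# variance var[i] (in frames), estimated from annotated strokes.
mu = np.array([12.0, 5.0, -3.0])
var = np.array([4.0, 2.5, 6.0])

def predict_keypose(t_support):
    """Maximum-likelihood key-pose frame from detected support-pose frames.

    Each detection i votes for t_support[i] + mu[i]; under independent
    Gaussian offsets the ML estimate is the precision-weighted mean.
    """
    votes = np.asarray(t_support) + mu
    w = 1.0 / var
    return float(np.sum(w * votes) / np.sum(w))

print(predict_keypose([100, 107, 116]))  # ~112 frames into the stroke
```

A temporal prior on the key-pose location, as the abstract suggests, could be folded in as one additional weighted vote.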
GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB
We address the highly challenging problem of real-time 3D hand tracking based
on a monocular RGB-only sequence. Our tracking method combines a convolutional
neural network with a kinematic 3D hand model, such that it generalizes well to
unseen data, is robust to occlusions and varying camera viewpoints, and leads
to anatomically plausible as well as temporally smooth hand motions. For
training our CNN we propose a novel approach for the synthetic generation of
training data that is based on a geometrically consistent image-to-image
translation network. To be more specific, we use a neural network that
translates synthetic images to "real" images, such that the generated images
follow the same statistical distribution as real-world hand images. For
training this translation network we combine an adversarial loss and a
cycle-consistency loss with a geometric consistency loss in order to preserve
geometric properties (such as hand pose) during translation. We demonstrate
that our hand tracking system outperforms the current state-of-the-art on
challenging RGB-only footage.
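The combined translation objective can be sketched as follows. This is an illustrative PyTorch sketch, not the paper's implementation: the module names (G, F_inv, D_real, P) and loss weights are placeholders, and the geometric-consistency term is approximated here as agreement between hand poses regressed from the synthetic image and from its translation.

```python
import torch
import torch.nn.functional as F

def translation_loss(G, F_inv, D_real, P, synth, lam_cyc=10.0, lam_geo=1.0):
    """Combined objective for the synthetic-to-real translation step.

    G      translates synthetic -> "real" images
    F_inv  translates back (for cycle consistency)
    D_real discriminator on the real-image domain
    P      fixed hand-pose regressor enforcing geometric consistency
    All names and weights here are assumptions, not the paper's API.
    """
    fake = G(synth)

    # Adversarial term: translated images should fool the discriminator.
    logits = D_real(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Cycle-consistency term: translating back should recover the input.
    cyc = F.l1_loss(F_inv(fake), synth)

    # Geometric-consistency term: the hand pose must survive translation.
    geo = F.l1_loss(P(fake), P(synth))

    return adv + lam_cyc * cyc + lam_geo * geo
```

Keeping the pose regressor P fixed during translation training is one natural way to stop the generator from drifting the hand geometry while restyling the image.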
Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ScatterNet Hybrid Deep Learning Network
Drone systems have been deployed by various law enforcement agencies to
monitor hostiles, spy on foreign drug cartels, conduct border control
operations, etc. This paper introduces a real-time drone surveillance system to
identify violent individuals in public areas. The system first uses the Feature
Pyramid Network to detect humans from aerial images. The image region with the
human is used by the proposed ScatterNet Hybrid Deep Learning (SHDL) network
for human pose estimation. The orientations between the limbs of the estimated
pose are next used to identify the violent individuals. The proposed deep
network can learn meaningful representations quickly using ScatterNet and
structural priors with relatively few labeled examples. The system detects
the violent individuals in real-time by processing the drone images in the
cloud. This research also introduces the aerial violent individual dataset used
to train the deep network, which may encourage researchers interested in
applying deep learning to aerial surveillance. The pose estimation and
violent-individual identification performance is compared with
state-of-the-art techniques.
Comment: To appear in the Efficient Deep Learning for Computer Vision (ECV) workshop at IEEE Computer Vision and Pattern Recognition (CVPR) 2018. YouTube demo: https://www.youtube.com/watch?v=zYypJPJipY
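Since classification is driven by the orientations between limbs of the estimated pose, a feature extractor along these lines is easy to imagine. The sketch below is an assumption about the feature design, not the paper's code; the joint names and limb pairs are hypothetical.

```python
import numpy as np

# Hypothetical keypoint layout: each pose is a dict of 2D joint positions
# returned by the pose estimator; limbs are (parent, child) joint pairs.
LIMBS = [("shoulder_r", "elbow_r"), ("elbow_r", "wrist_r"),
         ("shoulder_l", "elbow_l"), ("elbow_l", "wrist_l")]

def limb_orientations(pose):
    """Angle of each limb vector in degrees; these per-limb orientations
    serve as features for a downstream violent/non-violent classifier."""
    feats = []
    for a, b in LIMBS:
        dx, dy = np.subtract(pose[b], pose[a])
        feats.append(np.degrees(np.arctan2(dy, dx)))
    return np.array(feats)

pose = {"shoulder_r": (0, 0), "elbow_r": (1, 1), "wrist_r": (2, 0),
        "shoulder_l": (0, 0), "elbow_l": (-1, 1), "wrist_l": (-2, 0)}
print(limb_orientations(pose))  # [45., -45., 135., -135.]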
Video Object Segmentation Without Temporal Information
Video Object Segmentation, and video processing in general, has been
historically dominated by methods that rely on the temporal consistency and
redundancy in consecutive video frames. When the temporal smoothness is
suddenly broken, such as when an object is occluded, or some frames are missing
in a sequence, the result of these methods can deteriorate significantly or
they may not even produce any result at all. This paper explores the orthogonal
approach of processing each frame independently, i.e., disregarding the temporal
information. In particular, it tackles the task of semi-supervised video object
segmentation: the separation of an object from the background in a video, given
its mask in the first frame. We present Semantic One-Shot Video Object
Segmentation (OSVOS-S), based on a fully-convolutional neural network
architecture that is able to successively transfer generic semantic
information, learned on ImageNet, to the task of foreground segmentation, and
finally to learning the appearance of a single annotated object of the test
sequence (hence one shot). We show that instance level semantic information,
when combined effectively, can dramatically improve the results of our previous
method, OSVOS. We perform experiments on two recent video segmentation
databases, which show that OSVOS-S is both the fastest and most accurate method
in the state of the art.
Comment: Accepted to T-PAMI. Extended version of "One-Shot Video Object Segmentation", CVPR 2017 (arXiv:1611.05198). Project page: http://www.vision.ee.ethz.ch/~cvlsegmentation/osvos
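The one-shot adaptation step, fine-tuning a pretrained fully-convolutional network on the single annotated frame and then segmenting every other frame independently, can be sketched as follows. The network, step count and learning rate here are placeholders, not the released OSVOS-S configuration.

```python
import torch

def one_shot_finetune(net, first_frame, first_mask, steps=200, lr=1e-4):
    """Adapt a pretrained segmentation network to one annotated object.

    `net` is any fully-convolutional model producing per-pixel logits
    (a placeholder, not the released OSVOS-S architecture). After
    fine-tuning on the single labelled frame, the network is applied to
    every remaining frame independently, with no temporal information.
    """
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    net.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(first_frame), first_mask)
        loss.backward()
        opt.step()
    net.eval()
    return net

# Usage sketch: once adapted, each frame is segmented on its own.
# masks = [(net(f).sigmoid() > 0.5) for f in video_frames]
```

Because no temporal model is involved, an occlusion or a missing frame affects only that frame's result, which is the robustness argument the abstract makes.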
RGB-D Datasets Using Microsoft Kinect or Similar Sensors: A Survey
RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, with those of the depth image, which is immune to variations in color, illumination, rotation angle and scale. With the invention of the low-cost Microsoft Kinect sensor, which was initially used for gaming and later became a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, which are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.
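For readers working with such data, the basic step of turning a Kinect-style depth image into a 3D point cloud is worth a small sketch. The pinhole back-projection below is standard; the intrinsics in the usage note are approximate Kinect v1 defaults and should be replaced with the sensor's actual calibration.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to 3D points using the
    pinhole camera model. Intrinsics are parameters here because they
    vary per sensor and must come from calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z]).reshape(-1, 3)

# Usage with approximate Kinect v1 intrinsics (assumed values):
# cloud = depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```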
