Single camera pose estimation using Bayesian filtering and Kinect motion priors
Traditional approaches to upper body pose estimation using monocular vision
rely on complex body models and a large variety of geometric constraints. We
argue that this is not ideal and somewhat inelegant as it results in large
processing burdens, and instead attempt to incorporate these constraints
through priors obtained directly from training data. A prior distribution
covering the probability of a human pose occurring is used to incorporate
likely human poses. This distribution is obtained offline, by fitting a
Gaussian mixture model to a large dataset of recorded human body poses, tracked
using a Kinect sensor. We combine this prior information with a random walk
transition model to obtain an upper body model, suitable for use within a
recursive Bayesian filtering framework. Our model can be viewed as a mixture of
discrete Ornstein-Uhlenbeck processes, in that states behave as random walks,
but drift towards a set of typically observed poses. This model is combined
with measurements of the human head and hand positions, using recursive
Bayesian estimation to incorporate temporal information. Measurements are
obtained using face detection and a simple skin colour hand detector, trained
using the detected face. The suggested model is designed with analytical
tractability in mind and we show that the pose tracking can be
Rao-Blackwellised using the mixture Kalman filter, allowing for computational
efficiency while still incorporating bio-mechanical properties of the upper
body. In addition, the use of the proposed upper body model allows reliable
three-dimensional pose estimates to be obtained indirectly for a number of
joints that are often difficult to detect using traditional object recognition
strategies. Comparisons with Kinect sensor results and the state of the art in
2D pose estimation highlight the efficacy of the proposed approach.Comment: 25 pages, Technical report, related to Burke and Lasenby, AMDO 2014
conference paper. Code sample: https://github.com/mgb45/SignerBodyPose Video:
https://www.youtube.com/watch?v=dJMTSo7-uF
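The drift-toward-typical-poses transition model above can be sketched compactly. The following is a minimal illustration, not the authors' code: `mu_k`, `alpha`, and the noise scale are assumed values standing in for the mean of one fitted GMM component and its mixing dynamics.

```python
import numpy as np

# Discrete Ornstein-Uhlenbeck-style transition for a single mixture component:
# the state behaves locally as a random walk but drifts toward a typical pose
# mu_k (one GMM component mean). All parameter values are illustrative.

rng = np.random.default_rng(0)

def ou_step(x, mu_k, alpha=0.1, sigma=0.05):
    """One transition: random walk plus drift toward the component mean."""
    return x + alpha * (mu_k - x) + sigma * rng.standard_normal(x.shape)

mu_k = np.array([0.0, 1.0, -0.5])   # typical pose (assumed component mean)
x = np.array([2.0, -1.0, 3.0])      # arbitrary starting pose state
for _ in range(200):
    x = ou_step(x, mu_k)
# After many steps the state hovers near mu_k instead of diffusing away,
# which is the "drift towards typically observed poses" behaviour.
```

Because each component's dynamics stay linear-Gaussian, the mixture structure is what permits the Rao-Blackwellised (mixture Kalman filter) treatment described in the abstract.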
Rule Of Thumb: Deep derotation for improved fingertip detection
We investigate a novel global orientation regression approach for articulated
objects using a deep convolutional neural network. This is integrated with an
in-plane image derotation scheme, DeROT, to tackle the problem of per-frame
fingertip detection in depth images. The method reduces the complexity of
learning in the space of articulated poses which is demonstrated by using two
distinct state-of-the-art learning based hand pose estimation methods applied
to fingertip detection. Significant classification improvements are shown over
the baseline implementation. Our framework involves no tracking, kinematic
constraints or explicit prior model of the articulated object in hand. To
support our approach we also describe a new pipeline for high accuracy magnetic
annotation and labeling of objects imaged by a depth camera. Comment: To be published in proceedings of BMVC 201
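The derotation idea can be illustrated with a toy example. This is a hedged sketch, not the paper's DeROT implementation: given an orientation angle regressed by the network, image coordinates are rotated back so every hand appears in a canonical orientation, shrinking the pose space the fingertip detector must cover. The crop size and angle below are assumptions.

```python
import numpy as np

# In-plane derotation sketch: rotate pixel coordinates by the negated
# predicted orientation about the image centre, so detection then runs
# in a canonical frame. Values are illustrative, not from the paper.

def derotate_points(points_xy, theta, center):
    """Rotate 2D pixel coordinates by -theta about `center`."""
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])
    return (points_xy - center) @ R.T + center

theta = np.pi / 4                      # predicted in-plane orientation
center = np.array([64.0, 64.0])        # centre of an assumed 128x128 crop
tip = np.array([[74.0, 64.0]])         # a fingertip 10 px right of centre
canon = derotate_points(tip, theta, center)
# The fingertip's offset is rotated by -45 degrees about the centre.
```

In practice one would resample the whole depth image (e.g. with an affine warp) rather than individual points; the coordinate transform is the same.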
Motion from Fixation
We study the problem of estimating rigid motion from a sequence of monocular perspective images obtained by navigating around an object while fixating a particular feature point. The motivation comes from the mechanics of the human eye, which either smoothly pursues some fixation point in the scene, or "saccades" between different fixation points. In particular, we are interested in understanding whether fixation helps the process of estimating motion in the sense that it makes it more robust, better conditioned or simpler to solve.
We cast the problem in the framework of "dynamic epipolar geometry", and propose an implicit dynamical model for recursively estimating motion from fixation. This allows us to compare directly the quality of the estimates of motion obtained by imposing the fixation constraint, or by assuming a general rigid motion, simply by changing the geometry of the parameter space while maintaining the same structure of the recursive estimator. We also present a closed-form static solution from two views, and a recursive estimator of the absolute attitude between the viewer and the scene.
One important issue is how the estimates degrade in the presence of disturbances in the tracking procedure. We describe a simple fixation control that converges exponentially, complemented by an image shift-registration step for achieving sub-pixel accuracy, and assess how small deviations from perfect tracking affect the estimates of motion.
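The exponential convergence claim can be seen in a first-order toy model. This sketch is an assumption-laden illustration, not the paper's controller: a proportional correction shrinks the fixation error by a fixed factor each frame, so the error decays geometrically, i.e. exponentially in time.

```python
import numpy as np

# Toy fixation control: e_{t+1} = (1 - gain) * e_t, with 0 < gain < 1.
# The gain value and scalar error model are illustrative assumptions.

def fixate(error, gain=0.5, steps=20):
    """Iterate the error dynamics and record the trajectory."""
    traj = [error]
    for _ in range(steps):
        error = (1.0 - gain) * error
        traj.append(error)
    return np.array(traj)

traj = fixate(8.0)
# With gain = 0.5 the error halves every frame: 8, 4, 2, 1, ...
```

Sub-pixel registration then handles the small residual error that remains once this geometric decay has brought the fixation point close to the image centre.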
Tracking Target Signal Strengths on a Grid using Sparsity
Multi-target tracking is mainly challenged by the nonlinearity present in the
measurement equation, and the difficulty in fast and accurate data association.
To overcome these challenges, the present paper introduces a grid-based model
in which the state captures target signal strengths on a known spatial grid
(TSSG). This model leads to linear state and measurement equations,
which bypass data association and can afford state estimation via
sparsity-aware Kalman filtering (KF). Leveraging the grid-induced sparsity of
the novel model, two types of sparsity-cognizant TSSG-KF trackers are
developed: one effects sparsity through ℓ1-norm regularization, and the
other invokes sparsity as an extra measurement. Iterative extended KF and
Gauss-Newton algorithms are developed for reduced-complexity tracking, along
with accurate error covariance updates for assessing performance of the
resultant sparsity-aware state estimators. Based on TSSG state estimates, more
informative target position and track estimates can be obtained in a follow-up
step, ensuring that track association and position estimation errors do not
propagate back into TSSG state estimates. The novel TSSG trackers do not
require knowing the number of targets or their signal strengths, and exhibit
considerably lower complexity than the benchmark hidden Markov model filter,
especially for a large number of targets. Numerical simulations demonstrate
that sparsity-cognizant trackers enjoy improved root mean-square error
performance at reduced complexity when compared to their sparsity-agnostic
counterparts. Comment: Submitted to IEEE Trans. on Signal Processin
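The ℓ1-regularization route can be sketched with its proximal operator. This is a hedged illustration of the general idea, not the paper's TSSG-KF algorithm: after a standard Kalman update, soft-thresholding zeroes small grid entries, reflecting that only a few cells carry nonzero target signal strength. The threshold and state values are assumptions.

```python
import numpy as np

# Soft-thresholding: the proximal operator of lam * ||x||_1. Applied to a
# Kalman posterior mean over the grid, it enforces the expected sparsity
# of target signal strengths. Values below are illustrative.

def soft_threshold(x, lam):
    """Shrink entries toward zero by lam; entries below lam become 0."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x_kf = np.array([0.02, 1.30, -0.05, 0.00, 0.85])  # noisy posterior mean
x_sparse = soft_threshold(x_kf, lam=0.1)
# Small entries are zeroed; large ones shrink by lam.
```

The alternative in the abstract, treating sparsity as an extra (pseudo-)measurement, would instead fold the constraint into the filter's measurement update rather than applying it afterwards.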
Motion Imitation Based on Sparsely Sampled Correspondence
Existing techniques for motion imitation often suffer from latency due to their
computational overhead or the large set of correspondence samples they must
search. To achieve real-time imitation with low latency, we
present a framework in this paper to reconstruct motion on humanoids based on
sparsely sampled correspondence. The imitation problem is formulated as finding
the projection of a point from the configuration space of a human's poses into
the configuration space of a humanoid. An optimal projection is defined as the
one that minimizes a back-projected deviation among a group of candidates,
which can be determined in a very efficient way. Benefiting from this
formulation, effective projections can be obtained by using sparse
correspondence. Methods for generating these sparse correspondence samples have
also been introduced. Our method is evaluated by applying human motion
captured by an RGB-D sensor to a humanoid in real time. Continuous motion can be
realized and used in the example application of tele-operation. Comment: 8 pages, 8 figures, technical repor
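The candidate-selection step above can be sketched directly. This is a minimal illustration under stated assumptions, not the paper's retargeting model: each stored correspondence sample pairs a humanoid configuration with the human pose it back-projects to, and we pick the candidate whose back-projection deviates least from the observed pose.

```python
import numpy as np

# Pick the humanoid configuration whose back-projected human pose is
# closest to the observed one. The sample pairs and pose vectors are
# illustrative placeholders, not real correspondence data.

def best_candidate(human_pose, samples):
    """samples: list of (humanoid_cfg, back_projected_human_pose) pairs."""
    devs = [np.linalg.norm(human_pose - back) for _, back in samples]
    return samples[int(np.argmin(devs))][0]

human = np.array([0.1, 0.9, -0.3])
samples = [
    (np.array([0.0, 1.0]), np.array([0.5, 0.5, 0.0])),    # candidate A
    (np.array([0.2, 0.8]), np.array([0.1, 0.85, -0.25])), # candidate B
]
cfg = best_candidate(human, samples)  # B's back-projection deviates least
```

Because only a sparse set of candidates is scored, the search stays cheap enough for the real-time operation the abstract targets.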