Combining Particle Filter and Population-based Metaheuristics for Visual Articulated Motion Tracking
Visual tracking of articulated motion is a complex task with high computational costs. Because articulated objects are usually represented as a set of linked limbs, tracking is performed with the support of a model. Model-based tracking makes it straightforward to determine object pose and to handle occlusions. However, articulated models generate a multidimensional state space, and tracking therefore becomes computationally very expensive or even infeasible. Due to the dynamic nature of the problem, sequential estimation algorithms such as particle filters are usually applied to visual tracking. Unfortunately, particle filters fail in high-dimensional estimation problems such as articulated-object or multiple-object tracking. These problems are called dynamic optimization problems. Metaheuristics, which are high-level general strategies for designing heuristic procedures, have emerged as a way to explore the search space of many real-world combinatorial problems efficiently and effectively. Path relinking (PR) and scatter search (SS) are evolutionary metaheuristics successfully applied to several hard optimization problems. The PRPF and SSPF algorithms hybridize the particle filter with these two population-based metaheuristic schemes, respectively. In this paper, we present and compare the two resulting hybrid algorithms, the Path Relinking Particle Filter (PRPF) and the Scatter Search Particle Filter (SSPF), applied to 2D human motion tracking. Experimental results show that the proposed algorithms improve the performance of standard particle filters.
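The hybridization the abstract describes can be illustrated with a minimal numpy sketch: a scatter-search-style recombination of an elite reference set is interleaved with the usual particle-filter reweighting and resampling. The toy Gaussian `likelihood`, the function name `sspf_step`, and the elite-set size are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(state):
    # Hypothetical observation model with its peak at the origin ("true pose").
    return np.exp(-0.5 * np.sum(state ** 2, axis=-1))

def sspf_step(particles, rng, n_elite=10):
    """One hybrid step in the spirit of SSPF: build a reference set of
    high-likelihood particles, recombine random pairs of them (the scatter
    search flavour), keep improving candidates, then reweight and resample
    as in a standard particle filter. Illustrative sketch only."""
    w = likelihood(particles)
    elite = particles[np.argsort(w)[-n_elite:]]            # reference set
    n = len(particles)
    i = rng.integers(0, n_elite, size=n)
    j = rng.integers(0, n_elite, size=n)
    alpha = rng.uniform(0.0, 1.0, size=(n, 1))
    candidates = alpha * elite[i] + (1.0 - alpha) * elite[j]
    better = likelihood(candidates) > likelihood(particles)
    particles = np.where(better[:, None], candidates, particles)
    w = likelihood(particles)
    w = w / w.sum()
    return particles[rng.choice(n, size=n, p=w)]           # resampling

particles = rng.normal(scale=3.0, size=(200, 4))
initial = particles.copy()
for _ in range(5):
    particles = sspf_step(particles, rng)
```

After a few steps the population concentrates around the likelihood peak, which is the effect the hybrid schemes exploit in high-dimensional pose spaces.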
A Multi-body Tracking Framework -- From Rigid Objects to Kinematic Structures
Kinematic structures are very common in the real world. They range from
simple articulated objects to complex mechanical systems. However, despite
their relevance, most model-based 3D tracking methods only consider rigid
objects. To overcome this limitation, we propose a flexible framework that
allows the extension of existing 6DoF algorithms to kinematic structures. Our
approach focuses on methods that employ Newton-like optimization techniques,
which are widely used in object tracking. The framework considers both
tree-like and closed kinematic structures and allows a flexible configuration
of joints and constraints. To project equations from individual rigid bodies to
a multi-body system, Jacobians are used. For closed kinematic chains, a novel
formulation that features Lagrange multipliers is developed. In a detailed
mathematical proof, we show that our constraint formulation leads to an exact
kinematic solution and converges in a single iteration. Based on the proposed
framework, we extend ICG, which is a state-of-the-art rigid object tracking
algorithm, to multi-body tracking. For the evaluation, we create a
highly-realistic synthetic dataset that features a large number of sequences
and various robots. Based on this dataset, we conduct a wide variety of
experiments that demonstrate the excellent performance of the developed
framework and our multi-body tracker.
Comment: Submitted to IEEE Transactions on Pattern Analysis and Machine
Intelligence
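The core projection step the abstract mentions, mapping per-rigid-body equations into joint space via Jacobians, can be sketched with a planar 2-link chain. The link lengths, the damped Gauss-Newton step, and all function names here are illustrative assumptions, not the ICG extension itself.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths of a hypothetical planar 2-link chain

def forward(theta):
    # End-effector position of the 2-link chain (standard kinematics).
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t12),
                     L1 * np.sin(t1) + L2 * np.sin(t12)])

def jacobian(theta):
    # d(end-effector)/d(joint angles) -- the projection matrix J.
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t12), -L2 * np.sin(t12)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t12),  L2 * np.cos(t12)]])

def newton_step(theta, target, damping=1e-3):
    """Project the rigid-body residual and Gauss-Newton Hessian into joint
    space with J, then take one damped Newton step on the joint angles."""
    J = jacobian(theta)
    residual = forward(theta) - target            # per-body error
    g = J.T @ residual                            # joint-space gradient
    H = J.T @ J + damping * np.eye(2)             # joint-space Hessian
    return theta - np.linalg.solve(H, g)

theta = np.array([0.3, 0.5])
target = np.array([1.2, 0.8])
for _ in range(50):
    theta = newton_step(theta, target)
```

The same `J.T @ g` / `J.T @ H @ J` pattern generalizes to full 6DoF bodies in a kinematic tree; the closed-chain Lagrange-multiplier formulation in the paper goes beyond this sketch.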
Learning Articulated Motions From Visual Demonstration
Many functional elements of human homes and workplaces consist of rigid
components which are connected through one or more sliding or rotating
linkages. Examples include doors and drawers of cabinets and appliances;
laptops; and swivel office chairs. A robotic mobile manipulator would benefit
from the ability to acquire kinematic models of such objects from observation.
This paper describes a method by which a robot can acquire an object model by
capturing depth imagery of the object as a human moves it through its range of
motion. We envision that, in the future, a machine newly introduced to an
environment could be shown the articulated objects particular to that
environment by its human user, inferring from these "visual demonstrations"
enough information to actuate each object independently of the user.
Our method employs sparse (markerless) feature tracking, motion segmentation,
component pose estimation, and articulation learning; it does not require prior
object models. Using the method, a robot can observe an object being exercised,
infer a kinematic model incorporating rigid, prismatic and revolute joints,
then use the model to predict the object's motion from a novel vantage point.
We evaluate the method's performance, and compare it to that of a previously
published technique, for a variety of household objects.Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN:
978-0-9923747-0-
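The articulation-learning stage described above decides, from observed relative motion between two segmented parts, whether the connecting joint is revolute or prismatic. A toy stand-in for that decision can be sketched as follows; the thresholding heuristic and the function name `classify_joint` are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def classify_joint(rel_angles, rel_translations, tol=1e-2):
    """Heuristic joint classification from a sequence of relative motions
    between two tracked rigid parts: significant rotation -> revolute,
    significant translation -> prismatic, neither -> rigid attachment."""
    rot_range = np.ptp(rel_angles)                                # radians
    trans_range = np.ptp(np.linalg.norm(rel_translations, axis=1))
    if rot_range > tol:
        return "revolute"
    if trans_range > tol:
        return "prismatic"
    return "rigid"

# A door-like observation: the angle sweeps, the offset stays fixed.
door = classify_joint(np.linspace(0.0, 1.2, 20), np.zeros((20, 3)))

# A drawer-like observation: no rotation, translation along one axis.
drawer = classify_joint(np.zeros(20),
                        np.outer(np.linspace(0.0, 0.4, 20), [1.0, 0.0, 0.0]))
```

A real system would additionally fit the joint axis and origin (e.g. by least squares over the part poses), which this sketch omits.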
Skeleton Driven Non-rigid Motion Tracking and 3D Reconstruction
This paper presents a method that tracks and 3D-reconstructs the non-rigid
surface motion of a human performance using a moving RGB-D camera. 3D
reconstruction of marker-less human performance is a challenging problem due to
the large range of articulated motions and considerable non-rigid deformations.
Current approaches use local optimization for tracking. These methods need many
iterations to converge and may get stuck in local minima during sudden
articulated movements. We propose a puppet model-based tracking approach using
skeleton prior, which provides a better initialization for tracking articulated
movements. The proposed approach uses an aligned puppet model to estimate
correct correspondences for human performance capture. We also contribute a
synthetic dataset which provides ground truth locations for frame-by-frame
geometry and skeleton joints of human subjects. Experimental results show that
our approach is more robust when faced with sudden articulated motions, and
provides better 3D reconstruction compared to the existing state-of-the-art
approaches.
Comment: Accepted in DICTA 201
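A skeleton/puppet prior of the kind described above typically supplies an initial surface pose via a skinning model before non-rigid refinement. The sketch below shows classic linear blend skinning; the shapes and names are assumptions for illustration and not the paper's code.

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_T):
    """Deform rest-pose vertices (V, 3) with per-vertex bone weights (V, B)
    and homogeneous bone transforms (B, 4, 4) -- the kind of skeleton-driven
    initialization a puppet model can provide for articulated tracking."""
    homo = np.hstack([rest_verts, np.ones((len(rest_verts), 1))])  # (V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_T, homo)              # (B, V, 4)
    deformed = np.einsum('vb,bvi->vi', weights, per_bone)          # (V, 4)
    return deformed[:, :3]

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.0, 1.0]])   # each vertex bound to one bone
bone_T = np.stack([np.eye(4), np.eye(4)])
bone_T[1, 0, 3] = 1.0                          # bone 1 translates +1 in x
posed = linear_blend_skinning(verts, weights, bone_T)
```

Vertices bound to the identity bone stay put, while those bound to the translated bone move with it, giving the refinement stage a well-aligned starting surface.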
Skeleton-aided Articulated Motion Generation
This work makes the first attempt to generate an articulated human motion
sequence from a single image. On the one hand, we utilize paired inputs,
including human skeleton information as a motion embedding and a single human
image as an appearance reference, to generate novel motion frames based on the
conditional GAN infrastructure. On the other hand, a triplet loss is employed
to encourage appearance smoothness between consecutive frames. As the proposed
framework is capable of jointly exploiting the image appearance space and the
articulated/kinematic motion space, it generates realistic articulated motion
sequences, in contrast to most previous video generation methods, which yield
blurred motion effects. We test our model on two human action datasets,
KTH and Human3.6M, and the proposed framework generates very
promising results on both datasets.
Comment: ACM MM 201
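The appearance-smoothness triplet loss mentioned in the abstract can be written down compactly: consecutive frames (anchor, positive) should be closer in feature space than temporally distant frames (negative), by at least a margin. This numpy sketch shows only the loss shape, not the paper's training code; the feature dimensions and margin value are assumptions.

```python
import numpy as np

def triplet_smoothness_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss over frame features (N, D): penalize cases
    where the negative is not at least `margin` farther than the positive."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

a = np.zeros((4, 8))          # anchor frame features
p = np.zeros((4, 8))          # consecutive-frame features (identical here)
n = np.full((4, 8), 2.0)      # distant-frame features
```

When the negative is far away the hinge is inactive and the loss is zero; in the degenerate case where the negative equals the positive, the loss reduces to the margin.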