DeepPose: Human Pose Estimation via Deep Neural Networks
We propose a method for human pose estimation based on Deep Neural Networks
(DNNs). The pose estimation is formulated as a DNN-based regression problem
towards body joints. We present a cascade of such DNN regressors which results
in high precision pose estimates. The approach has the advantage of reasoning
about pose in a holistic fashion and has a simple but yet powerful formulation
which capitalizes on recent advances in Deep Learning. We present a detailed
empirical analysis with state-of-the-art or better performance on four academic
benchmarks of diverse real-world images.
Comment: IEEE Conference on Computer Vision and Pattern Recognition, 201
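The cascade described above can be sketched as follows: a first regressor predicts absolute joint coordinates, and each subsequent stage predicts a refinement given the current estimate. This is an illustrative toy, assuming simple callable "regressors" in place of the paper's trained DNNs; the feature vector, stage functions, and dimensions are hypothetical.

```python
import numpy as np

def cascade_pose_estimate(image_feats, regressors):
    """Apply a cascade of regressors: stage 0 predicts absolute joint
    coordinates, later stages predict additive refinements (toy sketch,
    not the paper's DNN architecture)."""
    pose = regressors[0](image_feats)
    for reg in regressors[1:]:
        # Each refinement stage sees the features plus the current estimate.
        pose = pose + reg(np.concatenate([image_feats, pose]))
    return pose

# Toy stages: a coarse estimate followed by two fixed corrections.
rng = np.random.default_rng(0)
feats = rng.standard_normal(8)
stages = [
    lambda f: np.zeros(4),        # coarse stage: all joints at the origin
    lambda x: 0.5 * np.ones(4),   # first refinement: shift by 0.5
    lambda x: 0.25 * np.ones(4),  # second refinement: shift by 0.25
]
print(cascade_pose_estimate(feats, stages))  # → [0.75 0.75 0.75 0.75]
```

In the paper the later stages crop around the current joint estimate so each DNN sees higher-resolution evidence; the toy above only keeps the additive-refinement structure.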
Generalized Kernel-based Visual Tracking
In this work we generalize plain mean shift (MS) trackers and attempt to
overcome two limitations of standard MS trackers.
It is well known that modeling and maintaining a representation of a target
object is an important component of a successful visual tracker.
However, little work has been done on building a robust template model for
kernel-based MS tracking. In contrast to building a template from a single
frame, we train a robust object representation model from a large amount of
data. Tracking is viewed as a binary classification problem, and a
discriminative classification rule is learned to distinguish between the object
and background. We adopt a support vector machine (SVM) for training. The
tracker is then implemented by maximizing the classification score. An
iterative optimization scheme very similar to MS is derived for this purpose.
Comment: 12 page
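Tracking by maximizing a classification score can be sketched as a local hill climb: move the window to the best-scoring position in a small neighborhood and repeat until it stops moving. This is a one-dimensional illustration under simplified assumptions; the score map stands in for the SVM decision values, and the update rule is a plain local argmax rather than the paper's MS-like derivation.

```python
import numpy as np

def track_by_score(score_map, start, radius=2, max_iter=20):
    """Iteratively move to the highest-scoring location within a small
    neighborhood, a mean-shift-style local ascent on the classifier
    score (illustrative; the paper derives a proper MS-like update)."""
    pos = start
    for _ in range(max_iter):
        lo, hi = max(0, pos - radius), min(len(score_map), pos + radius + 1)
        best = lo + int(np.argmax(score_map[lo:hi]))
        if best == pos:  # converged at a local maximum of the score
            return pos
        pos = best
    return pos

# Toy 1-D "SVM score" with a single peak at index 7.
scores = np.exp(-0.5 * (np.arange(15) - 7.0) ** 2)
print(track_by_score(scores, start=2))  # → 7
```

The same ascent-until-convergence structure carries over to 2-D image windows, where the score at each candidate location comes from evaluating the trained SVM on features extracted there.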
Efficient Object Localization Using Convolutional Networks
Recent state-of-the-art performance on human-body pose estimation has been
achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet
architectures include pooling and sub-sampling layers which reduce
computational requirements, introduce invariance and prevent over-training.
These benefits of pooling come at the cost of reduced localization accuracy. We
introduce a novel architecture which includes an efficient `position
refinement' model that is trained to estimate the joint offset location within
a small region of the image. This refinement model is jointly trained in
cascade with a state-of-the-art ConvNet model to achieve improved accuracy in
human joint location estimation. We show that the variance of our detector
approaches the variance of human annotations on the FLIC dataset and
outperforms all existing approaches on the MPII-human-pose dataset.
Comment: 8 pages with 1 page of citation
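The position-refinement idea can be sketched as a two-step decode: take the coarse argmax of a pooled joint heatmap, then add a sub-pixel offset predicted for that location. In this toy the offset field is given rather than learned, and all names and dimensions are hypothetical; the paper trains the refinement model jointly with the coarse ConvNet.

```python
import numpy as np

def refine_joint(heatmap, offset_field):
    """Coarse argmax over a joint heatmap, then add the (dy, dx) offset
    predicted for that cell (offsets here are supplied, not learned)."""
    coarse = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    dy, dx = offset_field[coarse]  # hypothetical per-cell (dy, dx) offsets
    return coarse[0] + dy, coarse[1] + dx

# Toy example: pooled heatmap peaks at (2, 3); refinement shifts it
# by (0.25, -0.5) to recover accuracy lost to pooling.
hm = np.zeros((5, 5)); hm[2, 3] = 1.0
off = np.zeros((5, 5, 2)); off[2, 3] = (0.25, -0.5)
print(refine_joint(hm, off))  # → (2.25, 2.5)
```

This illustrates why the refinement helps: pooling quantizes joint locations to the coarse grid, and the offset model restores precision within each cell.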