The Conditional Lucas & Kanade Algorithm
The Lucas & Kanade (LK) algorithm is the method of choice for efficient dense
image and object alignment. The approach is efficient as it attempts to model
the connection between appearance and geometric displacement through a linear
relationship that assumes independence across pixel coordinates. A drawback of
the approach, however, is its generative nature. Specifically, its performance
is tightly coupled with how well the linear model can synthesize appearance
from geometric displacement, even though the alignment task itself is
associated with the inverse problem. In this paper, we present a new approach,
referred to as the Conditional LK algorithm, which: (i) directly learns linear
models that predict geometric displacement as a function of appearance, and
(ii) employs a novel strategy for ensuring that the generative pixel
independence assumption can still be taken advantage of. We demonstrate that
our approach exhibits superior performance to classical generative forms of the
LK algorithm. Furthermore, we demonstrate its comparable performance to
state-of-the-art methods such as the Supervised Descent Method with
substantially fewer training examples, as well as the unique ability to "swap"
geometric warp functions without having to retrain from scratch. Finally, from
a theoretical perspective, our approach hints at possible redundancies that
exist in current state-of-the-art methods for alignment that could be leveraged
in vision systems of the future.
Comment: 17 pages, 11 figures
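The conditional direction the abstract describes can be sketched in a few lines: instead of fitting a generative model that synthesizes appearance from displacement, one regresses the displacement directly from appearance differences. Below is a minimal numpy sketch of such a conditionally learned linear update for a translation-only warp; the `render_shifted` and `extract_patch` helpers and all parameter choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def train_conditional_regressor(template, render_shifted, n_samples=500,
                                sigma=2.0, lam=1e-3, seed=0):
    """Learn a linear map R such that R @ (I - T) predicts the warp update.

    `render_shifted(template, dp)` is an assumed helper that returns the
    template shifted by dp = (dx, dy); it stands in for synthetically
    perturbing training images.
    """
    rng = np.random.default_rng(seed)
    d = template.size
    A = np.empty((n_samples, d))      # appearance differences (inputs)
    Y = np.empty((n_samples, 2))      # displacement updates (targets)
    for i in range(n_samples):
        dp = rng.normal(scale=sigma, size=2)
        A[i] = (render_shifted(template, dp) - template).ravel()
        Y[i] = -dp                    # the update that undoes the perturbation
    # Ridge-regularised least squares: displacement as a function of appearance
    R = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y).T
    return R                          # shape (2, n_pixels)

def align(extract_patch, template, R, n_iters=10):
    """Iterative alignment p <- p + R (I(p) - T), reusing the learned R.

    `extract_patch(p)` is an assumed helper that crops the image at warp p.
    """
    p = np.zeros(R.shape[0])
    for _ in range(n_iters):
        patch = extract_patch(p)
        p = p + R @ (patch - template).ravel()
    return p
```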
Deep-LK for Efficient Adaptive Object Tracking
In this paper we present a new approach for efficient regression-based object
tracking, which we refer to as Deep-LK. Our approach is closely related to the
Generic Object Tracking Using Regression Networks (GOTURN) framework of Held et
al. We make the following contributions. First, we demonstrate that there is a
theoretical relationship between siamese regression networks like GOTURN and
the classical Inverse-Compositional Lucas & Kanade (IC-LK) algorithm. Further,
we demonstrate that unlike GOTURN, IC-LK adapts its regressor to the appearance
of the currently tracked frame. We argue that GOTURN's poor performance on
unseen objects and/or viewpoints can be attributed to this missing property.
Second, we propose a novel framework for object tracking - which we refer to as
Deep-LK - that is inspired by the IC-LK framework. Finally, we show impressive
results demonstrating that Deep-LK substantially outperforms GOTURN.
Additionally, we demonstrate tracking performance comparable to current
state-of-the-art deep trackers whilst being an order of magnitude more
computationally efficient (i.e. running at 100 FPS).
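The theoretical link to IC-LK is easiest to see for a translation-only warp, where the IC-LK regressor is a closed-form linear function of the template's gradients and therefore changes whenever the tracked appearance changes. A minimal numpy sketch of that template-dependent regressor (illustrating the adaptivity, not the Deep-LK network itself) follows:

```python
import numpy as np

def ic_lk_regressor(template):
    """IC-LK regressor for a pure-translation warp.

    For translation the warp Jacobian is the identity, so the steepest-descent
    images are just the template gradients. The resulting linear regressor R
    depends on the template's appearance -- the adaptivity the abstract notes
    is missing from a fixed siamese regression network like GOTURN.
    """
    gy, gx = np.gradient(template.astype(float))     # image gradients
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)   # (n_pixels, 2)
    H = J.T @ J                                      # 2x2 Gauss-Newton Hessian
    return np.linalg.solve(H, J.T)                   # R, shape (2, n_pixels)

def ic_lk_step(R, template, warped_patch):
    """One update: a linear function of the error image. (Sign/composition
    conventions differ between IC and forward-additive LK; for translation
    they coincide up to sign.)"""
    err = (warped_patch.astype(float) - template.astype(float)).ravel()
    return R @ err
```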
Planar Object Tracking in the Wild: A Benchmark
Planar object tracking is an actively studied problem in vision-based robotic
applications. While several benchmarks have been constructed for evaluating
state-of-the-art algorithms, there is a lack of video sequences captured in the
wild rather than in constrained laboratory environments. In this paper, we
present a carefully designed planar object tracking benchmark containing 210
videos of 30 planar objects sampled in the natural environment. In particular,
for each object, we shoot seven videos involving various challenging factors,
namely scale change, rotation, perspective distortion, motion blur, occlusion,
out-of-view, and unconstrained. The ground truth is carefully annotated
semi-manually to ensure its quality. Moreover, eleven state-of-the-art
algorithms are evaluated on the benchmark using two evaluation metrics, with
detailed analysis provided for the evaluation results. We expect the proposed
benchmark to benefit future studies on planar object tracking.
Comment: Accepted by ICRA 201
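The abstract does not name the benchmark's two evaluation metrics. One metric commonly used for planar tracking is the alignment error between reference points mapped by the predicted and ground-truth homographies; the sketch below is therefore only an illustrative assumption of what such a metric looks like, not the benchmark's protocol.

```python
import numpy as np

def corner_alignment_error(H_pred, H_gt, corners):
    """Mean distance between object corners mapped by the predicted and
    ground-truth 3x3 homographies (a common planar-tracking error measure)."""
    def project(H, pts):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        out = pts_h @ H.T
        return out[:, :2] / out[:, 2:3]
    diffs = project(H_pred, corners) - project(H_gt, corners)
    return np.linalg.norm(diffs, axis=1).mean()

# Example: a 100x100 square tracked with a slightly-off homography
corners = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], float)
H_gt = np.eye(3)
H_pred = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]])
print(corner_alignment_error(H_pred, H_gt, corners))  # ~2.24 pixels
```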
Machine Analysis of Facial Expressions
No abstract
Unsupervised learning of human motion
An unsupervised learning algorithm that can obtain a probabilistic model of an object composed of a collection of parts (a moving human body in our examples) automatically from unlabeled training data is presented. The training data include both useful "foreground" features and features that arise from irrelevant background clutter; the correspondence between parts and detected features is unknown. The joint probability density function of the parts is represented by a mixture of decomposable triangulated graphs, which allow for fast detection. To learn the model structure as well as the model parameters, an EM-like algorithm is developed in which the labeling of the data (part assignments) is treated as hidden variables. The unsupervised learning technique is not limited to decomposable triangulated graphs. The efficiency and effectiveness of our algorithm are demonstrated by applying it to generate models of human motion automatically from unlabeled image sequences, and by testing the learned models on a variety of sequences.
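As an illustration of treating part assignments as hidden variables, the toy EM sketch below soft-assigns unlabeled 2-D features to isotropic Gaussian "parts" plus a uniform clutter component for background features. This is a simplified stand-in under assumed shapes and constants; the paper's actual model (a mixture of decomposable triangulated graphs) is considerably richer.

```python
import numpy as np

def em_parts(features, n_parts, n_iters=50, clutter_density=1e-4, seed=0):
    """Toy EM over hidden part assignments: features is an (N, 2) array of
    detected feature positions; returns part means, variances, and soft
    responsibilities (last column = clutter)."""
    rng = np.random.default_rng(seed)
    mu = features[rng.choice(len(features), n_parts, replace=False)].copy()
    var = np.full(n_parts, 10.0)
    for _ in range(n_iters):
        # E-step: responsibilities over parts and clutter (the hidden labels)
        d2 = ((features[:, None, :] - mu[None]) ** 2).sum(-1)        # (N, K)
        lik = np.exp(-d2 / (2 * var)) / (2 * np.pi * var)
        resp = np.hstack([lik, np.full((len(features), 1), clutter_density)])
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: refit part means and variances under the soft assignments
        w = resp[:, :n_parts]
        mu = (w.T @ features) / w.sum(axis=0)[:, None]
        d2 = ((features[:, None, :] - mu[None]) ** 2).sum(-1)
        var = ((w * d2).sum(axis=0) / (2 * w.sum(axis=0))).clip(1e-3)
    return mu, var, resp
```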
Learning Generative Models with Visual Attention
Attention has long been proposed by psychologists as important for
effectively dealing with the enormous amount of sensory stimulus arriving in
the neocortex. Inspired by visual attention models in computational
neuroscience and by the need for object-centric data in generative models, we
describe a generative learning framework that uses attentional mechanisms.
Attentional mechanisms can propagate signals from a region of interest in a scene
to an aligned canonical representation, where generative modeling takes place.
By ignoring background clutter, generative models can concentrate their
resources on the object of interest. Our model is a proper graphical model
where the 2D similarity transformation is part of the top-down process. A
ConvNet is employed to provide good initializations during posterior inference,
which is based on Hamiltonian Monte Carlo. After learning from images of faces, our
model can robustly attend to face regions of novel test subjects. More
importantly, our model can learn generative models of new faces from a novel
dataset of large images where the face locations are not known.
Comment: In the proceedings of Neural Information Processing Systems, 201
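The top-down 2D similarity transform maps canonical coordinates into the scene, which is what lets generative modeling operate in an aligned, clutter-free frame. A minimal numpy sketch of extracting such a canonical window (with nearest-neighbour sampling; the paper's mechanism is embedded in a full graphical model with HMC inference) might look like this:

```python
import numpy as np

def similarity_matrix(scale, theta, tx, ty):
    """2x3 matrix of a 2D similarity transform: canonical (x, y) -> image."""
    c, s = scale * np.cos(theta), scale * np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty]])

def extract_canonical(image, params, out_shape=(64, 64)):
    """Resample the attended region into an aligned canonical frame."""
    M = similarity_matrix(*params)
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # (3, H*W)
    u, v = M @ pts                     # image coords of each canonical pixel
    u = np.clip(np.round(u).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(v).astype(int), 0, image.shape[0] - 1)
    return image[v, u].reshape(out_shape)

# Example: attend to a rotated, scaled window of a random "scene"
scene = np.random.default_rng(0).random((240, 320))
window = extract_canonical(scene, (1.5, np.pi / 8, 120.0, 80.0))
```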
Towards Visual Ego-motion Learning in Robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the
past decade, often restricted to the type of camera optics or the underlying
motion manifold observed. We envision robots to be able to learn and perform
these tasks, in a minimally supervised setting, as they gain more experience.
To this end, we propose a fully trainable solution to visual ego-motion
estimation for varied camera optics. We propose a visual ego-motion learning
architecture that maps observed optical flow vectors to an ego-motion density
estimate via a Mixture Density Network (MDN). By modeling the architecture as a
Conditional Variational Autoencoder (C-VAE), our model is able to provide
introspective reasoning and prediction for ego-motion induced scene-flow.
Additionally, our proposed model is especially amenable to bootstrapped
ego-motion learning in robots where the supervision in ego-motion estimation
for a particular camera sensor can be obtained from standard navigation-based
sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through
experiments, we show the utility of our proposed approach in enabling the
concept of self-supervised learning for visual ego-motion estimation in
autonomous robots.
Comment: Conference paper; submitted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver CA; 8 pages, 8 figures, 2 tables
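The density estimate at the heart of such an architecture is a standard Mixture Density Network output: the network predicts mixture weights, means, and scales, and training minimizes the negative log-likelihood of the observed ego-motion. A minimal numpy sketch of that loss follows; the shapes and the log-parameterization are assumptions for illustration, not the paper's exact output head.

```python
import numpy as np

def mdn_nll(logits, means, log_sigmas, target):
    """Negative log-likelihood of a target ego-motion vector under a Gaussian
    mixture with diagonal covariance. Shapes: logits (K,), means (K, D),
    log_sigmas (K, D), target (D,); all would come from the network head."""
    log_pi = logits - np.logaddexp.reduce(logits)       # log mixture weights
    z = (target - means) / np.exp(log_sigmas)           # standardised errors
    # Per-component Gaussian log-density, summed over the D dimensions
    log_comp = -0.5 * (z ** 2 + 2 * log_sigmas + np.log(2 * np.pi)).sum(axis=1)
    return -np.logaddexp.reduce(log_pi + log_comp)      # -log sum_k pi_k N_k

# Example: a 2-component mixture over a 6-DoF ego-motion vector
K, D = 2, 6
rng = np.random.default_rng(0)
print(mdn_nll(rng.normal(size=K), rng.normal(size=(K, D)),
              np.zeros((K, D)), np.zeros(D)))
```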