Learning Active Basis Models by EM-Type Algorithms
The EM algorithm is a convenient tool for maximum likelihood model fitting when
the data are incomplete or when there are latent variables or hidden states. In
this review article we explain that the EM algorithm is a natural computational
scheme for learning image templates of object categories where the learning is
not fully supervised. We represent an image template by an active basis model,
which is a linear composition of a selected set of localized, elongated and
oriented wavelet elements that are allowed to slightly perturb their locations
and orientations to account for the deformations of object shapes. The model
can be easily learned when the objects in the training images are of the same
pose, and appear at the same location and scale. This is often called
supervised learning. In the situation where the objects may appear at different
unknown locations, orientations and scales in the training images, we have to
incorporate the unknown locations, orientations and scales as latent variables
into the image generation process, and learn the template by EM-type
algorithms. The E-step imputes the unknown locations, orientations and scales
based on the currently learned template. This step can be considered
self-supervision, which involves using the current template to recognize the
objects in the training images. The M-step then relearns the template based on
the imputed locations, orientations and scales, and this is essentially the
same as supervised learning. So the EM learning process iterates between
recognition and supervised learning. We illustrate this scheme by several
experiments.

Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the
Institute of Mathematical Statistics (http://www.imstat.org);
DOI: http://dx.doi.org/10.1214/09-STS281
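The iteration described above, recognition in the E-step and supervised relearning in the M-step, can be sketched with a toy example. This is not the paper's active basis model: here each training "image" is a 1-D signal containing a fixed pattern at an unknown shift, and all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
pattern = np.array([0.0, 1.0, 2.0, 1.0, 0.0])

# Synthetic training signals: the pattern appears at a latent location.
signals = []
for _ in range(20):
    x = rng.normal(0.0, 0.1, size=12)
    s = rng.integers(0, 12 - 5)          # latent, unknown to the learner
    x[s:s + 5] += pattern
    signals.append(x)

template = np.ones(5)                     # crude initialization
for _ in range(10):
    # E-step: impute each latent location with the current template
    # (recognition: the window that best correlates with the template).
    locs = [max(range(12 - 5 + 1),
                key=lambda s: float(np.dot(x[s:s + 5], template)))
            for x in signals]
    # M-step: relearn the template from the imputed alignments
    # (this is ordinary supervised learning on the aligned windows).
    template = np.mean([x[s:s + 5] for x, s in zip(signals, locs)],
                       axis=0)

print(np.round(template, 1))
```

After a few iterations the template recovers the underlying pattern even though no location labels were given, mirroring the recognition/relearning loop of the abstract.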
Likelihood Ratio-Based Detection of Facial Features
One of the first steps in face recognition, after image acquisition, is registration. A simple but effective technique of registration is to align facial features, such as eyes, nose and mouth, as well as possible to a standard face. This requires an accurate automatic estimate of the locations of those features. This contribution proposes a method for estimating the locations of facial features based on likelihood ratio-based detection. A post-processing step that evaluates the topology of the facial features is added to reduce the number of false detections. Although the individual detectors only have a reasonable performance (equal error rates range from 1.0% for the nose to 3.3% for the eyes), the positions of the facial features are estimated correctly in 95% of the face images.
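The core decision rule in likelihood ratio-based detection can be illustrated in a few lines. The Gaussian class models and parameter values below are illustrative stand-ins, not the paper's trained facial-feature densities: a window is declared a detection when the ratio of the feature-class likelihood to the background likelihood exceeds a threshold.

```python
import math

def gauss_pdf(x, mu, sigma):
    # Univariate Gaussian density, used here as a toy class model.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def is_detection(score, mu_f=1.0, sigma_f=0.5, mu_b=0.0, sigma_b=0.5, thresh=1.0):
    # Likelihood ratio test: p(score | feature) / p(score | background).
    lr = gauss_pdf(score, mu_f, sigma_f) / gauss_pdf(score, mu_b, sigma_b)
    return lr > thresh

print(is_detection(0.9))   # near the feature mean
print(is_detection(-0.2))  # near the background mean
```

In practice the threshold trades false detections against misses; sweeping it traces out the error curve from which figures such as the equal error rates above are read off.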
Learning Descriptors for Object Recognition and 3D Pose Estimation
Detecting poorly textured objects and estimating their 3D pose reliably is
still a very challenging problem. We introduce a simple but powerful approach
to computing descriptors for object views that efficiently capture both the
object identity and 3D pose. By contrast with previous manifold-based
approaches, we can rely on the Euclidean distance to evaluate the similarity
between descriptors, and therefore use scalable Nearest Neighbor search methods
to efficiently handle a large number of objects under a large range of poses.
To achieve this, we train a Convolutional Neural Network to compute these
descriptors by enforcing simple similarity and dissimilarity constraints
between the descriptors. We show that our constraints nicely untangle the
images from different objects and different views into clusters that are not
only well-separated but also structured as the corresponding sets of poses: The
Euclidean distance between descriptors is large when the descriptors are from
different objects, and directly related to the distance between the poses when
the descriptors are from the same object. These important properties allow us
to outperform state-of-the-art object view representations on challenging RGB
and RGB-D data.

Comment: CVPR 201
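The similarity and dissimilarity constraints described above can be sketched as a triplet-style objective. This numpy toy is not the paper's CNN training code: for a triplet of descriptors (anchor; similar = same object, nearby pose; dissimilar = different object or distant pose), the similar pair is pushed closer than the dissimilar one by a margin.

```python
import numpy as np

def triplet_loss(anchor, similar, dissimilar, margin=1.0):
    # Squared Euclidean distances between descriptor vectors.
    d_pos = np.sum((anchor - similar) ** 2)
    d_neg = np.sum((anchor - dissimilar) ** 2)
    # Hinge: zero once the dissimilar pair is farther by the margin.
    return max(0.0, float(d_pos - d_neg + margin))

# Illustrative 2-D descriptors.
a = np.array([0.0, 0.0])
s = np.array([0.1, 0.0])      # same object, close pose
d = np.array([2.0, 2.0])      # different object
print(triplet_loss(a, s, d))
```

Minimizing such a loss over many triplets is what lets plain Euclidean distance, and hence scalable nearest-neighbor search, serve as the similarity measure between descriptors.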
A Survey on Joint Object Detection and Pose Estimation using Monocular Vision
In this survey we present a complete landscape of joint object detection and
pose estimation methods that use monocular vision. Descriptions of traditional
approaches that involve descriptors or models and various estimation methods
have been provided. These descriptors or models include chordiograms,
shape-aware deformable parts model, bag of boundaries, distance transform
templates, natural 3D markers and facet features whereas the estimation methods
include iterative clustering estimation, probabilistic networks and iterative
genetic matching. Hybrid approaches that use handcrafted feature extraction
followed by estimation by deep learning methods have been outlined. We have
investigated and compared, wherever possible, pure deep learning based
approaches (single stage and multi stage) for this problem. Comprehensive
details of the various accuracy measures and metrics have been illustrated. For
the purpose of giving a clear overview, the characteristics of relevant
datasets are discussed. The trends that prevailed from the infancy of this
problem until now have also been highlighted.

Comment: Accepted at the International Joint Conference on Computer Vision and
Pattern Recognition (CCVPR) 201
Deformable Prototypes for Encoding Shape Categories in Image Databases
We describe a method for shape-based image database search that uses deformable prototypes to represent categories. Rather than directly comparing a candidate shape with all shape entries in the database, shapes are compared in terms of the types of nonrigid deformations (differences) that relate them to a small subset of representative prototypes. To solve the shape correspondence and alignment problem, we employ the technique of modal matching, an information-preserving shape decomposition for matching, describing, and comparing shapes despite sensor variations and nonrigid deformations. In modal matching, shape is decomposed into an ordered basis of orthogonal principal components. We demonstrate the utility of this approach for shape comparison in 2-D image databases.

Office of Naval Research (Young Investigator Award N00014-06-1-0661)
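The idea of comparing shapes through coefficients in an ordered orthogonal basis, rather than through raw point positions, can be illustrated with a simple stand-in. The basis below comes from a QR factorization, not the finite-element eigenmodes of modal matching, and the 4-point shapes are hypothetical.

```python
import numpy as np

def shape_coefficients(points, basis):
    # Project the centered, flattened shape onto orthonormal basis vectors.
    centered = points - points.mean(axis=0)
    return basis.T @ centered.flatten()

# Hypothetical 4-point 2-D shapes (8-D vectors once flattened).
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
squished = np.array([[0., 0.], [1., 0.], [1., .8], [0., .8]])

# Orthonormal basis via QR of a random matrix: a toy stand-in for the
# ordered modal basis used in the paper.
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.normal(size=(8, 8)))

c1 = shape_coefficients(square, basis)
c2 = shape_coefficients(squished, basis)
print(float(np.linalg.norm(c1 - c2)))  # small for similar shapes
```

Because the basis is orthonormal, distances between coefficient vectors equal distances between the centered shapes; the point of a modal (deformation-ordered) basis is that truncating or weighting the low-order components then compares shapes by their dominant deformations only.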