Hallucinating optimal high-dimensional subspaces
Linear subspace representations of appearance variation are pervasive in
computer vision. This paper addresses the problem of robustly matching such
subspaces (computing the similarity between them) when they are used to
describe the scope of variation within sets of images at different, possibly
greatly different, scales. A naive solution, projecting the low-scale subspace
into the high-scale image space, is described first and subsequently shown to
be inadequate, especially at large scale discrepancies. A successful approach is
proposed instead. It consists of (i) an interpolated projection of the
low-scale subspace into the high-scale space, which is followed by (ii) a
rotation of this initial estimate within the bounds of the imposed
``downsampling constraint''. The optimal rotation is found in closed form as
the one which best aligns the high-scale reconstruction of the low-scale
subspace with the reference it is compared to. The method is evaluated on the problem of
matching sets of (i) face appearances under varying illumination and (ii)
object appearances under varying viewpoint, using two large data sets. In
comparison to the naive matching, the proposed algorithm is shown to greatly
increase the separation of between-class and within-class similarities, as well
as produce far more meaningful modes of common appearance on which the match
score is based.
Comment: Pattern Recognition, 201
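As a concrete point of reference for subspace matching, a standard similarity measure between two linear subspaces is given by the cosines of their principal angles, computed via an SVD. The sketch below (NumPy) illustrates this generic baseline only; it is not the paper's scale-aware interpolation-and-rotation method:

```python
import numpy as np

def principal_angle_cosines(A, B):
    """Cosines of the principal angles between the subspaces spanned
    by the columns of A and B (a standard subspace similarity; the
    paper's method additionally handles scale discrepancies)."""
    Qa, _ = np.linalg.qr(A)   # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)   # orthonormal basis for span(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Two 2-D subspaces of R^5 sharing exactly one direction:
A = np.eye(5)[:, [0, 1]]
B = np.eye(5)[:, [0, 2]]
print(principal_angle_cosines(A, B))  # -> [1. 0.]: one shared axis, one orthogonal pair
```

Cosines near 1 indicate nearly aligned directions; the match score can then be any monotone function of these cosines.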
Robust Principal Component Analysis?
This paper is about a curious phenomenon. Suppose we have a data matrix,
which is the superposition of a low-rank component and a sparse component. Can
we recover each component individually? We prove that under some suitable
assumptions, it is possible to recover both the low-rank and the sparse
components exactly by solving a very convenient convex program called Principal
Component Pursuit: among all feasible decompositions, simply minimize a
weighted combination of the nuclear norm and the L1 norm. This suggests the
possibility of a principled approach to robust principal component analysis
since our methodology and results assert that one can recover the principal
components of a data matrix even though a positive fraction of its entries are
arbitrarily corrupted. This extends to the situation where a fraction of the
entries are missing as well. We discuss an algorithm for solving this
optimization problem, and present applications in the area of video
surveillance, where our methodology allows for the detection of objects in a
cluttered background, and in the area of face recognition, where it offers a
principled way of removing shadows and specularities in images of faces.
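Principal Component Pursuit can be sketched with a simple alternating scheme of singular value thresholding (for the low-rank term) and entrywise soft thresholding (for the sparse term), in the style of an augmented Lagrangian iteration. The parameter choices below are common defaults, not necessarily those of the solver discussed in the abstract:

```python
import numpy as np

def shrink(X, tau):
    # Entrywise soft-thresholding (proximal operator of the L1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding (proximal operator of the nuclear norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, n_iter=300):
    """Alternating scheme for
        min ||L||_* + lam * ||S||_1   s.t.   L + S = M,
    with dual variable Y (augmented Lagrangian style sketch)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))    # weighting suggested by the RPCA theory
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
    return L, S

# Rank-1 matrix corrupted by a few large sparse errors:
rng = np.random.default_rng(0)
L0 = np.outer(rng.standard_normal(30), rng.standard_normal(30))
S0 = np.zeros((30, 30)); S0[2, 7] = 10.0; S0[15, 3] = -8.0
L, S = pcp(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small relative error
```

On well-conditioned inputs of this kind, the iteration separates the low-rank and sparse components to good accuracy, mirroring the exact-recovery claim of the abstract.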
Toward Guaranteed Illumination Models for Non-Convex Objects
Illumination variation remains a central challenge in object detection and
recognition. Existing analyses of illumination variation typically pertain to
convex, Lambertian objects, and guarantee quality of approximation in an
average case sense. We show that it is possible to build V(vertex)-description
convex cone models with worst-case performance guarantees, for non-convex
Lambertian objects. Namely, a natural verification test based on the angle to
the constructed cone guarantees to accept any image which is sufficiently
well-approximated by an image of the object under some admissible lighting
condition, and guarantees to reject any image that does not have a sufficiently
good approximation. The cone models are generated by sampling point
illuminations with sufficient density, which follows from a new perturbation
bound for point images in the Lambertian model. As the number of point images
required for guaranteed verification may be large, we introduce a new
formulation for cone preserving dimensionality reduction, which leverages tools
from sparse and low-rank decomposition to reduce the complexity, while
controlling the approximation error with respect to the original cone.