7,730 research outputs found
CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration
In this paper, we propose a new framework to remove parts of the systematic
errors affecting popular restoration algorithms, with a special focus on image
processing tasks. Generalizing ideas that emerged for $\ell_1$ regularization,
we develop an approach re-fitting the results of standard methods towards the
input data. Total variation regularizations and non-local means are special
cases of interest. We identify important covariant information that should be
preserved by the re-fitting method, and emphasize the importance of preserving
the Jacobian (w.r.t. the observed signal) of the original estimator. Then, we
provide an approach that has a "twicing" flavor and allows re-fitting the
restored signal by adding back a local affine transformation of the residual
term. We illustrate the benefits of our method on numerical simulations for
image restoration tasks.
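The "twicing" correction can be made concrete with a small sketch. The following is a minimal, hedged reading of the idea, assuming a black-box estimator `f` whose Jacobian-vector product is approximated by a finite difference; the step size `rho` and the finite-difference scheme are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def refit(f, y, eps=1e-6):
    """Twicing-style re-fitting sketch: add back to the estimate f(y)
    a correction built from the Jacobian of f (w.r.t. the observation)
    acting on the residual y - f(y).  Illustrative only."""
    x = f(y)
    r = y - x                        # residual left by the estimator
    Jr = (f(y + eps * r) - x) / eps  # finite-difference Jacobian-vector product
    denom = float(np.dot(Jr.ravel(), Jr.ravel()))
    # least-squares step size along the Jacobian correction
    rho = float(np.dot(Jr.ravel(), r.ravel())) / denom if denom > 0 else 0.0
    return x + rho * Jr

# usage (illustrative): re-fit a soft-thresholding denoiser
f = lambda y: np.sign(y) * np.maximum(np.abs(y) - 0.5, 0.0)
y = np.array([0.2, 1.4, -2.1, 0.05])
print(refit(f, y))
```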
Sparse Subspace Clustering: Algorithm, Theory, and Applications
In many real-world problems, we are dealing with collections of
high-dimensional data, such as images, videos, text and web documents, DNA
microarray data, and more. Often, high-dimensional data lie close to
low-dimensional structures corresponding to several classes or categories the
data belongs to. In this paper, we propose and study an algorithm, called
Sparse Subspace Clustering (SSC), to cluster data points that lie in a union of
low-dimensional subspaces. The key idea is that, among infinitely many possible
representations of a data point in terms of other points, a sparse
representation corresponds to selecting a few points from the same subspace.
This motivates solving a sparse optimization program whose solution is used in
a spectral clustering framework to infer the clustering of data into subspaces.
Since solving the sparse optimization program is in general NP-hard, we
consider a convex relaxation and show that, under appropriate conditions on the
arrangement of subspaces and the distribution of data, the proposed
minimization program succeeds in recovering the desired sparse representations.
The underlying optimization program can be solved efficiently, and the
algorithm handles data points near the intersections of subspaces. Another key
advantage of the proposed
algorithm with respect to the state of the art is that it can deal with data
nuisances, such as noise, sparse outlying entries, and missing entries,
directly by incorporating the model of the data into the sparse optimization
program. We demonstrate the effectiveness of the proposed algorithm through
experiments on synthetic data as well as the two real-world problems of motion
segmentation and face clustering.
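A compact sketch of the SSC pipeline is given below, assuming the columns of `Y` are the data points. The Lasso-based sparse coding (standing in for the convex relaxation in the noisy case) and the spectral step use standard scikit-learn components; `alpha` is an illustrative regularization weight, not the paper's tuned setting.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(Y, n_clusters, alpha=0.01):
    """Minimal SSC sketch: express each point as a sparse combination
    of the other points, then spectrally cluster the affinity graph."""
    Y = Y / (np.linalg.norm(Y, axis=0, keepdims=True) + 1e-12)  # normalize columns
    n = Y.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]   # exclude the point itself
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(Y[:, idx], Y[:, i])           # y_i ~ Y_{-i} c with c sparse
        C[idx, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                 # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)

# usage (illustrative): points drawn from two random 2-D subspaces of R^10
rng = np.random.default_rng(0)
Y = np.hstack([rng.standard_normal((10, 2)) @ rng.standard_normal((2, 30))
               for _ in range(2)])
print(sparse_subspace_clustering(Y, n_clusters=2))
```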
Robust Structured Low-Rank Approximation on the Grassmannian
Over the past years, Robust PCA has been established as a standard tool for
reliable low-rank approximation of matrices in the presence of outliers.
Recently, the Robust PCA approach via nuclear norm minimization has been
extended to matrices with linear structures which appear in applications such
as system identification and data series analysis. At the same time it has been
shown how to control the rank of a structured approximation via matrix
factorization approaches. These methods, however, either lack robustness
against outliers or are static in nature, requiring repeated batch processing.
We present a Robust Structured Low-Rank Approximation method on the
Grassmannian that, on the one hand, allows for fast re-initialization in an
online setting thanks to its manifold-based subspace identification and, on the
other hand, is robust against outliers thanks to a smooth approximation of the
$\ell_p$-norm cost function. The method is evaluated on online time series
forecasting tasks on simulated and real-world data.
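The robustness mechanism can be illustrated in isolation. Below is a minimal sketch of a smoothed $\ell_p$ cost of the kind the abstract alludes to, $\sum_i (x_i^2 + \mu)^{p/2}$; the exact smoothing used in the paper may differ, and `p` and `mu` are illustrative defaults. Because the surrogate is differentiable everywhere, it can be minimized with gradient methods on the Grassmannian.

```python
import numpy as np

def smoothed_lp(x, p=0.5, mu=1e-4):
    """Smooth surrogate of the (nonconvex) l_p quasi-norm cost:
    sum_i (x_i^2 + mu)^(p/2).  Differentiable, unlike the raw l_p cost."""
    return np.sum((x**2 + mu) ** (p / 2.0))

def smoothed_lp_grad(x, p=0.5, mu=1e-4):
    # d/dx_i (x_i^2 + mu)^(p/2) = p * x_i * (x_i^2 + mu)^(p/2 - 1)
    return p * x * (x**2 + mu) ** (p / 2.0 - 1.0)

# usage (illustrative): large residuals are penalized sub-linearly,
# which is what makes the cost robust against outliers
x = np.array([0.0, 0.1, -3.0])
print(smoothed_lp(x), smoothed_lp_grad(x))
```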
Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting
In this paper we establish links between, and new results for, three problems
that are not usually considered together. The first is a matrix decomposition
problem that arises in areas such as statistical modeling and signal
processing: given a matrix $X$ formed as the sum of an unknown diagonal matrix
and an unknown low rank positive semidefinite matrix, decompose $X$ into these
constituents. The second problem we consider is to determine the facial
structure of the set of correlation matrices, a convex set also known as the
elliptope. This convex body, and particularly its facial structure, plays a
role in applications from combinatorial optimization to mathematical finance.
The third problem is a basic geometric question: given points
$v_1, v_2, \ldots, v_n \in \mathbb{R}^k$ (where $n > k$) determine whether there is a centered
ellipsoid passing \emph{exactly} through all of the points.
We show that in a precise sense these three problems are equivalent.
Furthermore we establish a simple sufficient condition on a subspace $U$ that
ensures any positive semidefinite matrix $L$ with column space $U$ can be
recovered from $D+L$ for any diagonal matrix $D$ using a convex
optimization-based heuristic known as minimum trace factor analysis. This
result leads to a new understanding of the structure of rank-deficient
correlation matrices and a simple condition on a set of points that ensures
there is a centered ellipsoid passing through them.
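Both the decomposition heuristic and the geometric question admit short convex-programming sketches. The following, using cvxpy, gives a hedged reading of minimum trace factor analysis and of centered-ellipsoid fitting as described in the abstract; the function and variable names are illustrative, and neither snippet claims to reproduce the paper's precise formulations.

```python
import cvxpy as cp
import numpy as np

def minimum_trace_factor_analysis(X):
    """Recover the low-rank part L of X = D + L by minimizing tr(L)
    subject to L being PSD and X - L being diagonal (a hedged reading
    of the convex heuristic named in the abstract)."""
    n = X.shape[0]
    L = cp.Variable((n, n), PSD=True)
    # force the off-diagonal entries of X - L to vanish
    constraints = [X - L == cp.diag(cp.diag(X - L))]
    cp.Problem(cp.Minimize(cp.trace(L)), constraints).solve()
    return L.value

def fit_centered_ellipsoid(V):
    """Feasibility version of ellipsoid fitting: find a PSD matrix M
    with v^T M v = 1 for every row v of V.  A singular (degenerate) M
    would need an extra positive-definiteness check."""
    k = V.shape[1]
    M = cp.Variable((k, k), PSD=True)
    cons = [cp.quad_form(v, M) == 1 for v in V]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return M.value if prob.status == cp.OPTIMAL else None

# usage (illustrative): a rank-1 PSD part plus a random diagonal
rng = np.random.default_rng(1)
u = rng.standard_normal((5, 1))
X = np.diag(rng.uniform(0.5, 1.5, 5)) + u @ u.T
L_hat = minimum_trace_factor_analysis(X)
print(np.linalg.matrix_rank(L_hat, tol=1e-6))   # ideally 1
```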