Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
We present a natural generalization of the recent low rank + sparse matrix
decomposition and consider the decomposition of matrices into components of
multiple scales. Such a decomposition is well motivated in practice, as data
matrices often exhibit local correlations at multiple scales. Concretely, we
propose a multi-scale low rank modeling that represents a data matrix as a sum
of block-wise low rank matrices with increasing scales of block sizes. We then
consider the inverse problem of decomposing the data matrix into its
multi-scale low rank components and approach the problem via a convex
formulation. Theoretically, we show that under various incoherence conditions,
the convex program recovers the multi-scale low rank components either
exactly or approximately. Practically, we provide guidance on selecting the
regularization parameters and incorporate cycle spinning to reduce blocking
artifacts. Experimentally, we show that the multi-scale low rank decomposition
provides a more intuitive decomposition than conventional low rank methods and
demonstrate its effectiveness in four applications, including illumination
normalization for face images, motion separation for surveillance videos,
multi-scale modeling of dynamic contrast-enhanced magnetic resonance
imaging, and collaborative filtering exploiting age information.
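The building block of this model can be illustrated concretely. The abstract does not give the algorithm, so the following is only a minimal sketch under standard assumptions: each scale is represented by a block-wise nuclear norm, whose proximal operator is singular-value soft-thresholding applied block by block, and the components are found by a simple parallel proximal-gradient loop. The function names (`block_svt`, `multiscale_decompose`) and the specific scales, thresholds, and step size are hypothetical, not the authors' implementation.

```python
import numpy as np


def block_svt(X, block_size, tau):
    """Proximal operator of the block-wise nuclear norm:
    soft-threshold the singular values of each block of X."""
    m, n = X.shape
    b = block_size
    out = np.zeros_like(X)
    for i in range(0, m, b):
        for j in range(0, n, b):
            blk = X[i:i + b, j:j + b]
            U, s, Vt = np.linalg.svd(blk, full_matrices=False)
            s = np.maximum(s - tau, 0.0)  # shrink singular values toward zero
            out[i:i + b, j:j + b] = (U * s) @ Vt
    return out


def multiscale_decompose(Y, scales, taus, n_iter=50, step=0.5):
    """Hypothetical proximal-gradient sketch: split Y into components,
    each block-wise low rank at its own block size, summing back to ~Y."""
    Xs = [np.zeros_like(Y) for _ in scales]
    for _ in range(n_iter):
        resid = Y - sum(Xs)  # shared gradient of the data-fidelity term
        Xs = [block_svt(X + step * resid, b, step * tau)
              for X, b, tau in zip(Xs, scales, taus)]
    return Xs
```

For example, `multiscale_decompose(Y, scales=[2, 8], taus=[0.1, 0.1])` on an 8x8 matrix splits it into a component that is low rank on 2x2 blocks and one that is low rank globally.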
Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices
Inspired by several recent developments in regularization theory,
optimization, and signal processing, we present and analyze a numerical
approach to multi-penalty regularization in spaces of sparsely represented
functions. The sparsity prior is motivated by the geometrical/structured
features commonly expected of high-dimensional data, which may not be
well represented in the framework of typically more isotropic Hilbert spaces.
In this paper, we are particularly interested in regularizers which are able to
correctly model and separate the multiple components of additively mixed
signals. This situation is rather common as pure signals may be corrupted by
additive noise. To this end, we consider a regularization functional composed
of a data-fidelity term, in which signal and noise are additively mixed, a
non-smooth, non-convex sparsity-promoting term, and a penalty term to model
the noise. We propose and analyze the convergence of an iterative alternating
algorithm based on simple iterative thresholding steps to perform the
minimization of the functional. By means of this algorithm, we explore the
effect of choosing different regularization parameters and penalization norms
in terms of the quality of recovering the pure signal and separating it from
additive noise. For a given fixed noise level, numerical experiments confirm a
significant improvement in performance compared to standard one-parameter
regularization methods. By using high-dimensional data analysis methods such as
Principal Component Analysis, we are able to show the correct geometrical
clustering of regularized solutions around the expected solution. Finally,
for the compressive sensing problems considered in our experiments, we provide
a guideline for choosing regularization norms and parameters.
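The alternating structure described above can be sketched in a toy setting. The abstract does not specify the functional exactly, so this example makes simplifying assumptions: the operator is the identity, the sparsity penalty is the convex l1 norm rather than the paper's non-convex term, and the noise penalty is a squared l2 norm. Under those assumptions, each alternating step has a closed form: a soft-thresholding step for the sparse component u and a rescaling step for the noise component v. All names and parameter values here are illustrative.

```python
import numpy as np


def soft_threshold(x, t):
    """Componentwise soft-thresholding, the proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def alternating_thresholding(y, alpha, beta, n_iter=100):
    """Toy alternating minimization of
        ||u + v - y||^2 + alpha*||u||_1 + beta*||v||_2^2,
    where u models the sparse signal and v the additive noise.
    Each half-step minimizes the functional exactly in one variable."""
    u = np.zeros_like(y)
    v = np.zeros_like(y)
    for _ in range(n_iter):
        u = soft_threshold(y - v, alpha / 2.0)  # exact minimizer in u
        v = (y - u) / (1.0 + beta)              # exact minimizer in v
    return u, v
```

Sweeping `alpha` and `beta` over a grid mimics the multi-parameter exploration described in the abstract: large `alpha` suppresses the signal estimate entirely, while the `beta` term controls how much of the residual is absorbed as noise.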