Blind Compressed Sensing Over a Structured Union of Subspaces
This paper addresses the problem of simultaneous signal recovery and
dictionary learning based on compressive measurements. Multiple signals are
analyzed jointly, with multiple sensing matrices, under the assumption that the
unknown signals come from a union of a small number of disjoint subspaces. This
problem is important, for instance, in image inpainting applications, in which
the multiple signals are constituted by (incomplete) image patches taken from
the overall image. This work extends standard dictionary learning and
block-sparse dictionary optimization by considering compressive measurements
(e.g., incomplete data). Previous work on blind compressed sensing is also
generalized by using multiple sensing matrices and relaxing some of the
restrictions on the learned dictionary. Drawing on results developed in the
context of matrix completion, it is proven that both the dictionary and signals
can be recovered with high probability from compressed measurements. The
solution is unique up to block permutations and invertible linear
transformations of the dictionary atoms. The recovery is contingent on the
number of measurements per signal and the number of signals being sufficiently
large; bounds are derived for these quantities. In addition, this paper
presents a computationally practical algorithm that performs dictionary
learning and signal recovery, and establishes conditions for its convergence to
a local optimum. Experimental results for image inpainting demonstrate the
capabilities of the method.
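The block-sparse recovery step underlying such approaches can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it assumes the dictionary is already known and only performs the per-signal subspace selection and least-squares recovery from compressive measurements; the blind setting alternates this step with a dictionary update. All dimensions and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each signal lies in one of K disjoint subspaces
# (blocks of a dictionary D) and is observed through its own sensing matrix.
n, m, K, bs, N = 20, 12, 3, 4, 60   # signal dim, measurements, blocks, block size, #signals
D_true, _ = np.linalg.qr(rng.standard_normal((n, K * bs)))  # orthonormal blocks

signals, Phis, ys, labels = [], [], [], []
for _ in range(N):
    k = int(rng.integers(K))                         # subspace membership
    x = D_true[:, k * bs:(k + 1) * bs] @ rng.standard_normal(bs)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # per-signal sensing matrix
    signals.append(x); Phis.append(Phi); ys.append(Phi @ x); labels.append(k)

def recover(Phi, y, D, K, bs):
    """Block-sparse coding: pick the block with the smallest compressed residual."""
    best_k, best_res, best_x = None, np.inf, None
    for k in range(K):
        A = Phi @ D[:, k * bs:(k + 1) * bs]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        res = np.linalg.norm(y - A @ coef)
        if res < best_res:
            best_k, best_res, best_x = k, res, D[:, k * bs:(k + 1) * bs] @ coef
    return best_k, best_x

# With the true dictionary and noiseless data, block selection and
# signal recovery both succeed from m < n measurements per signal.
correct, err = 0, 0.0
for x, Phi, y, k in zip(signals, Phis, ys, labels):
    khat, xhat = recover(Phi, y, D_true, K, bs)
    correct += (khat == k)
    err = max(err, np.linalg.norm(xhat - x))
print(correct == N, err < 1e-6)
```

In the blind setting treated by the paper, the dictionary used above is itself an unknown, which is why the number of signals and of measurements per signal must both be large enough for identifiability.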
Finding a low-rank basis in a matrix subspace
For a given matrix subspace, how can we find a basis that consists of
low-rank matrices? This is a generalization of the sparse vector problem. It
turns out that when the subspace is spanned by rank-1 matrices, such a basis
can be obtained via the tensor CP decomposition. For the higher-rank case, the
situation is not as straightforward. In this work we present an algorithm based
on a greedy process applicable to higher rank problems. Our algorithm first
estimates the minimum rank by applying soft singular value thresholding to a
nuclear norm relaxation, and then computes a matrix with that rank using the
method of alternating projections. We provide local convergence results, and
compare our algorithm with several alternative approaches. Applications include
data compression beyond the classical truncated SVD, computing accurate
eigenvectors of a near-multiple eigenvalue, image separation and graph
Laplacian eigenproblems.
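The two-stage idea of the abstract can be illustrated on a toy problem. The sketch below is an assumed setup, not the paper's experiments: a subspace spanned by two random rank-1 matrices, on which soft singular value thresholding estimates the minimum rank and alternating projections then refine a generic element toward a rank-1 basis element.

```python
import numpy as np

rng = np.random.default_rng(1)

def svt(X, tau):
    """Soft singular value thresholding: prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def proj_rank(X, r):
    """Projection onto matrices of rank <= r (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0
    return U @ np.diag(s) @ Vt

# Toy subspace spanned by two random rank-1 matrices; a generic element of
# the span has rank 2, and the rank-1 basis directions are what we seek.
B1 = np.outer(rng.standard_normal(6), rng.standard_normal(6))
B2 = np.outer(rng.standard_normal(6), rng.standard_normal(6))

def proj_span(X):
    """Orthogonal projection onto span{B1, B2} via the Gram system."""
    G = np.array([[np.vdot(B1, B1), np.vdot(B1, B2)],
                  [np.vdot(B2, B1), np.vdot(B2, B2)]])
    c = np.linalg.solve(G, [np.vdot(B1, X), np.vdot(B2, X)])
    return c[0] * B1 + c[1] * B2

# Stage 1: SVT on a nearly rank-1 element suppresses the small singular
# values, giving a cheap estimate of the minimum rank in the subspace.
X = B1 + 0.05 * B2
r_est = np.linalg.matrix_rank(svt(X, 0.5))

# Stage 2: alternating projections between the rank-r_est set and the span.
for _ in range(500):
    X = proj_span(proj_rank(X, r_est))

s = np.linalg.svd(X, compute_uv=False)
print(r_est, s[1] / s[0] < 1e-3)   # numerically rank-1 limit point
```

As in the paper's local convergence analysis, the alternating projections are only guaranteed to converge near a transversal intersection; the toy start point above is deliberately close to a rank-1 element.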
Solving systems of phaseless equations via Kaczmarz methods: A proof of concept study
We study the Kaczmarz methods for solving systems of quadratic equations,
i.e., the generalized phase retrieval problem. The methods extend the Kaczmarz
methods for solving systems of linear equations by integrating a phase
selection heuristic in each iteration, while retaining the same overall
per-iteration computational complexity. Extensive empirical performance comparisons establish
the computational advantages of the Kaczmarz methods over other
state-of-the-art phase retrieval algorithms both in terms of the number of
measurements needed for successful recovery and in terms of computation time.
A preliminary convergence analysis is presented for the randomized Kaczmarz
methods.
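The basic iteration can be sketched as follows. This is an illustration of the idea rather than the paper's exact variant: the missing sign of each magnitude-only measurement is guessed from the current iterate (the phase-selection heuristic), after which the classical Kaczmarz row update is applied. The spectral-type initialization is a standard device from the phase retrieval literature, used here to obtain a reliable starting point.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 80
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.abs(A @ x_true)                 # magnitude-only (phaseless) measurements

# Spectral-type initialization: leading eigenvector of a weighted covariance.
Y = (A * (b ** 2)[:, None]).T @ A / m
_, V = np.linalg.eigh(Y)
x = V[:, -1] * np.sqrt(np.mean(b ** 2))   # rescale to the measured energy

# Randomized Kaczmarz with a phase-selection heuristic.
for _ in range(5000):
    i = int(rng.integers(m))
    ai = A[i]
    s = np.sign(ai @ x)
    s = s if s != 0 else 1.0           # resolve the degenerate zero case
    x += (s * b[i] - ai @ x) / (ai @ ai) * ai   # classical Kaczmarz row update

# Recovery is only possible up to a global sign in the real-valued setting.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(err < 1e-3)
```

Note that each iteration costs one inner product and one vector update, the same per-iteration complexity as the linear Kaczmarz method, which is the point made in the abstract.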
A Non-Local Structure Tensor Based Approach for Multicomponent Image Recovery Problems
Non-Local Total Variation (NLTV) has emerged as a useful tool in variational
methods for image recovery problems. In this paper, we extend the NLTV-based
regularization to multicomponent images by taking advantage of the Structure
Tensor (ST) resulting from the gradient of a multicomponent image. The proposed
approach allows us to penalize the non-local variations, jointly for the
different components, through various matrix norms.
To facilitate the choice of the hyper-parameters, we adopt a constrained convex
optimization approach in which we minimize the data fidelity term subject to a
constraint involving the ST-NLTV regularization. The resulting convex
optimization problem is solved with a novel epigraphical projection method.
This formulation can be efficiently implemented thanks to the flexibility
offered by recent primal-dual proximal algorithms. Experiments are carried out
for multispectral and hyperspectral images. The results demonstrate the
benefit of introducing a non-local structure tensor regularization and show
that the proposed approach leads to significant improvements in terms of
convergence speed over current state-of-the-art methods.
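The joint coupling of components through a matrix norm can be illustrated with a simplified, purely local stand-in for the non-local structure tensor (the paper's regularizer uses non-local weights and is embedded in a constrained primal-dual scheme; names and dimensions below are illustrative):

```python
import numpy as np

def st_penalty(u, ord='nuc'):
    """Sum over pixels of a matrix norm of the per-pixel Jacobian.

    u: (H, W, C) multicomponent image. For each pixel we form the C x 2
    matrix of horizontal/vertical finite differences of all components
    (a local stand-in for the non-local structure tensor) and sum a
    matrix norm of it, penalizing the components jointly.
    """
    dx = np.diff(u, axis=1, append=u[:, -1:, :])   # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :, :])   # vertical differences
    J = np.stack([dx, dy], axis=-1)                # (H, W, C, 2) Jacobians
    return sum(np.linalg.norm(J[i, j], ord=ord)
               for i in range(u.shape[0]) for j in range(u.shape[1]))

# A flat image incurs zero penalty; an edge shared by all three components
# increases it. With the nuclear norm, gradients that are aligned across
# components (a rank-1 Jacobian) are favored over misaligned ones.
flat = np.ones((8, 8, 3))
edge = flat.copy()
edge[:, 4:, :] += 1.0
print(st_penalty(flat), st_penalty(edge) > 0)
```

Choosing the matrix norm (nuclear, Frobenius, spectral) changes how strongly the components' variations are tied together, which is the design freedom the abstract refers to.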
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This well-known sparse model has a less
familiar counterpart, in analysis form, called the cosparse analysis model. In
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no definitive answer as to which constraint is the most relevant, we
investigate some conventional constraints in the model adaptation field and use
the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm, based on projected subgradients and the
Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground-truth analysis operator when provided with a clean training
set of sufficient size. We also find an analysis operator for images, using
some noisy cosparse signals, which is indeed a more realistic experiment. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions.
Comment: 29 pages, 13 figures, accepted to be published in TS
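The uniformly normalised tight frame (UNTF) constraint mentioned above can be approximated numerically by alternating two simple projections. This is a heuristic sketch, not the paper's exact procedure: one step projects onto tight frames via the polar factor of the operator, the other rescales the rows to unit norm.

```python
import numpy as np

def proj_untf(Omega, iters=500):
    """Heuristic projection onto uniformly normalised tight frames.

    Alternates (1) projection onto tight frames via the polar factor,
    so that Omega.T @ Omega = (p/n) * I, and (2) projection onto
    unit-norm rows. A sketch under assumed conventions, not the
    paper's exact algorithm.
    """
    p, n = Omega.shape
    for _ in range(iters):
        U, _, Vt = np.linalg.svd(Omega, full_matrices=False)
        Omega = np.sqrt(p / n) * (U @ Vt)                  # tight frame step
        Omega = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)  # unit rows
    return Omega

# A random 12 x 6 overcomplete operator, pushed toward the UNTF set.
rng = np.random.default_rng(3)
Omega = proj_untf(rng.standard_normal((12, 6)))

rows_ok = np.allclose(np.linalg.norm(Omega, axis=1), 1.0)
tight_ok = np.allclose(Omega.T @ Omega, 2.0 * np.eye(6), atol=1e-3)
print(rows_ok, tight_ok)
```

Constraining the learned operator to such a set rules out the trivial solutions (e.g. repeated or zero rows) that the unconstrained L1 objective would otherwise admit, which is the role of the constraint in the abstract.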