Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
The topic of recovery of a structured model given a small number of linear
observations has been well-studied in recent years. Examples include recovering
sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and
low-rank matrices, among others. In various applications in signal processing
and machine learning, the model of interest is known to be structured in
several ways at the same time, for example, a matrix that is simultaneously
sparse and low-rank.
Often norms that promote each individual structure are known and allow for
recovery using an order-wise optimal number of measurements (e.g., the ℓ₁
norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to
minimize a combination of such norms. We show that, surprisingly, if we use
multi-objective optimization with these norms, then we can do no better,
order-wise, than an algorithm that exploits only one of the present structures.
This result suggests that to fully exploit the multiple structures, we need an
entirely new convex relaxation, i.e. not one that is a function of the convex
relaxations used for each structure. We then specialize our results to the case
of sparse and low-rank matrices. We show that a nonconvex formulation of the
problem can recover the model from very few measurements, which is on the order
of the degrees of freedom of the matrix, whereas the convex problem obtained
from a combination of the ℓ₁ and nuclear norms requires many more
measurements. This proves an order-wise gap between the performance of the
convex and nonconvex recovery problems in this case. Our framework applies to
arbitrary structure-inducing norms as well as to a wide range of measurement
ensembles. This allows us to give performance bounds for problems such as
sparse phase retrieval and low-rank tensor completion.
Comment: 38 pages, 9 figures
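The combined objective discussed in the abstract can be sketched in a few lines of numpy. This is only an illustrative evaluation of the convex surrogate (ℓ₁ norm plus a weighted nuclear norm), not the paper's analysis, and the trade-off weight `lam` is a hypothetical parameter:

```python
import numpy as np

def combined_norm_objective(X, lam):
    """Convex surrogate for a simultaneously sparse and low-rank matrix:
    the l1 norm promotes entrywise sparsity, the nuclear norm promotes
    low rank. The paper shows that minimizing such combinations can do
    no better, order-wise, than exploiting a single structure."""
    l1 = np.abs(X).sum()                    # entrywise l1 norm
    nuclear = np.linalg.norm(X, ord="nuc")  # sum of singular values
    return l1 + lam * nuclear

# A rank-1 matrix with sparse factors: both structures present at once.
X = np.outer([1.0, 0.0, 0.0, 0.0], [0.0, 2.0, 0.0, 0.0])
print(combined_norm_objective(X, lam=0.5))
```
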
Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property
Compressed Sensing aims to capture attributes of k-sparse signals using
very few measurements. In the standard Compressed Sensing paradigm, the
m × n measurement matrix A is required to act as a near isometry on
the set of all k-sparse signals (the Restricted Isometry Property, or RIP).
Although it is known that certain probabilistic processes generate m × n
matrices that satisfy the RIP with high probability, there is no practical
algorithm for verifying whether a given sensing matrix A has this property,
which is crucial for the feasibility of the standard recovery algorithms. In contrast,
this paper provides simple criteria that guarantee that a deterministic sensing
matrix satisfying these criteria acts as a near isometry on an overwhelming
majority of k-sparse signals; in particular, most such signals have a unique
representation in the measurement domain. Probability still plays a critical
role, but it enters the signal model rather than the construction of the
sensing matrix. We require the columns of the sensing matrix to form a group
under pointwise multiplication. The construction allows recovery methods for
which the expected performance is sub-linear in n and only quadratic in
m; the focus on expected performance is more typical of mainstream signal
processing than the worst-case analysis that prevails in standard Compressed
Sensing. Our framework encompasses many families of deterministic sensing
matrices, including those formed from discrete chirps, Delsarte-Goethals codes,
and extended BCH codes.
Comment: 16 pages, 2 figures, to appear in IEEE Journal of Selected Topics in Signal Processing, the special issue on Compressed Sensing
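The statistical isometry idea can be illustrated empirically: rather than certifying the RIP over all k-sparse signals, one samples random k-sparse signals and checks that ‖Ax‖/‖x‖ concentrates near 1. A normalized Gaussian matrix stands in here for the paper's deterministic constructions, which is purely an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 32, 128, 4
# Gaussian stand-in; the paper instead builds deterministic matrices
# (discrete chirps, Delsarte-Goethals codes, extended BCH codes).
A = rng.standard_normal((m, n)) / np.sqrt(m)

def isometry_ratios(A, k, trials=2000):
    """Empirically check that A acts as a near isometry on *most*
    k-sparse signals (statistical isometry), in contrast to the
    worst-case RIP, which is impractical to verify."""
    m, n = A.shape
    ratios = np.empty(trials)
    for t in range(trials):
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)
        ratios[t] = np.linalg.norm(A @ x) / np.linalg.norm(x)
    return ratios

r = isometry_ratios(A, k)
print(r.mean(), r.std())  # concentrated near 1 with small spread
```
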
Efficient Low Rank Matrix Recovery With Flexible Group Sparse Regularization
In this paper, we present a novel approach to the low rank matrix recovery
(LRMR) problem by casting it as a group sparsity problem. Specifically, we
propose a flexible group sparse regularizer (FLGSR) that can group any number
of matrix columns as a unit, whereas existing methods group each column as a
unit. We prove the equivalence between the matrix rank and the FLGSR under some
mild conditions, and show that the LRMR problem with either of them has the
same global minimizers. We also establish the equivalence between the relaxed
and the penalty formulations of the LRMR problem with FLGSR. We then propose an
inexact restarted augmented Lagrangian method, which solves each subproblem by
an extrapolated linearized alternating minimization method. We analyze the
convergence of our method. Remarkably, our method linearizes each group of the
variable separately and uses the information of the previous groups to solve
the current group within the same iteration step. This strategy enables our
algorithm to achieve fast convergence and high performance, which are further
improved by the restart technique. Finally, we conduct numerical experiments on
both grayscale images and high altitude aerial images to confirm the
superiority of the proposed FLGSR and algorithm.
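The grouping idea behind FLGSR can be sketched as follows. This minimal illustration assumes consecutive column groups of equal size (the paper allows any grouping of columns) and uses the ℓ₀-type count of active groups rather than the relaxed or penalized formulations:

```python
import numpy as np

def active_groups(X, group_size, tol=1e-12):
    """Group-sparsity view of low rank (illustrative): partition the
    columns of X into consecutive blocks of `group_size` columns and
    count the blocks with nonzero Frobenius norm. Few active groups
    corresponds to low rank under the paper's equivalence result."""
    n = X.shape[1]
    blocks = [X[:, i:i + group_size] for i in range(0, n, group_size)]
    norms = np.array([np.linalg.norm(b) for b in blocks])
    return int(np.count_nonzero(norms > tol))

X = np.zeros((5, 6))
X[:, 0] = 1.0  # only the first block of columns is active
print(active_groups(X, group_size=2))  # 1 active group out of 3
```

Grouping two columns per block (rather than one, as in earlier methods) is what the "flexible" in FLGSR refers to; the block size is a tunable choice.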
Sensing Matrix Design and Sparse Recovery on the Sphere and the Rotation Group
In this paper, the goal is to design deterministic sampling patterns on the
sphere and the rotation group and, thereby, construct sensing matrices for
sparse recovery of band-limited functions. It is first shown that random
sensing matrices, which consist of random samples of Wigner D-functions,
satisfy the Restricted Isometry Property (RIP) with proper preconditioning and
can be used for sparse recovery on the rotation group. In contrast, the mutual
coherence is used to assess the performance of deterministic and regular
sensing matrices. We show that many widely used regular sampling patterns yield
sensing matrices with the worst possible mutual coherence, and therefore are
undesirable for sparse recovery. Using tools from angular momentum analysis in
quantum mechanics, we provide a new expression for the mutual coherence, which
encourages the use of regular elevation samples. We construct low coherence
deterministic matrices by fixing the regular samples on the elevation and
minimizing the mutual coherence over the azimuth-polarization choice. It is
shown that once the elevation sampling is fixed, the mutual coherence has a
lower bound that depends only on the elevation samples. This lower bound,
however, can be achieved for spherical harmonics, which leads to new sensing
matrices with better coherence than other representative regular sampling
patterns. This is reflected as well in our numerical experiments where our
proposed sampling patterns perfectly match the phase transition of random
sampling patterns.
Comment: IEEE Trans. on Signal Processing
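Mutual coherence, the figure of merit used above to compare deterministic sampling patterns, is straightforward to compute for any sensing matrix:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized
    columns of A; lower coherence is better for sparse recovery."""
    cols = A / np.linalg.norm(A, axis=0)  # normalize each column
    G = np.abs(cols.T @ cols)             # absolute Gram matrix
    np.fill_diagonal(G, 0.0)              # ignore self-correlations
    return G.max()

# Orthogonal columns achieve the minimum possible coherence.
print(mutual_coherence(np.eye(4)))  # prints 0.0
```
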
Support Constrained Generator Matrices of Gabidulin Codes in Characteristic Zero
Gabidulin codes over fields of characteristic zero were recently constructed by Augot et al., whenever the Galois group of the underlying field extension is cyclic. In parallel, interest in sparse generator matrices of Reed–Solomon and Gabidulin codes has increased lately, due to applications in distributed computation. In particular, a certain condition on the intersection of the zero entries in different rows was shown to be necessary and sufficient for the existence of the sparsest possible generator matrix of Gabidulin codes over finite fields. In this paper we complete the picture by showing that the same condition is also necessary and sufficient for Gabidulin codes over fields of characteristic zero. Our proof builds upon and extends tools from the finite-field case, combines them with a variant of the Schwartz–Zippel lemma over automorphisms, and provides a simple randomized construction algorithm whose probability of success can be arbitrarily close to one. In addition, potential applications to low-rank matrix recovery are discussed.