Sequential Compressed Sensing
Compressed sensing allows perfect recovery of sparse signals (or signals
sparse in some basis) using only a small number of random measurements.
Existing results in the compressed sensing literature have focused on
characterizing the achievable performance by bounding the number of samples
required for a given level of signal sparsity. However, using these bounds to
minimize the number of samples requires a priori knowledge of the sparsity of
the unknown signal, or the decay structure for near-sparse signals.
Furthermore, there are some popular recovery methods for which no such bounds
are known.
In this paper, we investigate an alternative scenario where observations are
available in sequence. For any recovery method, this means that there is now a
sequence of candidate reconstructions. We propose a method to estimate the
reconstruction error directly from the samples themselves, for every candidate
in this sequence. This estimate is universal in the sense that it is based only
on the measurement ensemble, and not on the recovery method or any assumed
level of sparsity of the unknown signal. With these estimates, one can now stop
observations as soon as there is reasonable certainty of either exact or
sufficiently accurate reconstruction. They also provide a way to obtain
"run-time" guarantees for recovery methods that otherwise lack a priori
performance bounds.
We investigate both continuous (e.g. Gaussian) and discrete (e.g. Bernoulli)
random measurement ensembles, both for exactly sparse and general near-sparse
signals, and with both noisy and noiseless measurements. Comment: to appear in
IEEE Transactions on Special Topics in Signal Processing
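The estimator at the heart of this approach is simple enough to sketch. The following is a minimal illustration, not the paper's exact procedure: the candidate reconstruction `x_hat` is synthetic, and the key fact used is that for a fresh Gaussian measurement vector a ~ N(0, I), E[(a^T e)^2] = ||e||^2, so the mean squared residual on fresh measurements is an unbiased estimate of the squared reconstruction error, regardless of which recovery method produced the candidate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_extra = 200, 2000            # signal length, fresh measurements for estimation

# Ground-truth sparse signal (unknown in practice).
x = np.zeros(n)
support = rng.choice(n, size=10, replace=False)
x[support] = rng.normal(size=10)

# Some candidate reconstruction from an earlier batch of measurements.
# Here it is simply the true signal plus a perturbation, since the
# estimator is agnostic to how the candidate was produced.
x_hat = x + 0.05 * rng.normal(size=n)
true_err = np.sum((x - x_hat) ** 2)

# Fresh i.i.d. Gaussian measurements y_i = a_i^T x.  Since
# E[(a_i^T e)^2] = ||e||^2 for a_i ~ N(0, I), averaging the squared
# residuals on these measurements estimates the squared error directly,
# without knowing x or its sparsity level.
A_new = rng.normal(size=(m_extra, n))
y_new = A_new @ x
err_est = np.mean((y_new - A_new @ x_hat) ** 2)
```

With enough fresh measurements the estimate concentrates around the true squared error, which is what allows a stopping rule: halt acquisition once the estimated error is small enough.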
Sparse and Unique Nonnegative Matrix Factorization Through Data Preprocessing
Nonnegative matrix factorization (NMF) has become a very popular technique in
machine learning because it automatically extracts meaningful features through
a sparse and part-based representation. However, NMF has the drawback of being
highly ill-posed, that is, there typically exist many different but equivalent
factorizations. In this paper, we introduce a completely new way of obtaining
more well-posed NMF problems whose solutions are sparser. Our technique is
based on the preprocessing of the nonnegative input data matrix, and relies on
the theory of M-matrices and the geometric interpretation of NMF. This approach
provably leads to optimal and sparse solutions under the separability
assumption of Donoho and Stodden (NIPS, 2003), and, for rank-three matrices,
makes the number of exact factorizations finite. We illustrate the
effectiveness of our technique on several image datasets. Comment: 34 pages,
11 figures
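The paper's M-matrix-based preprocessing is not reproduced here; as context for the problem it addresses, the following is a minimal baseline NMF on a synthetic nonnegative matrix, using the classical Lee-Seung multiplicative updates. This is the plain, ill-posed formulation that such preprocessing aims to make better posed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic nonnegative data with an exact rank-3 factorization.
m, n, r = 20, 15, 3
V = rng.random((m, r)) @ rng.random((r, n))

# Random nonnegative initialization of the factors.
W = rng.random((m, r))
H = rng.random((r, n))
eps = 1e-9  # guards against division by zero

err_init = np.linalg.norm(V - W @ H)
for _ in range(300):
    # Lee-Seung multiplicative updates: the factors stay nonnegative and
    # the Frobenius objective ||V - W H||_F is non-increasing.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err_final = np.linalg.norm(V - W @ H)
```

Different random initializations of W and H generally converge to different but equally good factorizations, which is exactly the ill-posedness the abstract refers to.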
Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections
In the framework of multidimensional Compressed Sensing (CS), we introduce an
analytical reconstruction formula that allows one to recover an Nth-order
data tensor from a reduced set of multi-way compressive measurements by
exploiting its low multilinear-rank structure. Moreover, we show that an
interesting property of
multi-way measurements allows us to build the reconstruction based on
compressive linear measurements taken only in two selected modes, independently
of the tensor order N. In addition, it is proved that, in the matrix case and
in a particular case with 3rd-order tensors where the same 2D sensor operator
is applied to all mode-3 slices, the proposed reconstruction
is stable in the sense that the approximation
error is comparable to the one provided by the best low-multilinear-rank
approximation, where a threshold parameter controls the
approximation error. Through the analysis of the upper bound of the
approximation error, we show that, in the 2D case, an optimal value for the
threshold parameter exists, which is confirmed by our
simulation results. On the other hand, our experiments on 3D datasets show that
very good reconstructions are obtained with a fixed threshold value, which means that this
parameter does not need to be tuned. Our extensive simulation results
demonstrate the stability and robustness of the method when it is applied to
real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based
CS methods specialized for multidimensional signals is also included. A
very attractive characteristic of the proposed method is that it provides a
direct computation, i.e., it is non-iterative, in contrast to all existing
sparsity based CS algorithms, thus providing super fast computations, even for
large datasets. Comment: Submitted to IEEE Transactions on Signal Processing
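The idea of measuring in only two modes can be illustrated in the matrix (2nd-order) case. The sketch below is an illustration of the principle, not the paper's exact formula: for a rank-r matrix X, compressive measurements of its rows (Y1 = A X) and of its columns (Y2 = X B), with k >= r measurements per mode, determine X exactly via a direct, non-iterative expression.

```python
import numpy as np

rng = np.random.default_rng(2)

# Low-multilinear-rank "data tensor" in the matrix case: rank r = 3.
m, n, r, k = 30, 20, 3, 6        # k >= r compressive measurements per mode
X = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))

# Compressive measurements in two modes only.
A = rng.normal(size=(k, m))      # mode-1 sensing operator
B = rng.normal(size=(n, k))      # mode-2 sensing operator
Y1 = A @ X                       # mode-1 measurements (k x n)
Y2 = X @ B                       # mode-2 measurements (m x k)

# Direct (non-iterative) reconstruction from the two sketches: for
# generic A, B and rank(X) <= k, Y2 (A Y2)^+ Y1 recovers X exactly.
X_hat = Y2 @ np.linalg.pinv(A @ Y2) @ Y1
```

The exactness follows from a full-rank-factorization argument: writing X = U S V^T with rank r, the pseudoinverse of A Y2 = (A U) S (V^T B) splits across the two generically full-rank factors, and everything but X cancels.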
Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections
Block projections have been used, in [Eberly et al. 2006], to obtain an
efficient algorithm to find solutions for sparse systems of linear equations. A
bound of softO(n^(2.5)) machine operations is obtained assuming that the input
matrix can be multiplied by a vector with constant-sized entries in softO(n)
machine operations. Unfortunately, the correctness of this algorithm depends on
the existence of efficient block projections, and this has been conjectured. In
this paper we establish the correctness of the algorithm from [Eberly et al.
2006] by proving the existence of efficient block projections over sufficiently
large fields. We demonstrate the usefulness of these projections by deriving
improved bounds for the cost of several matrix problems, considering, in
particular, "sparse" matrices that can be multiplied by a vector using
softO(n) field operations. We show how to compute the inverse of a sparse
matrix over a field F using an expected number of softO(n^(2.27)) operations in
F. A basis for the null space of a sparse matrix, and a certification of its
rank, are obtained at the same cost. An application to Kaltofen and Villard's
Baby-Steps/Giant-Steps algorithms for the determinant and Smith Form of an
integer matrix yields algorithms requiring softO(n^(2.66)) machine operations.
The derived algorithms are all probabilistic, of the Las Vegas type.
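The black-box model used here, where the matrix is accessed only through matrix-vector products, is the setting of Wiedemann's algorithm, on which [Eberly et al. 2006] builds. The following is a minimal, unblocked sketch over GF(p) (single random projection u rather than the paper's block projections): the Krylov sequence u^T A^i b is fed to Berlekamp-Massey, and the resulting minimal recurrence yields a solution of A x = b using only matvecs.

```python
import random

p = 1000003                       # prime modulus
random.seed(3)
n = 6                             # small demo dimension

def matvec(M, v):
    # Black-box access: the solver only ever multiplies M by a vector.
    return [sum(a * x for a, x in zip(row, v)) % p for row in M]

def berlekamp_massey(s):
    # Shortest recurrence C with s_i + sum_{j>=1} C_j s_{i-j} = 0 (mod p).
    C, B, L, m, b = [1], [1], 0, 1, 1
    for i in range(len(s)):
        d = s[i] % p
        for j in range(1, L + 1):
            d = (d + (C[j] if j < len(C) else 0) * s[i - j]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p
        newC = C + [0] * max(0, len(B) + m - len(C))
        for j in range(len(B)):
            newC[j + m] = (newC[j + m] - coef * B[j]) % p
        if 2 * L <= i:
            B, b, L, m = C, d, i + 1 - L, 1
        else:
            m += 1
        C = newC
    return (C + [0] * (L + 1))[:L + 1], L

# Random system over GF(p) (nonsingular with overwhelming probability).
A = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
b_vec = [random.randrange(p) for _ in range(n)]
u = [random.randrange(p) for _ in range(n)]

# Krylov sequence s_i = u^T A^i b, i = 0..2n-1, via matvecs only.
s, v = [], b_vec
for _ in range(2 * n):
    s.append(sum(ui * vi for ui, vi in zip(u, v)) % p)
    v = matvec(A, v)

C, L = berlekamp_massey(s)
# The reversed connection polynomial sum_i C_i x^(L-i) annihilates b
# (with high probability over u), so
#   x = -C_L^{-1} * sum_{i<L} C_i A^(L-1-i) b
# solves A x = b.  Evaluate the sum by Horner with matvecs.
w = [(C[0] * bi) % p for bi in b_vec]
for i in range(1, L):
    Aw = matvec(A, w)
    w = [(Aw[j] + C[i] * b_vec[j]) % p for j in range(n)]
x = [(-pow(C[L], p - 2, p) * wj) % p for wj in w]
```

The block projections of the paper replace the single vectors u and b with blocks of vectors, which is what enables the softO(n^(2.5)) and softO(n^(2.27)) operation counts quoted above.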
Pairwise MRF Calibration by Perturbation of the Bethe Reference Point
We investigate different ways of generating approximate solutions to the
pairwise Markov random field (MRF) selection problem. We focus mainly on the
inverse Ising problem, but discuss also the somewhat related inverse Gaussian
problem because both types of MRF are suitable for inference tasks with the
belief propagation algorithm (BP) under certain conditions. Our approach
consists in taking a Bethe mean-field solution obtained with a maximum
spanning tree (MST) of pairwise mutual information, referred to as the
\emph{Bethe reference point}, for further perturbation procedures. We consider
three different ways following this idea: in the first one, we select and
calibrate iteratively the optimal links to be added starting from the Bethe
reference point; the second one is based on the observation that the natural
gradient can be computed analytically at the Bethe point; in the third one,
assuming no local fields and using a low-temperature expansion, we develop a dual
loop joint model based on a well chosen fundamental cycle basis. We indeed
identify a subclass of planar models, which we refer to as \emph{Bethe-dual
graph models}, having possibly many loops, but characterized by a singly
connected dual factor graph, for which the partition function and the linear
response can be computed exactly in, respectively, O(N) and O(N^2) operations,
thanks to a dual weight propagation (DWP) message passing procedure that we set
up. When restricted to this subclass of models, the inverse Ising problem, being
convex, becomes tractable at any temperature. Experimental tests on various
datasets with refined L0 or L1 regularization procedures indicate that
these approaches may be competitive and useful alternatives to existing ones.
Comment: 54 pages, 8 figures; section 5 and refs added in V
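The starting point, a maximum spanning tree of pairwise mutual information with couplings calibrated to the pair correlations, can be sketched in the zero-field Ising case, where on a tree the correlation between two spins is the product of tanh(J_e) along the connecting path. The sketch below uses synthetic exact correlations in place of empirical ones; the tree topology and coupling values are illustrative.

```python
import math
from itertools import combinations

# Zero-field Ising model on a known tree: edge -> coupling J.
true_edges = {(0, 1): 0.9, (1, 2): 0.7, (1, 3): 0.5, (3, 4): 0.8}
nodes = range(5)

def path(i, j, edges):
    # Unique path between i and j in the tree (tiny DFS).
    adj = {}
    for (a, b) in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    stack = [(i, [i])]
    while stack:
        v, pth = stack.pop()
        if v == j:
            return pth
        for w in adj[v]:
            if w not in pth:
                stack.append((w, pth + [w]))

def corr(i, j):
    # On a zero-field tree, <s_i s_j> = prod of tanh(J_e) along the path.
    pth = path(i, j, true_edges)
    c = 1.0
    for a, b in zip(pth, pth[1:]):
        c *= math.tanh(true_edges.get((a, b), true_edges.get((b, a), 0.0)))
    return c

def mutual_info(c):
    # MI of two +/-1 spins with zero means and correlation c.
    return 0.5 * ((1 + c) * math.log(1 + c) + (1 - c) * math.log(1 - c))

# Maximum spanning tree of pairwise mutual information (Kruskal).
pairs = sorted(combinations(nodes, 2), key=lambda e: -mutual_info(corr(*e)))
parent = list(nodes)
def find(v):
    while parent[v] != v:
        v = parent[v]
    return v
mst, couplings = set(), {}
for (i, j) in pairs:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        mst.add((i, j))
        # Tree (Bethe) coupling consistent with the pair correlation.
        couplings[(i, j)] = math.atanh(corr(i, j))
```

Because mutual information is increasing in |c| and correlations shrink multiplicatively along paths, the MST recovers the true tree, and atanh of the edge correlations recovers the couplings exactly; this tree solution is the "Bethe reference point" that the three perturbation procedures then refine.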