Sparsity optimization and RRSP-based theory for 1-bit compressive sensing
Because only a few significant components capture the key information of a signal, acquiring a sparse representation of the signal can be interpreted as finding a sparsest solution to an underdetermined system of linear equations. Theoretical results obtained from studying the sparsest solution to a system of linear equations provide the foundation for many practical problems in signal and image processing, sampling theory, statistical and machine learning, and error correction.
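To make the sparsest-solution problem concrete, its standard ℓ1-norm relaxation (basis pursuit, min ||x||_1 subject to Ax = b) can be written as a linear program by splitting x into nonnegative parts. The sketch below is an illustrative instance with random data and SciPy's LP solver; the dimensions and solver choice are assumptions, not details from the thesis.

```python
# Sketch: recovering a sparse solution of an underdetermined system Ax = b
# via l1-minimization (basis pursuit), the usual convex surrogate for l0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                       # measurements, dimension, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# min ||x||_1 s.t. Ax = b, as an LP with x = u - v and u, v >= 0:
#   min 1'u + 1'v  s.t.  A(u - v) = b,  u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]             # recombine the split variables
```

With Gaussian measurements and sparsity well below the phase-transition threshold, the LP solution typically coincides with the sparsest solution exactly.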
The first contribution of this thesis is the development of sufficient conditions for the uniqueness of solutions of the partial ℓ0-minimization problem, in which only part of the solution is sparse. In particular, standard ℓ0-minimization is a special case of partial ℓ0-minimization. To study and develop uniqueness conditions for the partial sparsest solution, several concepts, such as the ℓ0-induced quasi-norm, maximal scaled spark and maximal scaled mutual coherence, are introduced.
The main contribution of this thesis is the development of a framework for 1-bit compressive sensing and of support recovery theories based on the restricted range space property (RRSP). 1-bit compressive sensing is an extreme case of compressive sensing. We show that the 1-bit framework can be reformulated equivalently as an ℓ0-minimization problem with linear equality and inequality constraints. We establish a decoding method, called 1-bit basis pursuit, to attack this 1-bit ℓ0-minimization problem. Support recovery theories via 1-bit basis pursuit are developed through the restricted range space property of transposed sensing matrices.
In the last part of this thesis, we study the numerical performance of 1-bit basis pursuit. We present simulation results demonstrating that 1-bit basis pursuit achieves support recovery, approximate sparse recovery and cardinality recovery with Gaussian and Bernoulli matrices. Owing to the single-bit-per-measurement assumption, the sensing matrix need not be underdetermined. Furthermore, we introduce the truncated 1-bit measurements method and the reweighted 1-bit ℓ1-minimization method to further enhance the numerical performance of 1-bit basis pursuit.
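As a rough illustration of this style of decoding, the sketch below implements a simplified sign-consistent ℓ1 decoder in the spirit of 1-bit basis pursuit: only the signs of the linear measurements are kept, and a vector of minimal ℓ1 norm consistent with those signs is sought by linear programming. The normalisation constraint used here to exclude the zero vector, and the problem sizes, are illustrative assumptions rather than the thesis's exact formulation.

```python
# Sketch: 1-bit decoding from sign-only measurements y = sign(Ax).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 60, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:2] = [1.0, -0.5]                  # a 2-sparse signal (illustrative)
y = np.sign(A @ x_true)                   # 1-bit (sign-only) measurements

# Decode: min ||x||_1  s.t.  y_i * (a_i . x) >= 0  (sign consistency)
# and sum_i y_i * (a_i . x) = m (a normalisation excluding x = 0).
# Split x = u - v with u, v >= 0 to obtain an LP.
YA = y[:, None] * A
c = np.ones(2 * n)
A_ub = np.hstack([-YA, YA])               # encodes -y_i (a_i . x) <= 0
b_ub = np.zeros(m)
row = YA.sum(axis=0)
A_eq = np.concatenate([row, -row])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[float(m)],
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
```

Since sign measurements are scale-invariant, only the direction of the signal can be recovered; the equality constraint fixes an arbitrary scale.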
Linear Generalized Nash Equilibrium Problems
In this thesis, generalized Nash equilibrium problems under linearity assumptions (LGNEPs) are introduced and studied. By exploiting their special structure, theoretical and algorithmic results can be obtained that go far beyond those available for general GNEPs.
The Convex Geometry of Linear Inverse Problems
In applications throughout science and engineering one is often faced with
the challenge of solving an ill-posed inverse problem, where the number of
available measurements is smaller than the dimension of the model to be
estimated. However, in many practical situations of interest, models are
constrained structurally so that they only have a few degrees of freedom
relative to their ambient dimension. This paper provides a general framework to
convert notions of simplicity into convex penalty functions, resulting in
convex optimization solutions to linear, underdetermined inverse problems. The
class of simple models considered are those formed as the sum of a few atoms
from some (possibly infinite) elementary atomic set; examples include
well-studied cases such as sparse vectors and low-rank matrices, as well as
several others including sums of a few permutation matrices, low-rank tensors,
orthogonal matrices, and atomic measures. The convex programming formulation is
based on minimizing the norm induced by the convex hull of the atomic set; this
norm is referred to as the atomic norm. The facial structure of the atomic norm
ball carries a number of favorable properties that are useful for recovering
simple models, and an analysis of the underlying convex geometry provides sharp
estimates of the number of generic measurements required for exact and robust
recovery of models from partial information. These estimates are based on
computing the Gaussian widths of tangent cones to the atomic norm ball. When
the atomic set has algebraic structure the resulting optimization problems can
be solved or approximated via semidefinite programming. The quality of these
approximations affects the number of measurements required for recovery. Thus
this work extends the catalog of simple models that can be recovered from
limited linear information via tractable convex programming.
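For reference, the atomic norm described in the abstract admits the following standard formulation (the notation here follows common convention; the precise normalisation assumptions on the atomic set are in the paper itself):

```latex
\|x\|_{\mathcal{A}}
  \;=\; \inf\{\, t > 0 \;:\; x \in t \cdot \operatorname{conv}(\mathcal{A}) \,\}
  \;=\; \inf\Big\{ \sum_{a \in \mathcal{A}} c_a \;:\; x = \sum_{a \in \mathcal{A}} c_a\, a,\ c_a \ge 0 \Big\}.
```

Taking the atomic set to be the signed standard basis vectors $\{\pm e_i\}$ yields the ℓ1 norm, and taking it to be the unit-norm rank-one matrices yields the nuclear norm, matching the sparse-vector and low-rank-matrix examples mentioned above.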
Quantitative analysis of algorithms for compressed signal recovery
Compressed Sensing (CS) is an emerging paradigm in which signals are recovered from undersampled
nonadaptive linear measurements taken at a rate proportional to the signal's true
information content as opposed to its ambient dimension. The resulting problem consists in finding a sparse solution to an underdetermined system of linear equations. It has now been
established, both theoretically and empirically, that certain optimization algorithms are able
to solve such problems. Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2007),
which is the focus of this thesis, is an established CS recovery algorithm which is known to
be effective in practice, both in terms of recovery performance and computational efficiency.
However, theoretical analysis of IHT to date suffers from two drawbacks: state-of-the-art worst-case
recovery conditions have not yet been quantified in terms of the sparsity/undersampling
trade-off, and average-case analysis is needed to understand the behaviour
of the algorithm in practice.
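For orientation, the IHT iteration analysed in this thesis takes a gradient step on the least-squares objective and then keeps only the s largest-magnitude entries. The sketch below is a minimal implementation with unit stepsize, with the matrix rescaled so its spectral norm is below one (the standard sufficient condition for convergence in Blumensath and Davies' analysis); the dimensions, seed and signal model are illustrative assumptions.

```python
# Sketch of plain Iterative Hard Thresholding: x <- H_s(x + A'(y - Ax)).
import numpy as np

def iht(A, y, s, iters=500):
    """Plain IHT with unit stepsize; H_s keeps the s largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x)            # gradient step on ||y - Ax||^2 / 2
        keep = np.argsort(np.abs(g))[-s:]    # indices of the s largest entries
        x = np.zeros_like(g)
        x[keep] = g[keep]                    # hard thresholding H_s
    return x

rng = np.random.default_rng(0)
m, n, s = 60, 100, 3
A = rng.standard_normal((m, n))
A /= 1.01 * np.linalg.norm(A, 2)             # rescale so ||A||_2 < 1
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.choice([-1.0, 1.0], s)
y = A @ x_true                               # noiseless measurements
x_hat = iht(A, y, s)
```

A fixed point of this iteration is any x with x = H_s(x + A'(y - Ax)), which is exactly the object the recovery analysis in this thesis studies.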
In this thesis, we present a new recovery analysis of IHT, which considers the fixed points of
the algorithm. In the context of arbitrary matrices, we derive a condition guaranteeing convergence
of IHT to a fixed point, and a condition guaranteeing that all fixed points are 'close' to
the underlying signal. If both conditions are satisfied, signal recovery is therefore guaranteed.
Next, we analyse these conditions in the case of Gaussian measurement matrices, exploiting
the realistic average-case assumption that the underlying signal and measurement matrix are
independent. We obtain asymptotic phase transitions in a proportional-dimensional framework,
quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. By generalizing
the notion of fixed points, we extend our analysis to the variable stepsize Normalised IHT
(NIHT) (Blumensath and Davies, 2010). For both stepsize schemes, comparison with previous
results within this framework shows a substantial quantitative improvement.
We also extend our analysis to a related algorithm which exploits the assumption that the
underlying signal exhibits tree-structured sparsity in a wavelet basis (Baraniuk et al., 2010).
We obtain recovery conditions for Gaussian matrices in a simplified proportional-dimensional
asymptotic, deriving bounds on the oversampling rate relative to the sparsity for which recovery
is guaranteed. Our results, which are the first in the phase transition framework for tree-based
CS, show a further significant improvement over results for the standard sparsity model. We
also propose a dynamic programming algorithm which is guaranteed to compute an exact tree
projection in low-order polynomial time.