The Convex Geometry of Linear Inverse Problems
In applications throughout science and engineering one is often faced with
the challenge of solving an ill-posed inverse problem, where the number of
available measurements is smaller than the dimension of the model to be
estimated. However, in many practical situations of interest, models are
structurally constrained so that they have only a few degrees of freedom
relative to their ambient dimension. This paper provides a general framework to
convert notions of simplicity into convex penalty functions, resulting in
convex optimization solutions to linear, underdetermined inverse problems. The
class of simple models considered consists of those formed as the sum of a few atoms
from some (possibly infinite) elementary atomic set; examples include
well-studied cases such as sparse vectors and low-rank matrices, as well as
several others, including sums of a few permutation matrices, low-rank tensors,
orthogonal matrices, and atomic measures. The convex programming formulation is
based on minimizing the norm induced by the convex hull of the atomic set; this
norm is referred to as the atomic norm. The facial structure of the atomic norm
ball carries a number of favorable properties that are useful for recovering
simple models, and an analysis of the underlying convex geometry provides sharp
estimates of the number of generic measurements required for exact and robust
recovery of models from partial information. These estimates are based on
computing the Gaussian widths of tangent cones to the atomic norm ball. When
the atomic set has algebraic structure the resulting optimization problems can
be solved or approximated via semidefinite programming. The quality of these
approximations affects the number of measurements required for recovery. Thus
this work extends the catalog of simple models that can be recovered from
limited linear information via tractable convex programming.
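As a concrete instance of this framework: when the atomic set consists of the
signed standard basis vectors, the atomic norm is the l1 norm, and recovery
amounts to l1 minimization subject to the measurement constraints. The sketch
below (in Python, assuming numpy and cvxpy are available; the dimensions and
sparsity level are illustrative choices, not taken from the paper) shows this
simplest case.

    # l1 minimization as the simplest instance of atomic norm minimization:
    # the atoms are the signed standard basis vectors, so the norm induced
    # by the convex hull of the atomic set is the l1 norm.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m, s = 200, 80, 5                           # ambient dim, measurements, sparsity
    x_true = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x_true[support] = rng.standard_normal(s)       # a "simple" model: sum of s atoms
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # generic Gaussian measurements
    y = A @ x_true

    x = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
    problem.solve()
    print(np.linalg.norm(x.value - x_true))        # near zero when m is large enough

The Gaussian-width analysis in the paper predicts the threshold number of
generic measurements at which such a program succeeds; for s-sparse vectors in
R^n this threshold scales on the order of s log(n/s).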
Novel min-max reformulations of Linear Inverse Problems
In this article, we consider the class of so-called ill-posed Linear
Inverse Problems (LIP), which refers to the task of recovering an entire
signal from relatively few random linear measurements. Such problems arise
in a variety of settings, with applications ranging from medical image
processing to recommender systems. We propose a slightly generalized version
of the error constrained linear inverse problem and obtain a novel and
equivalent convex-concave min-max reformulation by providing an exposition of
its convex geometry. Saddle points of the min-max problem are completely
characterized in terms of a solution to the LIP, and vice versa. Applying
simple saddle-point-seeking ascent-descent type algorithms to solve the min-max
problems provides novel and simple algorithms to find a solution to the LIP.
Moreover, the reformulation of an LIP as the min-max problem provided in this
article is crucial in developing methods to solve the dictionary learning
problem with almost-sure recovery constraints.
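The abstract does not reproduce the paper's exact min-max reformulation, so
the sketch below only illustrates the general idea of solving an LIP through a
convex-concave saddle problem: the Lagrangian form of equality-constrained l1
minimization, attacked with a plain subgradient descent-ascent loop (the
function name, step sizes, and iteration counts are illustrative assumptions).

    # Illustrative saddle-point scheme for  min_x ||x||_1  s.t.  A x = y,
    # written as the convex-concave Lagrangian min-max problem
    #     min_x max_lam  ||x||_1 + lam^T (A x - y).
    import numpy as np

    def saddle_point_lip(A, y, iters=50000, eta=0.5):
        m, n = A.shape
        x, lam = np.zeros(n), np.zeros(m)
        x_avg = np.zeros(n)
        for t in range(1, iters + 1):
            step = eta / np.sqrt(t)           # diminishing step sizes
            g_x = np.sign(x) + A.T @ lam      # subgradient of the Lagrangian in x
            g_lam = A @ x - y                 # gradient of the Lagrangian in lam
            x = x - step * g_x                # descent on the primal variable
            lam = lam + step * g_lam          # ascent on the dual variable
            x_avg += (x - x_avg) / t          # running (ergodic) average
        return x_avg

For convex-concave problems, the ergodic average of such ascent-descent
iterates converges to a saddle point, which, by the characterization in the
paper, corresponds to a solution of the LIP.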
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
We provide a comprehensive study of the convergence of the forward-backward algorithm under suitable geometric conditions, such as conditioning or Łojasiewicz properties. These geometric notions are usually local in nature and may fail to describe the fine geometry of objective
functions relevant in inverse problems and signal processing, which behave well on manifolds or on sets that are open with respect to a weak topology. Motivated by this observation, we revisit these
geometric notions over arbitrary sets. In turn, this allows us to present several new results, as well
as to collect in a unified view a variety of results scattered in the literature. Our contributions include
the analysis of infinite-dimensional convex minimization problems, showing the first Łojasiewicz
inequality for a quadratic function associated with a compact operator, and the derivation of new linear rates for problems arising from inverse problems with low-complexity priors. Our approach
allows us to establish unexpected connections between geometry and a priori conditions in inverse
problems, such as source conditions or restricted isometry properties.
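For reference, the forward-backward algorithm alternates an explicit gradient
(forward) step on the smooth part of the objective with a proximal (backward)
step on the nonsmooth part. A minimal sketch for a typical low-complexity
prior, min_x (1/2)||Ax - y||^2 + tau*||x||_1, follows (parameter choices here
are illustrative, not taken from the paper).

    # Forward-backward (proximal gradient) iteration for l1-regularized
    # least squares; the prox of the l1 norm is soft-thresholding.
    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def forward_backward(A, y, tau, iters=500):
        L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of grad f
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)                    # forward (gradient) step
            x = soft_threshold(x - grad / L, tau / L)   # backward (proximal) step
        return x

Under conditioning or Łojasiewicz-type assumptions of the kind studied in the
paper, such iterations can converge linearly rather than at the generic
worst-case sublinear rate.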
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Terracini Convexity
We present a generalization of the notion of neighborliness to non-polyhedral convex cones. Although a definition of neighborliness is available in the non-polyhedral case in the literature, it is fairly restrictive as it requires all the low-dimensional faces to be polyhedral. Our approach is more flexible and includes, for example, the cone of positive semidefinite matrices as a special case (this cone is not neighborly in general). We term our generalization Terracini convexity due to its conceptual similarity with the conclusion of Terracini's lemma from algebraic geometry. Polyhedral cones are Terracini convex if and only if they are neighborly. More broadly, we derive many families of non-polyhedral Terracini convex cones based on neighborly cones, linear images of cones of positive semidefinite matrices, and derivative relaxations of Terracini convex hyperbolicity cones. As a demonstration of the utility of our framework in the non-polyhedral case, we give a characterization based on Terracini convexity of the tightness of semidefinite relaxations for certain inverse problems.
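For context, the conclusion of Terracini's lemma that motivates the name is
the following classical statement from algebraic geometry (standard background,
not quoted from the paper): for general points p_1, ..., p_k on a projective
variety X and a general point z of their linear span, the tangent space to the
k-th secant variety of X at z is the span of the tangent spaces at the
individual points,

    T_z \sigma_k(X) = \langle T_{p_1} X, \dots, T_{p_k} X \rangle .

Loosely speaking, Terracini convexity requires an analogous spanning relation
among the tangent objects associated with the faces of a convex cone.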
Revisiting maximum-a-posteriori estimation in log-concave models
Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation
methodology in imaging sciences, where high dimensionality is often addressed
by using Bayesian models that are log-concave and whose posterior mode can be
computed efficiently by convex optimisation. Despite its success and wide
adoption, MAP estimation is not theoretically well understood yet. The
prevalent view in the community is that MAP estimation is not proper Bayesian
estimation in a decision-theoretic sense because it does not minimise a
meaningful expected loss function (unlike the minimum mean squared error (MMSE)
estimator that minimises the mean squared loss). This paper addresses this
theoretical gap by presenting a decision-theoretic derivation of MAP estimation
in Bayesian models that are log-concave. A main novelty is that our analysis is
based on differential geometry, and proceeds as follows. First, we use the
underlying convex geometry of the Bayesian model to induce a Riemannian
geometry on the parameter space. We then use differential geometry to identify
the so-called natural or canonical loss function to perform Bayesian point
estimation in that Riemannian manifold. For log-concave models, this canonical
loss is the Bregman divergence associated with the negative log posterior
density. We then show that the MAP estimator is the only Bayesian estimator
that minimises the expected canonical loss, and that the posterior mean or MMSE
estimator minimises the dual canonical loss. We also study the performance of
MAP and MMSE estimation in large-scale settings and establish a universal bound
on the expected canonical error as a function of dimension, offering new
insights into the good performance observed in convex problems. These results
provide a new understanding of MAP and MMSE estimation in log-concave settings,
and of the multiple roles that convex geometry plays in imaging problems.
Comment: Accepted for publication in SIAM Imaging Sciences
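For reference, the canonical loss described in this abstract is a Bregman
divergence. For a convex differentiable function f (here, the negative log
posterior density of the log-concave model), the Bregman divergence between u
and x is the standard expression

    D_f(u, x) = f(u) - f(x) - \langle \nabla f(x),\, u - x \rangle \ge 0 ,

which vanishes when u = x. As the abstract states, the MAP estimator is the
Bayesian estimator minimising the expected canonical (Bregman) loss, while the
MMSE estimator minimises the dual canonical loss.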