Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, increasing the
interpretability of the results and leading to better recovery
performance. To better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
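As a concrete illustration (a sketch, not taken from the chapter itself): the convex relaxation of the non-overlapping group sparse model is the group-lasso penalty, whose proximal operator is block soft-thresholding. The function name and example values below are assumptions for demonstration.

```python
import numpy as np

def prox_group_lasso(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (block soft-thresholding).

    Assumes non-overlapping groups; `groups` is a list of index arrays.
    A whole block survives or is zeroed together, which is how the
    relaxation encodes the interdependency between nonzero components.
    """
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            # shrink the entire block toward zero by a common factor
            out[g] = (1.0 - lam / norm) * x[g]
    return out

# Example: two groups of three coefficients each
x = np.array([3.0, 4.0, 0.0, 0.1, -0.1, 0.05])
groups = [np.arange(0, 3), np.arange(3, 6)]
z = prox_group_lasso(x, groups, lam=1.0)
# the first (strong) group is shrunk, the second (weak) group is zeroed entirely
```

Used inside a proximal gradient scheme, this single operation is what makes the convex group sparse problem efficiently solvable.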
Shape Calculus for Shape Energies in Image Processing
Many image processing problems are naturally expressed as energy minimization
or shape optimization problems, in which the free variable is a shape, such as
a curve in 2d or a surface in 3d. Examples include image segmentation, multiview
stereo reconstruction, and geometric interpolation from data point clouds. To
obtain the solution of such a problem, one usually resorts to an iterative
approach, a gradient descent algorithm, which updates a candidate shape by
gradually deforming it into the optimal shape. Computing the gradient descent
updates requires the knowledge of the first variation of the shape energy, or
rather the first shape derivative. In addition to the first shape derivative,
one can also utilize the second shape derivative and develop a Newton-type
method with faster convergence. Unfortunately, the knowledge of shape
derivatives for shape energies in image processing is patchy. The second shape
derivatives are known for only two of the energies in the image processing
literature, and many results for the first shape derivative are limited, in the
sense that they are either for curves on planes, or developed for a specific
representation of the shape or for a very specific functional form in the shape
energy. In this work, these limitations are overcome and the first and second
shape derivatives are computed for large classes of shape energies that are
representative of the energies found in image processing. Many of the formulas
we obtain are new and some generalize existing results. These results
are valid for general surfaces in any number of dimensions. This work is
intended to serve as a cookbook for researchers who deal with shape energies
for various applications in image processing and need to develop algorithms to
compute the shapes minimizing these energies.
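To illustrate the gradient descent idea on shapes (a minimal sketch, not one of the paper's derivations): for the simplest shape energy, the length of a curve, the first shape derivative is the curvature, so an explicit descent step on a closed polygonal curve reduces to moving each vertex along a discrete Laplacian, toward the midpoint of its neighbours.

```python
import numpy as np

def length(curve):
    # total length of a closed polygonal curve (n x 2 array of vertices)
    edges = np.roll(curve, -1, axis=0) - curve
    return np.sum(np.linalg.norm(edges, axis=1))

def descent_step(curve, tau=0.1):
    """One explicit gradient descent step for the length energy.

    The shape gradient of length is the curvature normal; its discrete
    analogue is the umbrella Laplacian, i.e. the displacement of each
    vertex from the midpoint of its two neighbours.
    """
    laplacian = 0.5 * (np.roll(curve, 1, axis=0)
                       + np.roll(curve, -1, axis=0)) - curve
    return curve + tau * laplacian

# A wavy circle shrinks and smooths under the flow: the energy decreases
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
curve = np.c_[np.cos(t), np.sin(t)] * (1 + 0.1 * np.sin(7 * t))[:, None]
L0 = length(curve)
for _ in range(50):
    curve = descent_step(curve)
```

A Newton-type method of the kind discussed in the abstract would additionally use the second shape derivative to precondition this step.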
Parametric Level Set Methods for Inverse Problems
In this paper, a parametric level set method for reconstruction of obstacles
in general inverse problems is considered. General evolution equations for the
reconstruction of unknown obstacles are derived in terms of the underlying
level set parameters. We show that using the appropriate form of parameterizing
the level set function results in a significantly lower-dimensional problem,
which bypasses many difficulties with traditional level set methods, such as
regularization, re-initialization, and the use of signed distance functions.
Moreover, we show that from a computational point of view, low order
representation of the problem paves the path for easier use of Newton and
quasi-Newton methods. Specifically for the purposes of this paper, we
parameterize the level set function in terms of adaptive compactly supported
radial basis functions, which, used in the proposed manner, provide flexibility
in representing a larger class of shapes with fewer terms. They also provide a
"narrow-banding" advantage, which can further reduce the number of active
unknowns at each step of the evolution. The performance of the proposed
approach is examined in three examples of inverse problems: electrical
resistance tomography, X-ray computed tomography, and diffuse optical
tomography.
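As an illustrative sketch (the function names and parameter values below are assumptions for demonstration, not the paper's code): a level set function built from Wendland C2 compactly supported RBFs. Points outside the support of every basis function contribute exactly zero, which is the "narrow-banding" effect that keeps most unknowns inactive.

```python
import numpy as np

def wendland_c2(r):
    # Wendland C2 compactly supported RBF: (1 - r)_+^4 (4r + 1)
    return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

def level_set(points, centers, weights, radii):
    """Parametric level set phi(x) = sum_j w_j psi(||x - c_j|| / a_j).

    The shape is the region {phi > c0} for some level c0; the unknowns of
    the inverse problem are the few weights (and possibly centers/radii),
    not a dense grid of level set values.
    """
    diff = points[:, None, :] - centers[None, :, :]      # (n, m, 2)
    r = np.linalg.norm(diff, axis=-1) / radii[None, :]   # scaled distances
    return wendland_c2(r) @ weights                      # (n,)

# One bump of support radius 1 at the origin: phi decays to exactly zero
# beyond the support, so far-away points are inactive unknowns
pts = np.array([[0.0, 0.0], [0.5, 0.0], [2.0, 0.0]])
phi = level_set(pts,
                centers=np.array([[0.0, 0.0]]),
                weights=np.array([1.0]),
                radii=np.array([1.0]))
```

Because phi is smooth in the weights, Newton and quasi-Newton updates on this low-dimensional parameter vector are straightforward, in contrast to evolving a full grid-based level set.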