A function space framework for structural total variation regularization with applications in inverse problems
In this work, we introduce a function space setting for a wide class of
structural/weighted total variation (TV) regularization methods motivated by
their applications in inverse problems. In particular, we consider a
regularizer that is the appropriate lower semi-continuous envelope (relaxation)
of a suitable total variation type functional initially defined for
sufficiently smooth functions. We study examples where this relaxation can be
expressed explicitly, and we also provide refinements for weighted total
variation for a wide range of weights. Since an integral characterization of
the relaxation in function space is not always available, we show that, for a
rather general class of linear inverse problems, instead of the
classical Tikhonov regularization problem, one can equivalently solve a
saddle-point problem where no a priori knowledge of an explicit formulation of
the structural TV functional is needed. In particular, motivated by concrete
applications, we deduce corresponding results for linear inverse problems with
norm and Poisson log-likelihood data discrepancy terms. Finally, we provide
proof-of-concept numerical examples in which we solve the saddle-point problem
for weighted TV denoising as well as for MR-guided PET image reconstruction.
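To make the saddle-point approach concrete for the weighted TV denoising case, the sketch below implements a standard primal-dual (Chambolle-Pock) iteration for min_u 0.5*||u - f||^2 + sum_i w_i |(grad u)_i| with a spatially varying weight w. This is an illustrative assumption, not the authors' implementation: the solver choice, the function names, and the step-size rule are all standard textbook defaults.

```python
import numpy as np

def grad(u):
    # Forward differences with homogeneous Neumann boundary conditions.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad above.
    d = np.zeros_like(px)
    d[0, :] += px[0, :]
    d[1:-1, :] += px[1:-1, :] - px[:-2, :]
    d[-1, :] -= px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] -= py[:, -2]
    return d

def weighted_tv_denoise(f, w, n_iter=300):
    # Chambolle-Pock for min_u 0.5*||u - f||^2 + sum_i w_i * |(grad u)_i|,
    # written as a saddle-point problem in (u, p) with |p_i| <= w_i.
    tau = sigma = 1.0 / np.sqrt(8.0)   # tau * sigma * ||grad||^2 <= 1 in 2-D
    u, u_bar = f.copy(), f.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        # Dual ascent step, then projection onto {p : |p_i| <= w_i}.
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(np.sqrt(px**2 + py**2), np.maximum(w, 1e-12))
        px, py = px * (w / norm), py * (w / norm)
        # Primal descent step: closed-form prox of 0.5*||u - f||^2.
        u_old = u
        u = (u + tau * (div(px, py) + f)) / (1.0 + tau)
        u_bar = 2.0 * u - u_old
    return u
```

Note that no explicit formula for the relaxed structural TV functional is needed here: the weight w enters only through the pointwise projection of the dual variable, which mirrors the point the abstract makes about the saddle-point formulation.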
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: while the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many methods proposed to date do not leverage the
true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated structured sparsity models, which describe the
interdependency between the nonzero components of a signal, allowing one to
increase the interpretability of the results and leading to better recovery
performance. To better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
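As a concrete illustration of the convex relaxations the abstract refers to, the sketch below implements the proximal operator of the non-overlapping group l2 penalty lam * sum_g ||x_g||_2, the building block behind group-lasso-type relaxations of the group sparse model. The function name and the example grouping are hypothetical, chosen only for this demonstration.

```python
import numpy as np

def prox_group_l2(x, groups, lam):
    # Prox of lam * sum_g ||x_g||_2 over non-overlapping index groups:
    # each group is shrunk toward zero, and any group whose norm falls
    # below lam is zeroed out entirely (selection at the group level).
    out = x.copy()
    for g in groups:
        ng = np.linalg.norm(x[g])
        out[g] = 0.0 if ng <= lam else (1.0 - lam / ng) * x[g]
    return out

# Example: the small-norm second group is removed as a whole,
# which is exactly the group-level behavior plain l1 shrinkage lacks.
x = np.array([3.0, -1.0, 2.0, 0.2, 0.1, -0.1])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(prox_group_l2(x, groups, lam=0.5))
```

This group-level thresholding is what distinguishes the convex proxy from componentwise soft-thresholding: correlated nonzero components enter or leave the support together, reflecting the interdependency encoded by the discrete model.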