When do Trajectories have Bounded Sensitivity to Cumulative Perturbations?
We investigate sensitivity to cumulative perturbations for several classes of
dynamical systems of practical interest. A system is said to have bounded
sensitivity to cumulative perturbations (bounded sensitivity, for short) if an
additive disturbance leads to a change in the state trajectory that is bounded
by a constant multiple of the size of the cumulative disturbance. As our main
result, we show that there exist dynamical systems in the form of the
(negative) gradient field of a convex function that have unbounded sensitivity. We show
that the result holds even when the convex potential function is piecewise
linear. This resolves a question raised in [1], wherein it was shown that the
(negative) (sub)gradient field of a piecewise linear and convex function has
bounded sensitivity if the number of linear pieces is finite. Our results
establish that the finiteness assumption is indeed necessary.
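For concreteness, here is one plausible way to formalize the bounded sensitivity property; the notation (f, u, C) is ours, and the paper's precise definition may differ:

```latex
% Nominal vs. perturbed trajectories (notation assumed, not from the abstract):
%   \dot{x}(t)   = f(x(t)),            x(0)   = x_0,
%   \dot{x}_u(t) = f(x_u(t)) + u(t),   x_u(0) = x_0.
% Bounded sensitivity to cumulative perturbations: there exists C > 0 with
\| x_u(t) - x(t) \| \;\le\; C \, \sup_{0 \le s \le t}
  \Big\| \int_0^s u(\tau) \, d\tau \Big\|
\qquad \text{for all } t \ge 0 .
```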
Among our other results, we provide a necessary and sufficient condition for
a linear dynamical system to have bounded sensitivity to cumulative
perturbations. We also establish that the bounded sensitivity property is
preserved when a system undergoes certain transformations: convolution, time
discretization, and spreading (a transformation that captures approximate
solutions of a system).
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of
such priors include sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rankness (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ²-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
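To make the setup concrete, the penalized form of the recovery problem surveyed here can be sketched as follows; Φ denotes the measurement operator, y the observations, λ > 0 a regularization parameter, and J one of the convex low-complexity priors listed above (this is one common formulation; the chapter may also treat other variants):

```latex
% Penalized low-complexity regularization (sketch; symbols as described above):
\min_{x \in \mathbb{R}^n} \; \frac{1}{2} \, \| y - \Phi x \|_2^2
  \;+\; \lambda \, J(x)
% with, e.g., J = \|\cdot\|_1 (sparsity), J = \|\cdot\|_{1,2} (group sparsity),
% or J = \|\cdot\|_* (nuclear norm, for low rank).
```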
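And here is a minimal sketch of the forward-backward proximal splitting scheme from point (iii), instantiated for the ℓ¹ prior, whose proximal operator is component-wise soft-thresholding; the function names, step size, and iteration budget below are illustrative choices, not taken from the chapter:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t * ||.||_1 (component-wise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(Phi, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||y - Phi x||^2 + lam*||x||_1.

    Forward step: explicit gradient step on the smooth data-fidelity term.
    Backward step: proximal map of the nonsmooth regularizer.
    """
    # Step size must satisfy gamma < 2 / L, where L = ||Phi||^2 is the
    # Lipschitz constant of the gradient of the smooth part.
    L = np.linalg.norm(Phi, 2) ** 2
    gamma = 1.0 / L
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)   # gradient of 0.5*||y - Phi x||^2
        x = soft_threshold(x - gamma * grad, gamma * lam)
    return x

# Usage: recover a sparse vector from random underdetermined measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = Phi @ x_true
x_hat = forward_backward(Phi, y, lam=0.02)
```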