Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ²-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
Mathematical Methods in Tomography
This is the seventh Oberwolfach conference on the mathematics of tomography, the first one having taken place in 1980. Tomography is the most popular of a series of medical and scientific imaging techniques that have been developed since the mid-1970s.
Mixed-integer Nonlinear Optimization: a hatchery for modern mathematics
The second MFO Oberwolfach Workshop on Mixed-Integer Nonlinear Programming (MINLP) took place from 2 to 8 June 2019. MINLP refers to one of the hardest Mathematical Programming (MP) problem classes, involving nonlinear functions as well as both continuous and integer decision variables. MP is a formal language for describing optimization problems and is traditionally part of Operations Research (OR), which is itself at the intersection of mathematics, computer science, engineering and econometrics. The scientific program covered the three announced areas (hierarchies of approximation, mixed-integer nonlinear optimal control, and dealing with uncertainties) with a variety of tutorials, talks, short research announcements, and a special "open problems" session.
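To make the problem class concrete, here is a toy MINLP solved by naive enumeration: for each feasible value of the integer variable n, the continuous subproblem in x is solved in closed form and the best pair is kept. The objective and all constants are made up for illustration; real MINLP solvers replace the enumeration with branch-and-bound.

```python
def continuous_minimizer(n):
    """argmin over x of (n*x - 3)^2 + 0.1*x^2, by setting the derivative to zero."""
    return 6.0 * n / (2.0 * n**2 + 0.2)

def objective(x, n):
    # Nonlinear in both the continuous variable x and the integer variable n.
    return (n * x - 3.0) ** 2 + 0.1 * x**2 + 0.05 * n**2

# Enumerate the integer variable n in {0, ..., 5}; solve each continuous
# subproblem exactly; keep the best (objective, n, x) triple.
f_star, n_star, x_star = min(
    (objective(continuous_minimizer(n), n), n, continuous_minimizer(n))
    for n in range(6)
)
print(n_star, round(x_star, 3))
```

Even this tiny example shows the two sources of difficulty the abstract names: the nonlinearity in the objective and the combinatorial choice over the integer variable.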
Estimating planetary heat flow from a shallow subsurface heat flow measurement
This study investigates the feasibility of estimating planetary heat flow from a shallow subsurface heat flow measurement with a Function Specification Inversion (FSI) model. Heat flow is the product of the thermal conductivity and the thermal gradient at depth; both are measured and therefore contain errors. The model estimates these quantities, along with other parameters, while not explicitly accounting for temperature-dependent thermal properties.
The heat flow is decomposed into steady state basal (planetary) and unsteady state (related to the surface temperature variation) heat flow components. Surface heat flow is typically several orders of magnitude higher than the planetary heat flow; therefore unsteady components in a shallow subsurface heat flow measurement may mask the planetary heat flow. The extent of masking positively correlates with the skin depth and amplitude of the surface heat flow, and negatively correlates with the magnitude of the planetary heat flow.
The planetary heat flow is estimated by inverting the temperature measurement and optimising the basal heat flow. The basal heat flow is most effectively optimized from instantaneous measurements, taken when the surface temperature is relatively constant. Long-period measurements, while more accurately optimized, introduce more unsteady temperature gradients, thereby increasing the ill-determinacy and instability of the problem. The model tolerates errors up to 25% in simultaneous optimization of several unknown parameters, with related errors in the optimized basal heat flow.
On Mars, the heat flow is optimized to within 10% for measurements over at least twice the skin depth and 0.5 of a Martian year, or at least five times the skin depth and 0.25 of a Martian year. On Mercury, temperature amplitudes control optimized heat flow accuracy; sensor penetration depths well below three skin depths are required. On Vesta, very low heat flows render FSI ineffective with a noise amplitude of 1 mK.
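The masking effect described above follows from standard conduction theory: a periodic surface forcing T(0, t) = T0 + dT·sin(ωt) in a half-space gives T(z, t) = T0 + dT·exp(-z/δ)·sin(ωt - z/δ) with skin depth δ = √(2κ/ω), so the unsteady heat-flux amplitude decays as k·dT·√2·exp(-z/δ)/δ. The sketch below compares that amplitude to an assumed basal flux; all parameter values are illustrative, not the study's.

```python
import math

k = 0.03        # thermal conductivity, W/(m K)   (assumed regolith-like value)
kappa = 2e-8    # thermal diffusivity, m^2/s      (assumed)
dT = 60.0       # surface temperature amplitude, K (assumed)
q_basal = 0.02  # planetary (basal) heat flow, W/m^2 (assumed)
period = 668.6 * 88775.0   # one Martian year in seconds (sols * seconds per sol)

w = 2.0 * math.pi / period
delta = math.sqrt(2.0 * kappa / w)   # annual skin depth, m

def unsteady_flux_amplitude(z):
    """Amplitude of the periodic heat-flux component at depth z."""
    return k * dT * math.sqrt(2.0) * math.exp(-z / delta) / delta

for n_skin in (1, 2, 5):
    ratio = unsteady_flux_amplitude(n_skin * delta) / q_basal
    print(f"z = {n_skin} skin depth(s): unsteady/basal flux = {ratio:.3f}")
```

With these assumed numbers the unsteady component still dominates the basal flux several skin depths down, which is the masking problem the study's inversion must overcome.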