Disparity and Optical Flow Partitioning Using Extended Potts Priors
This paper addresses the problems of disparity and optical flow partitioning
based on the brightness invariance assumption. We investigate new variational
approaches to these problems with Potts priors and possibly box constraints.
For the optical flow partitioning, our model includes vector-valued data and an
adapted Potts regularizer. Using the notion of asymptotically level stable
functions, we prove the existence of global minimizers of our functionals. We
propose a modified alternating direction method of multipliers (ADMM). This iterative
algorithm requires the computation of global minimizers of classical univariate
Potts problems, which can be done efficiently by dynamic programming. We prove
that the algorithm converges both for the constrained and unconstrained
problems. Numerical examples demonstrate the very good performance of our
partitioning method.
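The univariate Potts subproblems at the core of the ADMM iteration admit an exact dynamic-programming solution. Below is a minimal Python sketch of that classical O(n^2) routine, assuming a scalar signal, a squared-error data term, and a constant jump penalty gamma; the function name and interface are illustrative, not taken from the paper.

import numpy as np

def potts_1d(y, gamma):
    # Exact minimizer of gamma * (#jumps) + sum_i (u_i - y_i)^2 via the
    # classical O(n^2) dynamic program (illustrative helper, not the
    # paper's code).
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Prefix sums of y and y^2 give the squared error of a constant fit
    # on any window in O(1).
    s = np.concatenate(([0.0], np.cumsum(y)))
    q = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def seg_err(l, r):  # 0-based inclusive window y[l..r]
        m = r - l + 1
        return q[r + 1] - q[l] - (s[r + 1] - s[l]) ** 2 / m

    B = np.empty(n + 1)                 # B[r]: optimal value on first r samples
    B[0] = -gamma                       # first segment pays no jump penalty
    last = np.zeros(n + 1, dtype=int)   # start (1-based) of the last segment
    for r in range(1, n + 1):
        best, arg = np.inf, 1
        for l in range(1, r + 1):
            v = B[l - 1] + gamma + seg_err(l - 1, r - 1)
            if v < best:
                best, arg = v, l
        B[r], last[r] = best, arg

    # Backtrack and fill each recovered segment with its mean.
    u, r = np.empty(n), n
    while r > 0:
        l = last[r]
        u[l - 1:r] = (s[r] - s[l - 1]) / (r - l + 1)
        r = l - 1
    return u

For instance, potts_1d(np.array([1.0, 1.1, 0.9, 5.0, 5.2]), gamma=1.0) recovers a two-segment piecewise-constant signal.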
Linear convergence of accelerated conditional gradient algorithms in spaces of measures
A class of generalized conditional gradient algorithms for the solution of
optimization problems in spaces of Radon measures is presented. The method
iteratively inserts additional Dirac-delta functions and optimizes the
corresponding coefficients. Under general assumptions, a sub-linear O(1/k)
rate in the objective functional is obtained, which is sharp
in most cases. To improve efficiency, one can fully resolve the
finite-dimensional subproblems occurring in each iteration of the method. We
provide an analysis for the resulting procedure: under a structural assumption
on the optimal solution, a linear convergence rate is
obtained locally.
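As a rough illustration of this insert-then-fully-optimize structure, here is a hedged Python sketch for a discretized, nonnegative setting with a least-squares data term; the candidate grid xs, the nonnegativity restriction, and all names are assumptions made for the example, not the paper's exact setting.

import numpy as np

def gcg_sparse_measures(xs, k, y, alpha, outer=50, inner=200):
    # Sketch: minimize 0.5 * ||sum_j c_j k(x_j) - y||^2 + alpha * sum_j c_j
    # over nonnegative sparse measures supported on the candidate points xs.
    # k(x) returns the forward operator applied to a Dirac at x (a vector).
    support, A, c = [], np.zeros((len(y), 0)), np.zeros(0)
    for _ in range(outer):
        resid = A @ c - y
        # Insertion step: place a new Dirac where the negative gradient of
        # the data term, -<k(x), resid>, is largest.
        scores = np.array([-(k(x) @ resid) for x in xs])
        j = int(np.argmax(scores))
        if scores[j] <= alpha:              # first-order optimality reached
            break
        support.append(xs[j])
        A = np.column_stack([A, k(xs[j])])
        c = np.append(c, 0.0)
        # Fully resolve the finite-dimensional subproblem on the current
        # support (projected gradient descent with step 1/L).
        L = np.linalg.norm(A, 2) ** 2 + 1e-12
        for _ in range(inner):
            g = A.T @ (A @ c - y) + alpha
            c = np.maximum(c - g / L, 0.0)
        keep = c > 0                        # prune atoms driven to zero
        support = [x for x, kept in zip(support, keep) if kept]
        A, c = A[:, keep], c[keep]
    return support, c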
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass,
as popular examples, sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum, including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
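As a concrete instance of that scheme, the sketch below applies forward-backward splitting (here, plain ISTA) to the ℓ1-regularized least-squares problem, the prototypical sparsity prior of the review; the names and the choice of prior are illustrative.

import numpy as np

def forward_backward_l1(A, y, lam, steps=500):
    # Sketch: min_x 0.5 * ||A x - y||^2 + lam * ||x||_1 by forward-backward
    # splitting; the prox of lam * ||.||_1 is soft-thresholding.
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step 1/L with L = ||A||_2^2
    for _ in range(steps):
        z = x - t * (A.T @ (A @ x - y))        # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # backward (prox) step
    return x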
New Algebraic Formulation of Density Functional Calculation
This article addresses a fundamental problem faced by the ab initio
community: the lack of an effective formalism for the rapid exploration and
exchange of new methods. To rectify this, we introduce a novel, basis-set
independent, matrix-based formulation of generalized density functional
theories which reduces the development, implementation, and dissemination of
new ab initio techniques to the derivation and transcription of a few lines of
algebra. This new framework enables us to concisely demystify the inner
workings of fully functional, highly efficient modern ab initio codes and to
give complete instructions for the construction of such codes for calculations
employing arbitrary basis sets. Within this framework, we also discuss in full
detail a variety of leading-edge ab initio techniques, minimization algorithms,
and highly efficient computational kernels for use with scalar as well as
shared and distributed-memory supercomputer architectures.
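To give a flavor of such a matrix-based formulation (a toy illustration under strong simplifying assumptions, not the paper's actual formalism), the sketch below minimizes the model energy tr(C^T H C) over orthonormal orbital coefficients by steepest descent with Lowdin re-orthonormalization, for a suitably small step size; H is a stand-in for a fixed Hamiltonian matrix in some basis, and all names are hypothetical.

import numpy as np

def minimize_energy(H, n_orbitals, steps=200, lr=0.1):
    # Toy model: minimize E(C) = tr(C^T H C) subject to C^T C = I by
    # steepest descent, restoring orthonormality each step with a Lowdin
    # transform C <- C (C^T C)^(-1/2). H stands in for a fixed Hamiltonian.
    rng = np.random.default_rng(0)
    C = rng.standard_normal((H.shape[0], n_orbitals))
    for _ in range(steps):
        C = C - lr * (H @ C)    # gradient is 2 H C; the 2 is absorbed in lr
        w, V = np.linalg.eigh(C.T @ C)
        C = C @ (V / np.sqrt(w)) @ V.T
    return C, float(np.trace(C.T @ H @ C))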