Characterizations of Hankel multipliers
We give characterizations of radial Fourier multipliers as acting on radial
L^p-functions, 1<p<2d/(d+1), in terms of Lebesgue space norms for Fourier
localized pieces of the convolution kernel. This is a special case of
corresponding results for general Hankel multipliers. Besides L^p-L^q bounds we
also characterize weak type inequalities and intermediate inequalities
involving Lorentz spaces. Applications include results on interpolation of
multiplier spaces.
Comment: Final revised version to appear in Mathematische Annalen
Convergence of the Forward-Backward Algorithm: Beyond the Worst Case with the Help of Geometry
We provide a comprehensive study of the convergence of the forward-backward
algorithm under suitable geometric conditions leading to fast rates. We present
several new results and collect in a unified view a variety of results
scattered in the literature, often providing simplified proofs. Novel
contributions include the analysis of infinite dimensional convex minimization
problems, allowing the case where minimizers might not exist. Further, we
analyze the relation between different geometric conditions, and discuss novel
connections with a priori conditions in linear inverse problems, including
source conditions, restricted isometry properties, and partial smoothness.
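As a concrete illustration of the algorithm the abstract studies, here is a minimal sketch of the forward-backward (proximal-gradient) iteration applied to a toy ℓ1-regularized least-squares problem; the problem data, regularization weight, step size, and iteration count are illustrative choices, not the paper's setting.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal (backward) step for t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, step, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward splitting."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.0])  # noiseless sparse ground truth
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, with L the gradient's Lipschitz constant
x = forward_backward(A, b, lam=0.1, step=step)
```

With a step size below 1/L, each iteration decreases the objective, which is the worst-case O(1/k) regime; the geometric conditions in the abstract (e.g. strong convexity of the data-fit term here) are what upgrade this to fast linear rates.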
A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions
In a general Hilbert framework, we consider continuous gradient-like
dynamical systems for constrained multiobjective optimization involving
non-smooth convex objective functions. Our approach follows the line of a
previous work that considered the case of convex differentiable objective
functions. Based on the Yosida regularization of the subdifferential operators
involved in the system, we obtain the existence of strong global trajectories.
We prove a descent property for each objective function, and the convergence of
trajectories to weak Pareto minima. This approach provides a dynamical
endogenous weighting of the objective functions. Applications are given to
cooperative games, inverse problems, and numerical multiobjective optimization.
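For intuition on the Yosida regularization invoked above, a short numerical sketch with f = |·| as an illustrative nonsmooth objective: the Yosida approximation replaces the set-valued subdifferential by a single-valued Lipschitz map, computable from the proximal operator.

```python
import numpy as np

def prox_abs(v, lam):
    # Proximal map of lam * |.| (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def yosida(v, lam):
    # Yosida regularization of the subdifferential of |.|:
    # A_lam(v) = (v - prox_{lam|.|}(v)) / lam, a single-valued, (1/lam)-Lipschitz map
    return (v - prox_abs(v, lam)) / lam

xs = np.linspace(-2.0, 2.0, 9)
ys = yosida(xs, 0.5)  # for |.| this coincides with clip(v / lam, -1, 1)
```

This smoothing is what makes the dynamical system in the abstract well posed: the regularized operator is Lipschitz, so strong global trajectories exist by standard Cauchy-Lipschitz arguments.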
Splitting methods with variable metric for KL functions
We study the convergence of general abstract descent methods applied to a
lower semicontinuous nonconvex function f that satisfies the
Kurdyka-Lojasiewicz inequality in a Hilbert space. We prove that any precompact
sequence converges to a critical point of f and obtain new convergence rates
both for the values and the iterates. The analysis covers alternating versions
of the forward-backward method with variable metric and relative errors. As an
example, a nonsmooth and nonconvex version of the Levenberg-Marquardt algorithm
is detailed.
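As a toy illustration of such a descent method, plain gradient descent on a one-dimensional semialgebraic (hence KL) nonconvex function already shows the behavior the abstract establishes in general: the whole sequence of iterates converges to a single critical point. The function and parameters below are illustrative choices.

```python
def grad_f(x):
    # Gradient of the nonconvex semialgebraic (hence KL) function f(x) = x**4 - x**2
    return 4 * x ** 3 - 2 * x

# Plain gradient descent is the simplest instance of an abstract descent method:
# each step satisfies a sufficient-decrease condition, and the KL inequality
# guarantees the iterates converge to a single critical point, here x* = 1/sqrt(2).
x, step = 0.9, 0.05
for _ in range(200):
    x = x - step * grad_f(x)
```

Without the KL property, a descent method can in principle wander between critical values; the KL inequality rules this out and yields the convergence rates mentioned in the abstract.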
Model Consistency for Learning with Mirror-Stratifiable Regularizers
Low-complexity non-smooth convex regularizers are routinely used to impose
some structure (such as sparsity or low-rank) on the coefficients for linear
predictors in supervised learning. Model consistency then consists in selecting
the correct structure (for instance, the support or rank) by regularized
empirical risk minimization.
It is known that model consistency holds under appropriate non-degeneracy
conditions. However, such conditions typically fail for highly correlated
designs, and it is observed that regularization methods tend to select larger
models.
In this work, we provide the theoretical underpinning of this behavior using
the notion of mirror-stratifiable regularizers. This class of regularizers
encompasses the best-known regularizers in the literature, including the ℓ1 and
trace norms. It brings into play a pair of primal-dual models, which in turn
allows one to locate the structure of the solution using a specific dual
certificate.
We also show how this analysis is applicable to optimal solutions of the
learning problem, and also to the iterates computed by a certain class of
stochastic proximal-gradient algorithms.
Comment: 14 pages, 4 figures
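A hedged sketch of the kind of stochastic proximal-gradient iteration referred to above, on a toy ℓ1-regularized problem (the data, step size, and thresholds are illustrative, not the paper's algorithm): the support of the iterate is the "model", and one can inspect whether it matches the true one.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1; its output has a well-defined support (the "model")
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n, d = 200, 10
x_true = np.zeros(d)
x_true[:3] = [2.0, -1.5, 1.0]           # true model: support {0, 1, 2}
A = rng.standard_normal((n, d))
b = A @ x_true                          # noiseless observations

lam, x = 0.05, np.zeros(d)
for k in range(2000):
    i = rng.integers(n)                 # sample one observation
    g = (A[i] @ x - b[i]) * A[i]        # stochastic gradient of the data-fit term
    x = soft_threshold(x - 0.02 * g, 0.02 * lam)  # proximal-gradient step

support = set(np.flatnonzero(np.abs(x) > 1e-3))
```

In well-conditioned settings like this toy example the recovered support contains the true one; the abstract's point is that under correlated designs the identified model tends to be strictly larger, a fact the mirror-stratifiable structure explains via the dual certificate.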