Integration of Fenchel subdifferentials revisited
We obtain a simple integration formula for Fenchel subdifferentials on Euclidean spaces and analyze some of its consequences. For functions defined on locally convex spaces, we present a similar result in terms of ε-subdifferentials.
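For background (this is the classical result that formulas of this type refine, not the paper's new formula): Rockafellar's integration theorem states that if f, g : ℝⁿ → ℝ ∪ {+∞} are proper, lower semicontinuous and convex with ∂f(x) = ∂g(x) for every x, then f = g + c for some constant c ∈ ℝ. "Integration" of the subdifferential refers to recovering f, up to such a constant, from the mapping ∂f.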
Convex Geometry and its Applications (hybrid meeting)
The geometry of convex domains in Euclidean space plays a central role in several branches of mathematics: functional and harmonic analysis, the theory of PDE, linear programming and, increasingly, the study of algorithms in computer science. The purpose of this meeting was to bring together researchers from the analytic, geometric and probabilistic groups who have contributed to these developments.
Descent modulus and applications
The norm of the gradient ∇f(x) measures the maximum descent of a real-valued smooth function f at x. For (nonsmooth) convex functions, this is expressed by the distance dist(0, ∂f(x)) of the subdifferential to the origin, while for general real-valued functions defined on metric spaces it is expressed by the metric slope |∇f|(x). In this work we propose an axiomatic definition of the descent modulus T[f](x) of a real-valued function f, defined on a general (not necessarily metric) space, at every point x. The definition encompasses all of the above instances as well as average descents for functions defined on probability spaces. We show that the functions in a large class are completely determined by their descent modulus and the corresponding critical values. This result is already surprising in the smooth case: one-dimensional information (the norm of the gradient) turns out to be almost as powerful as knowledge of the full gradient mapping. In the nonsmooth case, the key element for this determination result is the breaking of symmetry induced by a downhill orientation, in the spirit of the definition of the metric slope. The particular case of functions defined on finite spaces is studied in the last section; in this case, we obtain an explicit classification of descent operators that are, in some sense, typical.
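In the notation above, the three classical instances that the descent modulus unifies are (a restatement of the cases named in the abstract, in standard notation):

    T[f](x) = ‖∇f(x)‖                 (f smooth on ℝⁿ),
    T[f](x) = dist(0, ∂f(x))          (f convex, possibly nonsmooth),
    T[f](x) = |∇f|(x) = limsup_{y→x} max{f(x) − f(y), 0} / d(x, y)   (f defined on a metric space (X, d)).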
Operator Splitting Methods for Convex and Nonconvex Optimization
This dissertation focuses on a family of optimization methods called operator splitting methods. They solve complicated problems by decomposing the problem structure into simpler pieces and making progress on each piece separately. Over the past two decades there has been a resurgence of interest in these methods as the demand for solving structured large-scale problems has grown. One of the major challenges for splitting methods is their sensitivity to ill-conditioning, which often prevents them from achieving high accuracy. Furthermore, their classical analyses are restricted to the nice setting in which solutions exist and everything is convex; much less is known when either of these assumptions breaks down. This work aims to address the issues above. Specifically, we propose a novel acceleration technique called inexact preconditioning, which exploits second-order information at relatively low computational cost. We also show that certain splitting methods still work on problems without solutions, in the sense that their iterates provide information on what goes wrong and how to fix it. Finally, for nonconvex problems with saddle points, we show that under certain assumptions splitting methods almost surely converge only to local minima.
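To illustrate the splitting idea in the first sentence, here is a minimal sketch of one well-known splitting method (proximal gradient, also called forward-backward splitting) applied to the lasso problem. This is generic textbook material, not the dissertation's method; all names and parameters are illustrative.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t*||.||_1: the nonsmooth "simple piece",
        # handled in closed form.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def proximal_gradient_lasso(A, b, lam, n_iters=500):
        # Solves min_x 0.5*||A x - b||^2 + lam*||x||_1 by alternating a
        # gradient step on the smooth piece with a prox step on the l1 piece.
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - b)                         # gradient step (smooth piece)
            x = soft_threshold(x - step * grad, step * lam)  # prox step (nonsmooth piece)
        return x

Each iteration touches the two pieces of the objective separately, which is exactly the decomposition principle the dissertation studies in far greater generality.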
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program both in survey style and in full detail, and information on the social program, the venue, special meetings, and more.
Analysis of gradient descents in random energies and heat baths
This thesis concerns the mathematical analysis of random gradient descent evolutions as models for rate-independent dissipative systems under the influence of thermal effects. The basic notions of the theory of gradient descents (especially rate-independent evolutions) are reviewed in chapter 2. Chapters 3 and 4 focus on the scaling regime in which the microstructure dominates the thermal effects and comprise a rigorous justification of rate-independent processes in smooth, convex energies as scaling limits of rate-dependent gradient descents in energies that have rapidly oscillating random microstructure: chapter 3 treats the one-dimensional case with quite a broad class of random microstructures; chapter 4 treats a case in which the microstructure is modeled by a sum of "dent functions" scattered in ℝⁿ using a suitable point process. Chapters 5 and 6 focus on the opposite scaling regime: a gradient descent system (typically a rate-independent process) is placed in contact with a heat bath. The method used to "thermalize" a gradient descent is an interior-point regularization of the Moreau–Yosida incremental problem for the original gradient descent. Chapter 5 treats the heuristics and generalities; chapter 6 treats the case of 1-homogeneous dissipation (rate independence) and shows that the heat bath destroys the rate independence in a controlled and deterministic way, and that the effective dynamics are a gradient descent in the original energetic potential but with respect to a different and non-trivial effective dissipation potential. The appendices contain some auxiliary definitions and results, most of them standard in the literature, that are used in the main text.
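For readers unfamiliar with the incremental problem mentioned above, its standard minimizing-movement form is (stated here as general background in standard notation, not as the thesis's exact formulation; the thesis's interior-point regularization modifies this problem): given a time step τ > 0 and the current state x_k, the next state solves

    x_{k+1} ∈ argmin_x { E(x) + τ Ψ((x − x_k)/τ) },

where E is the energy and Ψ the dissipation potential. When Ψ is 1-homogeneous (the rate-independent case), τ Ψ((x − x_k)/τ) = Ψ(x − x_k), so the scheme does not depend on the time step, which is the hallmark of rate independence.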