Proximal level bundle methods for convex nondifferentiable optimization, saddle-point problems and variational inequalities
Abstract available in the attached files.
Optimal Convergence Rates for the Proximal Bundle Method
We study convergence rates of the classic proximal bundle method for a variety of nonsmooth convex optimization problems. We show that, without any modification, this algorithm adapts to converge faster in the presence of smoothness or a Hölder growth condition. Our analysis reveals that with a constant stepsize, the bundle method is adaptive, yet it exhibits suboptimal convergence rates. We overcome this shortcoming by proposing nonconstant stepsize schemes with optimal rates. These schemes use function information, such as growth constants, which might be prohibitive in practice. We provide a parallelizable variant of the bundle method that can be applied without prior knowledge of function parameters while maintaining near-optimal rates. The practical impact of this scheme is limited, since we incur a (parallelizable) log factor in the complexity. These results improve on the scarce existing convergence rates and provide a unified analysis approach across problem settings and algorithmic details. Numerical experiments support our findings.
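The proximal bundle step that this abstract analyzes can be illustrated with a minimal one-dimensional sketch: build a cutting-plane model from (sub)gradients, minimize it plus a proximal term around the current center, and take a serious step only when the actual decrease matches a fraction of the predicted one. Everything below is an illustrative assumption, not the paper's implementation: the function name `prox_bundle_1d`, the constant prox stepsize `t`, the serious-step threshold `m`, and the stopping test.

```python
import numpy as np

def prox_bundle_1d(f, fprime, x0, t=1.0, m=0.5, max_iter=50, tol=1e-8):
    """Sketch of a classic proximal bundle method in one dimension.
    Cuts are (intercept, slope) pairs defining the piecewise-linear model
        model(x) = max_i (a_i + b_i * x) <= f(x)   for convex f."""
    center = float(x0)
    g0 = fprime(center)
    cuts = [(f(center) - g0 * center, g0)]
    for _ in range(max_iter):
        a = np.array([c[0] for c in cuts])
        b = np.array([c[1] for c in cuts])
        # Candidate minimizers of model(x) + (x - center)**2 / (2 t):
        # stationary points of each quadratic piece, plus kinks of the model.
        cand = list(center - t * b)
        for i in range(len(b)):
            for j in range(i + 1, len(b)):
                if b[i] != b[j]:
                    cand.append((a[j] - a[i]) / (b[i] - b[j]))
        phi = lambda x: np.max(a + b * x) + (x - center) ** 2 / (2 * t)
        x_new = min(cand, key=phi)
        model_val = np.max(a + b * x_new)
        predicted = f(center) - model_val - (x_new - center) ** 2 / (2 * t)
        if predicted < tol:                          # model nearly matches f: stop
            break
        if f(center) - f(x_new) >= m * predicted:    # serious step: move the center
            center = x_new
        # either way, refine the model with a new cut at the trial point
        g = fprime(x_new)
        cuts.append((f(x_new) - g * x_new, g))
    return center

# usage: minimize the nonsmooth f(x) = |x - 1| starting from x0 = 5
x_min = prox_bundle_1d(lambda x: abs(x - 1), lambda x: np.sign(x - 1), 5.0)
# x_min is 1.0, the minimizer
```

Solving the prox subproblem by enumerating stationary points and kinks is exact only in one dimension; in general the subproblem is a small quadratic program, which is where the stepsize schedule studied in the abstract enters.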
The Barzilai and Borwein gradient method with nonmonotone line search for nonsmooth convex optimization problems
The Barzilai and Borwein gradient algorithm has received a great deal of attention in recent decades, since it is simple and effective for smooth optimization problems. Can it be extended to solve nonsmooth problems? In this paper, we answer this question positively. The Barzilai and Borwein gradient algorithm, combined with a nonmonotone line search technique, is proposed for nonsmooth convex minimization. The global convergence of the given algorithm is established under suitable conditions. Numerical results show that this method is efficient.
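The combination described above can be sketched as follows, assuming a subgradient oracle in place of the gradient. This is a minimal NumPy illustration, not the paper's exact scheme: the BB1 stepsize choice, the Grippo-style window maximum, and the parameters `memory`, `delta`, and `rho` are all assumptions.

```python
import numpy as np

def bb_nonmonotone(f, subgrad, x0, max_iter=200, memory=5, delta=1e-4, rho=0.5):
    """Sketch of a Barzilai-Borwein subgradient method with a
    nonmonotone (windowed-maximum) line search for convex minimization."""
    x = np.asarray(x0, dtype=float)
    g = subgrad(x)
    hist = [f(x)]        # recent function values for the nonmonotone test
    alpha = 1.0          # initial stepsize
    for _ in range(max_iter):
        t = alpha
        fref = max(hist[-memory:])   # reference value: max over recent iterates
        # backtrack until the nonmonotone sufficient-decrease test holds
        while f(x - t * g) > fref - delta * t * np.dot(g, g):
            t *= rho
            if t < 1e-12:
                break
        x_new = x - t * g
        g_new = subgrad(x_new)
        s, y = x_new - x, g_new - g
        sy = np.dot(s, y)
        # BB1 stepsize; fall back to 1.0 when the curvature estimate degenerates
        alpha = np.dot(s, s) / sy if sy > 1e-12 else 1.0
        x, g = x_new, g_new
        hist.append(f(x))
    return x

# usage: minimize f(x) = |x1| + x2**2, nonsmooth along x1 = 0
f = lambda x: abs(x[0]) + x[1] ** 2
sg = lambda x: np.array([np.sign(x[0]), 2 * x[1]])
x_star = bb_nonmonotone(f, sg, np.array([3.0, -2.0]))
```

The nonmonotone test (comparing against the maximum of the last few function values rather than the latest one) is what lets the nonsmooth iterates accept the occasionally aggressive BB stepsize without stalling.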
A unified analysis of a class of proximal bundle methods for solving hybrid convex composite optimization problems
This paper presents a proximal bundle (PB) framework based on a generic bundle update scheme for solving the hybrid convex composite optimization (HCCO) problem and establishes a common iteration-complexity bound for any variant belonging to it. As a consequence, iteration-complexity bounds for three PB variants based on different bundle update schemes are obtained in the HCCO context for the first time and in a unified manner. While two of the PB variants are universal (i.e., their implementations do not require parameters associated with the HCCO instance), the other, newly proposed one (as far as the authors are aware) is not, but has the advantage that it generates simple, namely one-cut, bundle models. The paper also presents a universal adaptive PB variant (which is not necessarily an instance of the framework) based on one-cut models and shows that its iteration-complexity is the same as that of the two aforementioned universal PB variants.
Nondifferentiable Optimization: Motivations and Applications
IIASA has been involved in research on nondifferentiable optimization since 1976. The Institute's research in this field has been very productive, leading to many important theoretical, algorithmic and applied results. Nondifferentiable optimization has now become a recognized and rapidly developing branch of mathematical programming. To continue this tradition and to review developments in this field IIASA held this Workshop in Sopron (Hungary) in September 1984.
This volume contains selected papers presented at the Workshop. It is divided into four sections dealing with the following topics: (I) Concepts in Nonsmooth Analysis; (II) Multicriteria Optimization and Control Theory; (III) Algorithms and Optimization Methods; (IV) Stochastic Programming and Applications