
Nonsmooth Optimization and Descent Methods

Abstract

Nonsmooth optimization is a field of research actively pursued at IIASA. In this paper, we show "what" it is, something that cannot easily be guessed from its definition, which is a negative statement. We also show "why" it exists at IIASA, by exhibiting a broad field of applications ranging from the theory of nonlinear programming to the computation of economic equilibria, including the general concept of decentralization. Finally, we show "how" it can be done, outlining the state of the art and developing a new algorithm that synthesizes the concepts commonly used in both differentiable and nondifferentiable optimization. Our approach is as non-technical as possible, and we hope that an unacquainted reader will be able to follow a substantial part of the development. In Section 1, we present the basic concepts underlying nonsmooth optimization and show what it consists of; we also outline the classical methods, which have existed since 1959, for optimizing nondifferentiable problems. In Section 2, we list possible applications, including acceleration of gradient-type methods; general decomposition by prices, by resources, and by Benders decomposition; minimax problems; and the computation of economic equilibria. In Section 3, we present the most modern methods for nonsmooth optimization, introduced around 1975, which were the first general descent methods. In Section 4, we develop a new descent method based on the concepts of variable metric, cutting-plane approximation, and feasible directions; we study its motivation, its convergence, and its flexibility.
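To illustrate the kind of problem the paper addresses, the following minimal sketch (not the paper's algorithm) applies the classical subgradient method with diminishing steps to a simple nonsmooth convex function, a pointwise maximum of two absolute-value terms, whose kink makes ordinary gradient descent inapplicable. The objective, function names, and step-size rule here are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the classical subgradient method on a nonsmooth
# convex function. Here f(x) = max(|x - 2|, |x + 1|), which is not
# differentiable at its kinks; its minimizer is x = 0.5 with f = 1.5.
# All names and the objective are illustrative, not from the paper.

def f_and_subgrad(x):
    """Return f(x) and one subgradient of f at x."""
    a = abs(x - 2.0)
    b = abs(x + 1.0)
    if a >= b:
        g = 1.0 if x >= 2.0 else -1.0   # subgradient of |x - 2|
        return a, g
    g = 1.0 if x >= -1.0 else -1.0      # subgradient of |x + 1|
    return b, g

def subgradient_method(x0, iters=2000):
    """Subgradient steps with the diminishing step size 1/k.

    Since a subgradient direction need not be a descent direction,
    we track the best point seen rather than the last iterate.
    """
    x = x0
    best_x, best_f = x0, f_and_subgrad(x0)[0]
    for k in range(1, iters + 1):
        fx, g = f_and_subgrad(x)
        if fx < best_f:
            best_f, best_x = fx, x
        x -= g / k                      # step size 1/k -> 0, sum diverges
    return best_x, best_f

x_star, f_star = subgradient_method(5.0)
```

Note that the iterates merely approach the minimizer while oscillating; this slow, non-monotone behavior is exactly what motivated the descent methods surveyed in Sections 3 and 4.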
