
    Discontinuous piecewise differentiable optimization I: theory

    A theoretical framework and a practical algorithm are presented to solve discontinuous piecewise linear optimization problems. A penalty approach allows one to consider such problems subject to a wide range of constraints involving piecewise linear functions. Although the theory is expounded in detail in the special case of discontinuous piecewise linear functions, it extends straightforwardly, using standard nonlinear programming techniques, to the nonlinear (discontinuous piecewise differentiable) situation to yield a first-order algorithm. This work is presented in two parts; we introduce the theory in this first paper. The descent algorithm elaborated here uses active-set and projected-gradient approaches. It is a generalization of the ideas used by Conn to deal with nonsmoothness in the l1 exact penalty function, and it is based on the notion of decomposing a function into a smooth and a nonsmooth part. In an accompanying paper, we shall tackle constraints via a penalty approach, discuss the degenerate situation and the implementation of the algorithm, and present numerical results.
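
    The abstract gives no code; the following is a minimal illustrative sketch (not the authors' algorithm) of the two ingredients it names: splitting an l1-type exact penalty into a smooth part (terms whose sign is locally constant) and a nonsmooth "active" part, and taking a first-order step projected so it stays, to first order, on the active kinks. The functions num_grad and l1_penalty_descent_step, the tolerance tol, and the fixed step length are assumptions made for the example.

        import numpy as np

        def num_grad(fun, x, h=1e-6):
            # Central-difference gradient; assumes fun is smooth near x.
            g = np.zeros_like(x, dtype=float)
            for i in range(x.size):
                e = np.zeros_like(x, dtype=float)
                e[i] = h
                g[i] = (fun(x + e) - fun(x - e)) / (2.0 * h)
            return g

        def l1_penalty_descent_step(f, cons, x, mu, step=1e-2, tol=1e-6):
            # One first-order step on phi(x) = f(x) + mu * sum_i |c_i(x)|.
            # Terms with |c_i(x)| > tol form the smooth part (sign(c_i) is
            # locally constant); terms with |c_i(x)| <= tol form the nonsmooth
            # active set, whose gradients are projected out of the step.
            g = num_grad(f, x)
            active = []
            for c in cons:
                ci = c(x)
                if abs(ci) > tol:
                    g = g + mu * np.sign(ci) * num_grad(c, x)   # smooth contribution
                else:
                    active.append(num_grad(c, x))               # active (nonsmooth) term
            d = -g
            for a in active:                                    # keep active kinks fixed
                na = float(np.dot(a, a))
                if na > 0.0:
                    d = d - (np.dot(d, a) / na) * a
            return x + step * d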

    Constrained Global Optimization by Smoothing

    This paper proposes a novel technique called "successive stochastic smoothing" that optimizes nonsmooth and discontinuous functions while considering various constraints. Our methodology enables local and global optimization, making it a powerful tool for many applications. First, a constrained problem is reduced to an unconstrained one by the exact nonsmooth penalty function method, which does not assume the existence of the objective function outside the feasible area and does not require the selection of the penalty coefficient. This reduction is exact in the case of minimization of a lower semicontinuous function under convex constraints. Then the resulting objective function is sequentially smoothed by the kernel method, starting from relatively strong smoothing and with a gradually vanishing degree of smoothing. Finite-difference stochastic gradient descent with trajectory averaging minimizes each smoothed function locally. The stochastic gradients of the smoothed functions are estimated by finite differences over stochastic directions sampled from the kernel. We investigate the convergence rate of such a stochastic finite-difference method on convex optimization problems. The "successive smoothing" algorithm uses the results of previous optimization runs to select the starting point for optimizing a consecutive, less smoothed function. Smoothing provides the "successive smoothing" method with some global properties. We illustrate the performance of the "successive stochastic smoothing" method on constrained test optimization problems from the literature.
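
    The abstract describes the method precisely enough to sketch its main loop. The Python sketch below is an illustration under stated assumptions, not the authors' implementation: fd_smoothed_grad is the standard two-point finite-difference estimator of the gradient of a Gaussian-kernel-smoothed objective, and successive_stochastic_smoothing warm-starts each less-smoothed level from the trajectory average of the previous one. The kernel choice, the schedule sigmas, the step size lr, and the iteration counts are all assumptions.

        import numpy as np

        def fd_smoothed_grad(phi, x, sigma, n_dirs=16, rng=None):
            # Two-point finite-difference estimator of the gradient of the smoothed
            # objective E_u[phi(x + sigma*u)], u ~ N(0, I): average over sampled
            # directions of u * (phi(x + sigma*u) - phi(x - sigma*u)) / (2*sigma).
            rng = rng or np.random.default_rng()
            g = np.zeros_like(x, dtype=float)
            for _ in range(n_dirs):
                u = rng.standard_normal(x.size)
                g += u * (phi(x + sigma * u) - phi(x - sigma * u)) / (2.0 * sigma)
            return g / n_dirs

        def successive_stochastic_smoothing(phi, x0, sigmas=(1.0, 0.3, 0.1, 0.03),
                                            iters=200, lr=0.05, seed=0):
            # phi is assumed to already contain the exact nonsmooth penalty,
            # e.g. phi(x) = f(x) + rho * sum_j max(0, g_j(x)).
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            for sigma in sigmas:                 # gradually vanishing smoothing
                avg = x.copy()
                for t in range(1, iters + 1):
                    g = fd_smoothed_grad(phi, x, sigma, rng=rng)
                    x = x - lr * g               # stochastic gradient descent step
                    avg += (x - avg) / t         # trajectory (running-average) iterate
                x = avg                          # warm start for the next, sharper level
            return x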

    Gradient-only approaches to avoid spurious local minima in unconstrained optimization

    We reflect on some theoretical aspects of gradient-only optimization for the unconstrained optimization of objective functions containing non-physical step or jump discontinuities. This kind of discontinuity arises when the optimization problem is based on the solutions of systems of partial differential equations, in combination with variable discretization techniques (e.g. remeshing in spatial domains, and/or variable time stepping in temporal domains). These discontinuities, which may cause local minima, are artifacts of the numerical strategies used and should not influence the solution to the optimization problem. Although the discontinuities imply that the gradient field is not defined everywhere, the gradient field associated with the computational scheme can nevertheless be computed everywhere; this field is denoted the associated gradient field. We demonstrate that it is possible to overcome attraction to the local minima if only associated gradient information is used. Various gradient-only algorithmic options are discussed. A salient feature of our approach is that variable discretization strategies, so important in the numerical solution of partial differential equations, can be combined with efficient local optimization algorithms.
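
    As a sketch of the gradient-only idea (not the specific algorithms discussed in the paper), the line search below locates a sign change of the directional derivative computed from the associated gradient field and never consults function values, so a step discontinuity that creates a spurious minimum in the values cannot trap it. The bracketing limits, tolerances, and the plain steepest-descent outer loop are assumptions for illustration.

        import numpy as np

        def gradient_only_step(dphi, a=0.0, b=1.0, tol=1e-6, max_expand=30):
            # Gradient-only line search: bracket and bisect on the SIGN of the
            # directional derivative dphi(t), using no function values at all.
            k = 0
            while dphi(b) < 0.0 and k < max_expand:   # expand until the derivative turns non-negative
                a, b = b, 2.0 * b
                k += 1
            while b - a > tol:                        # bisection on the sign change
                m = 0.5 * (a + b)
                if dphi(m) < 0.0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)

        def gradient_only_descent(grad, x0, n_iters=50):
            # Steepest descent driven purely by the associated gradient field grad(x).
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iters):
                g = grad(x)
                if np.linalg.norm(g) < 1e-10:
                    break
                d = -g
                alpha = gradient_only_step(lambda t: float(np.dot(grad(x + t * d), d)))
                x = x + alpha * d
            return x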

    Mixed Integer Linear Programming Formulation Techniques

    A wide range of problems can be modeled as Mixed Integer Linear Programming (MIP) problems using standard formulation techniques. However, in some cases the resulting MIP can be either too weak or too large to be effectively solved by state-of-the-art solvers. In this survey we review advanced MIP formulation techniques that result in stronger and/or smaller formulations for a wide class of problems.
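
    A standard textbook illustration of the strength-versus-size trade-off this survey addresses (not an example taken from the survey itself) is facility location with opening variables $y_j \in \{0,1\}$ and assignments $x_{ij} \in [0,1]$, $\sum_j x_{ij} = 1$ for each of $n$ clients. The linking constraints can be written two ways:

        \sum_{i=1}^{n} x_{ij} \le n\, y_j \quad \text{(aggregated: fewer rows, weaker)}
        \qquad\text{vs.}\qquad
        x_{ij} \le y_j \;\; \forall i \quad \text{(disaggregated: more rows, stronger)}

    Both describe the same integer-feasible set, but the disaggregated LP relaxation is strictly tighter: the fractional point with $x_{1j} = 1$ and $y_j = 1/n$ satisfies the aggregated constraint yet violates the disaggregated one.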