
    A survey on C^{1,1} functions: theory, numerical methods and applications

    In this paper we survey some notions of generalized derivative for C^{1,1} functions. Furthermore, optimality conditions and numerical methods for nonlinear minimization problems involving C^{1,1} data are studied.
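    For reference, the class C^{1,1} that this survey studies consists of continuously differentiable functions whose gradient is locally Lipschitz, i.e. $f \in C^{1,1}(\mathbb{R}^n)$ when for every bounded set there exists $L > 0$ with
    \[
    \|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| .
    \]
    Such functions need not be twice differentiable, which is why generalized second-order derivatives are needed for optimality conditions.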

    Nondifferentiable Optimization: Motivations and Applications

    IIASA has been involved in research on nondifferentiable optimization since 1976. The Institute's research in this field has been very productive, leading to many important theoretical, algorithmic and applied results. Nondifferentiable optimization has now become a recognized and rapidly developing branch of mathematical programming. To continue this tradition and to review developments in this field, IIASA held this Workshop in Sopron (Hungary) in September 1984. This volume contains selected papers presented at the Workshop. It is divided into four sections dealing with the following topics: (I) Concepts in Nonsmooth Analysis; (II) Multicriteria Optimization and Control Theory; (III) Algorithms and Optimization Methods; (IV) Stochastic Programming and Applications.

    Directional subdifferential of the value function

    The directional subdifferential of the value function gives an estimate of how much the optimal value changes under a perturbation in a certain direction. In this paper we derive upper estimates for the directional limiting and singular subdifferentials of the value function for a very general parametric optimization problem. We obtain a characterization of the directional Lipschitzness of a locally lower semicontinuous function in terms of the directional subdifferentials. Based on this characterization and the derived upper estimate for the directional singular subdifferential, we obtain a sufficient condition for the directional Lipschitzness of the value function. Finally, we specialize these results to various cases: when all functions involved are smooth, when the perturbation is additive, when the constraint is independent of the parameter, or when the constraints are equalities and inequalities. Our results extend the corresponding results on the sensitivity of the value function to allow directional perturbations. Even in the case of full perturbations, our results recover or extend some existing results, including Danskin's theorem.
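    For orientation, the Danskin-type result mentioned above can be sketched in a simple smooth setting (this is a standard statement, not taken from the paper itself): for the value function of a parametric problem over a fixed compact feasible set $C$, with $f$ continuously differentiable in the parameter $p$,
    \[
    v(p) = \min_{x \in C} f(x, p), \qquad S(p) = \operatorname*{arg\,min}_{x \in C} f(x, p),
    \]
    the directional derivative of $v$ exists and is given by
    \[
    v'(p; d) = \min_{x \in S(p)} \langle \nabla_p f(x, p),\, d \rangle .
    \]
    The paper's directional subdifferential estimates generalize this picture to nonsmooth data and parameter-dependent constraints.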

    Conic Optimization: Optimal Partition, Parametric, and Stability Analysis

    A linear conic optimization problem consists of the minimization of a linear objective function over the intersection of an affine space and a closed convex cone. In recent years, linear conic optimization has received significant attention, partly due to the fact that we can take advantage of linear conic optimization to reformulate and approximate intractable optimization problems. Steady advances in computational optimization have enabled us to approximately solve a wide variety of linear conic optimization problems in polynomial time. Nevertheless, preprocessing methods, rounding procedures and sensitivity analysis tools are still the missing parts of conic optimization solvers. Given the output of a conic optimization solver, we need methodologies to generate approximate complementary solutions or to speed up the convergence to an exact optimal solution. A preprocessing method reduces the size of a problem by finding the minimal face of the cone which contains the set of feasible solutions. However, such a preprocessing method assumes the knowledge of an exact solution. More importantly, we need robust sensitivity and post-optimal analysis tools for an optimal solution of a linear conic optimization problem. Motivated by the vital importance of linear conic optimization, we take active steps to fill this gap. This thesis is concerned with several aspects of a linear conic optimization problem, from algorithm through solution identification, to parametric analysis, which have not been fully addressed in the literature. We specifically focus on three special classes of linear conic optimization problems, namely semidefinite and second-order conic optimization, and their common generalization, symmetric conic optimization. We propose a polynomial time algorithm for symmetric conic optimization problems.
    We show how to approximate/identify the optimal partition of semidefinite optimization and second-order conic optimization, a concept which has its origin in linear optimization. Further, we use the optimal partition information either to generate an approximate optimal solution or to speed up the convergence of a solution identification process to the unique optimal solution of the problem. Finally, we study the parametric analysis of semidefinite and second-order conic optimization problems. We investigate the behavior of the optimal partition and the optimal set mapping under perturbation of the objective function vector.
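    In standard form, the linear conic problem described in the first sentence of this abstract can be written as
    \[
    \min_{x} \; \langle c, x \rangle \quad \text{s.t.} \quad A x = b, \; x \in \mathcal{K},
    \]
    where $\mathcal{K}$ is a closed convex cone. Taking $\mathcal{K}$ to be the nonnegative orthant recovers linear programming, the second-order (Lorentz) cone gives second-order conic optimization, and the cone of positive semidefinite matrices gives semidefinite optimization; the symmetric cones studied in the thesis unify these three cases.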

    On partial smoothness, tilt stability and the VU-decomposition

    Under the assumption of prox-regularity and the presence of a tilt-stable local minimum, we show that a VU-like decomposition gives rise to the existence of a smooth manifold on which the function in question coincides locally with a smooth function.

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.
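    As a minimal illustration of the kind of quadrature rule discussed in this seminar (not an example from the seminar itself), the following sketch uses NumPy's Gauss-Legendre nodes and weights, which make an n-point rule exact for polynomials of degree up to 2n - 1:

    ```python
    import numpy as np

    # 5-point Gauss-Legendre rule on [-1, 1]:
    # exact for polynomials of degree <= 2*5 - 1 = 9.
    nodes, weights = np.polynomial.legendre.leggauss(5)

    # Integrate f(x) = x^4 over [-1, 1]; the exact value is 2/5.
    approx = float(np.sum(weights * nodes**4))
    print(approx)  # ≈ 0.4 up to floating-point roundoff
    ```

    Because x^4 has degree 4 ≤ 9, the rule reproduces the integral exactly (up to rounding), which is the kind of algebraic-degree property whose error analysis the orthogonal-polynomial theory in this seminar addresses.
    
    
    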