
    Asymptotic stationarity and regularity for nonsmooth optimization problems

    Based on the tools of limiting variational analysis, we derive a sequential necessary optimality condition for nonsmooth mathematical programs which holds without any additional assumptions. In order to ensure that stationary points in this new sense are already Mordukhovich-stationary, the presence of a constraint qualification which we call AM-regularity is necessary. We investigate the relationship between AM-regularity and other constraint qualifications from nonsmooth optimization such as metric (sub-)regularity of the underlying feasibility mapping. Our findings are applied to optimization problems with geometric and, particularly, disjunctive constraints. This way, it is shown that AM-regularity recovers recently introduced cone-continuity-type constraint qualifications, sometimes referred to as AKKT-regularity, from standard nonlinear and complementarity-constrained optimization. Finally, we discuss some consequences of AM-regularity for the limiting variational calculus.
    Comment: 30 pages, 2 figures
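For orientation, the sequential (approximate) stationarity conditions alluded to above can be sketched for a standard nonlinear program; this is the classical AKKT condition, stated here in smooth finite-dimensional form rather than in the paper's general nonsmooth setting.

```latex
% AKKT for  \min f(x)  s.t.  g_i(x) \le 0,\ i = 1,\dots,m:
% a feasible point \bar x is AKKT-stationary if there exist sequences
% x^k \to \bar x and multipliers \lambda^k \in \mathbb{R}^m_{+} with
\nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla g_i(x^k) \to 0,
\qquad
\lambda_i^k = 0 \ \text{whenever}\ g_i(\bar x) < 0 .
```

Constraint qualifications of AKKT-regularity (cone-continuity) type are precisely those guaranteeing that such asymptotically stationary points are genuine KKT points.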

    Contributions to complementarity and bilevel programming in Banach spaces

    In this thesis, we derive necessary optimality conditions for bilevel programming problems (BPPs for short) in Banach spaces. This rather abstract setting reflects our desire to characterize the local optimal solutions of hierarchical optimization problems in function spaces arising from several applications. Since our considerations are based on the tools of variational analysis introduced by Boris Mordukhovich, we study related properties of pointwise defined sets in function spaces. The presence of sequential normal compactness for such sets in Lebesgue and Sobolev spaces as well as the variational geometry of decomposable sets in Lebesgue spaces is discussed. Afterwards, we investigate mathematical problems with complementarity constraints (MPCCs for short) in Banach spaces which are closely related to BPPs. We introduce reasonable stationarity concepts and constraint qualifications which can be used to handle MPCCs. The relations between the mentioned stationarity notions are studied in the setting where the underlying complementarity cone is polyhedric. The results are applied to the situations where the complementarity cone equals the nonnegative cone in a Lebesgue space or is polyhedral. Next, we use the three main approaches of transforming a BPP into a single-level program (namely the presence of a unique lower level solution, the KKT approach, and the optimal value approach) to derive necessary optimality conditions for BPPs. Furthermore, we comment on the relation between the original BPP and the respective surrogate problem. We apply our findings to formulate necessary optimality conditions for three different classes of BPPs. First, we study a BPP with semidefinite lower level problem possessing a unique solution. Afterwards, we deal with bilevel optimal control problems with dynamical systems of ordinary differential equations at both decision levels. Finally, an optimal control problem of ordinary or partial differential equations with implicitly given pointwise state constraints is investigated.
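The optimal value approach mentioned above can be sketched schematically as follows; the symbols here are generic placeholders (the thesis works in Banach spaces and with more general constraint structures).

```latex
% Bilevel program:  \min_{x,y} F(x,y)  s.t.  y \in S(x),  where
S(x) := \operatorname*{argmin}_{y} \{\, f(x,y) \mid g(x,y) \le 0 \,\},
\qquad
\varphi(x) := \inf_{y} \{\, f(x,y) \mid g(x,y) \le 0 \,\}.
% Optimal value (single-level) reformulation:
\min_{x,y}\ F(x,y)
\quad \text{s.t.} \quad
g(x,y) \le 0,\qquad f(x,y) \le \varphi(x).
```

The reformulation is fully equivalent to the original BPP with respect to global minimizers, but the constraint \(f(x,y) \le \varphi(x)\) involves the typically nonsmooth value function \(\varphi\), which is why variational-analysis tools are needed to state optimality conditions.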

    On implicit variables in optimization theory

    Implicit variables of a mathematical program are variables which do not need to be optimized but are used to model feasibility conditions. They frequently appear in several different problem classes of optimization theory comprising bilevel programming, evaluated multiobjective optimization, or nonlinear optimization problems with slack variables. In order to deal with implicit variables, they are often interpreted as explicit ones. Here, we first point out that this is a light-headed approach which induces artificial locally optimal solutions. Afterwards, we derive various Mordukhovich-stationarity-type necessary optimality conditions which correspond to treating the implicit variables as explicit ones on the one hand, or using them only implicitly to model the constraints on the other. A detailed comparison of the obtained stationarity conditions as well as the associated underlying constraint qualifications will be provided. Overall, we proceed in a fairly general setting relying on modern tools of variational analysis. Finally, we apply our findings to different well-known problem classes of mathematical optimization in order to visualize the obtained theory.
    Comment: 33 pages
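A minimal illustration of the phenomenon, using the standard slack reformulation from nonlinear programming:

```latex
% Original problem and its slack reformulation:
\min_{x}\ f(x)\ \ \text{s.t.}\ \ g(x) \le 0
\qquad\Longleftrightarrow\qquad
\min_{x,s}\ f(x)\ \ \text{s.t.}\ \ g(x) + s = 0,\ \ s \ge 0 .
% Here s is an implicit variable: it is uniquely determined by x
% (namely s = -g(x)) and is not itself an object of the optimization,
% yet it is commonly treated as an explicit decision variable.
```

As the abstract points out, blindly treating such implicit variables as explicit ones can induce artificial locally optimal solutions in the reformulated problem that do not correspond to local minimizers of the original one.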

    Convergence Properties of Monotone and Nonmonotone Proximal Gradient Methods Revisited

    Composite optimization problems, where the sum of a smooth and a merely lower semicontinuous function has to be minimized, are often tackled numerically by means of proximal gradient methods as soon as the lower semicontinuous part of the objective function is of simple enough structure. The available convergence theory associated with these methods (mostly) requires the derivative of the smooth part of the objective function to be (globally) Lipschitz continuous, and this might be a restrictive assumption in some practically relevant scenarios. In this paper, we readdress this classical topic and provide convergence results for the classical (monotone) proximal gradient method and one of its nonmonotone extensions which are applicable in the absence of (strong) Lipschitz assumptions. This is possible since, at the price of forgoing convergence rates, we omit the use of descent-type lemmas in our analysis.
    Comment: 23 pages
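The monotone proximal gradient iteration for such composite problems can be sketched as follows. This is a generic illustration, not the paper's exact method: the backtracking rule, the sufficient-decrease parameter `delta`, and the L1 proximal operator used in the example are standard textbook choices. Note how backtracking on the step size enforces decrease of the full objective directly, so no global Lipschitz constant for the gradient is ever required.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, f, prox_g, g, x0, gamma0=1.0, beta=0.5,
                      delta=1e-4, max_iter=500, tol=1e-8):
    """Monotone proximal gradient method with step-size backtracking.

    Minimizes f(x) + g(x), where f is smooth and g is 'prox-friendly'.
    Instead of relying on a descent lemma (which needs a Lipschitz
    gradient), each step backtracks until the composite objective
    satisfies a sufficient-decrease test.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gamma = gamma0
        gx = grad_f(x)
        while True:
            # proximal gradient trial step with current step size gamma
            x_new = prox_g(x - gamma * gx, gamma)
            d = x_new - x
            # sufficient decrease of the full objective f + g
            if f(x_new) + g(x_new) <= f(x) + g(x) - (delta / gamma) * np.dot(d, d):
                break
            gamma *= beta  # shrink the step size and retry
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

For instance, for the one-dimensional LASSO-type problem `min 0.5*(x - 3)**2 + |x|`, whose unique minimizer is `x = 2`, calling `proximal_gradient` with `prox_g = soft_threshold` recovers the solution in a handful of iterations.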

    Why second-order sufficient conditions are, in a way, easy -- or -- revisiting calculus for second subderivatives

    In this paper, we readdress the classical topic of second-order sufficient optimality conditions for optimization problems with nonsmooth structure. Based on the so-called second subderivative of the objective function and of the indicator function associated with the feasible set, one easily obtains second-order sufficient optimality conditions of abstract form. In order to exploit further structure of the problem, e.g., composite terms in the objective function or feasible sets given as (images of) pre-images of closed sets under smooth transformations, to make these conditions fully explicit, we study calculus rules for the second subderivative under mild conditions. To be precise, we investigate a chain rule and a marginal function rule, which then also give a pre-image and image rule, respectively. As it turns out, the chain rule and the pre-image rule yield lower estimates desirable in order to obtain sufficient optimality conditions for free. Similar estimates for the marginal function and the image rule are valid under a comparatively mild inner calmness* assumption. Our findings are illustrated by several examples including problems from composite, disjunctive, and nonlinear second-order cone programming.
    Comment: 43 pages
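For reference, the second subderivative underlying these conditions is, in the sense standard in variational analysis (finite-dimensional sketch, for a function \(f\), point \(\bar x\), and subgradient candidate \(\bar v\)):

```latex
\mathrm{d}^2 f(\bar x \mid \bar v)(w)
\;:=\;
\liminf_{\substack{t \downarrow 0 \\ w' \to w}}
\frac{f(\bar x + t w') - f(\bar x) - t \,\langle \bar v, w' \rangle}{\tfrac{1}{2}\, t^2}.
```

The abstract second-order sufficient condition then requires, at a stationary point, positivity of the second subderivative of the objective plus the indicator function of the feasible set in all nonzero directions; the calculus rules studied in the paper serve to make this condition explicit for structured problems.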

    Notes on the value function approach to multiobjective bilevel optimization

    This paper is concerned with the value function approach to multiobjective bilevel optimization which exploits a lower level frontier-type mapping in order to replace the hierarchical model of two interdependent multiobjective optimization problems by a single-level multiobjective optimization problem. As a starting point, different value-function-type reformulations are suggested and their relations are discussed. Here, we focus on the situations where the lower level problem is solved up to efficiency or weak efficiency, and an intermediate solution concept is suggested as well. We study the graph-closedness of the associated efficiency-type and frontier-type mappings. These findings are then used for two purposes. First, we investigate existence results in multiobjective bilevel optimization. Second, for the derivation of necessary optimality conditions via the value function approach, it is inherent to differentiate frontier-type mappings in a generalized way. Here, we are concerned with the computation of upper coderivative estimates for the frontier-type mapping associated with the setting where the lower level problem is solved up to weak efficiency. We proceed in two ways, relying, on the one hand, on a weak domination property and, on the other hand, on a scalarization approach. Throughout the paper, illustrative examples visualize our findings, the necessity of crucial assumptions, and some flaws in the related literature.
    Comment: 30 pages
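Schematically, and with generic symbols rather than the paper's notation, a frontier-type reformulation for a lower level solved up to weak efficiency can be sketched as follows; this mirrors the scalar optimal value approach, with the scalar value function replaced by a set-valued frontier mapping.

```latex
% Frontier mapping: (weakly) efficient values of the lower level at x
\Phi(x) := \bigl\{\, f(x,y) \;\big|\; y \ \text{weakly efficient for the lower level problem at}\ x \,\bigr\}.
% Single-level surrogate of the multiobjective bilevel program:
\min_{x,y}\ F(x,y)
\quad \text{s.t.} \quad
y \ \text{lower level feasible},\qquad f(x,y) \in \Phi(x).
```

Deriving necessary optimality conditions from this surrogate requires generalized differentiation (coderivatives) of the set-valued mapping \(\Phi\), which is exactly the technical core addressed in the paper.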