Second-order optimality conditions for problems with C1 data
Abstract: In this paper we obtain second-order optimality conditions of Karush–Kuhn–Tucker and Fritz John type for a problem with inequality constraints and a set constraint in a nonsmooth setting, using second-order directional derivatives. In the necessary conditions we suppose that the objective function and the active constraints are continuously differentiable, but their gradients are not necessarily locally Lipschitz. In the sufficient conditions for a global minimum x¯ we assume that the objective function is differentiable at x¯ and second-order pseudoconvex at x¯, a notion introduced by the authors [I. Ginchev, V.I. Ivanov, Higher-order pseudoconvex functions, in: I.V. Konnov, D.T. Luc, A.M. Rubinov (Eds.), Generalized Convexity and Related Topics, in: Lecture Notes in Econom. and Math. Systems, vol. 583, Springer, 2007, pp. 247–264], and that the constraints are both differentiable and quasiconvex at x¯. In the sufficient conditions for an isolated local minimum of order two we suppose that the problem belongs to the class C1,1. We show that these conditions do not hold for C1 problems that are not C1,1. Finally, a new notion, the parabolic local minimum, is defined and applied to extend the sufficient conditions for an isolated local minimum from problems with C1,1 data to problems with C1 data.
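Not part of the paper; a rough finite-difference sketch, under illustrative assumptions, of the kind of second-order directional derivative the abstract refers to. The function name and the test point (f(x) = x², x = 0) are chosen here for illustration only.

```python
def second_order_dir(f, fprime_dir, x, u, t=1e-4):
    # Finite-difference sketch of a second-order (parabolic)
    # directional derivative:
    #   f''(x; u) ~ (2/t^2) * (f(x + t*u) - f(x) - t * f'(x; u))
    # where fprime_dir is the first-order directional derivative f'(x; u).
    return 2.0 * (f(x + t * u) - f(x) - t * fprime_dir) / t**2

f = lambda x: x * x          # smooth toy objective, minimizer at x = 0
val = second_order_dir(f, fprime_dir=0.0, x=0.0, u=1.0)
# val is approximately 2 > 0: positivity of this quantity along all
# directions is the flavor of second-order sufficient condition for an
# isolated local minimum of order two discussed in the abstract.
```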
First-Order Conditions for Optimization Problems with Quasiconvex Inequality Constraints
2000 Mathematics Subject Classification: 90C46, 90C26, 26B25, 49J52. The constrained optimization problem min f(x), gj(x) ≤ 0 (j = 1,…,p) is considered, where f : X → R and gj : X → R are nonsmooth functions with domain X ⊂ Rn. First-order necessary and first-order sufficient optimality conditions are obtained when the gj are quasiconvex functions. The paper has two main features: to treat nonsmooth problems it makes use of Dini derivatives, and to obtain more sensitive conditions it admits directionally dependent multipliers. Two cases are considered, in which the Lagrange function satisfies a non-strict and a strict inequality, respectively. In the case of a non-strict inequality, pseudoconvex functions are involved, and in their terms some properties of convex programming problems are generalized. The efficiency of the obtained conditions is illustrated with examples.
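Not from the paper; a minimal numerical sketch of the lower Dini directional derivative the abstract relies on, evaluated for the nonsmooth function f(x) = |x| at x = 0 (an illustrative choice, not an example from the source).

```python
def dini_lower(f, x, u, ts=(1e-3, 1e-4, 1e-5, 1e-6)):
    # Approximate the lower Dini directional derivative
    #   D_- f(x; u) = liminf_{t -> 0+} (f(x + t*u) - f(x)) / t
    # by taking the smallest difference quotient over a few small t > 0.
    return min((f(x + t * u) - f(x)) / t for t in ts)

f = abs                      # nonsmooth at 0, no classical derivative there
d_plus = dini_lower(f, 0.0, 1.0)    # derivative in direction u = +1
d_minus = dini_lower(f, 0.0, -1.0)  # derivative in direction u = -1
# Both quotients equal 1 > 0: the Dini derivative is positive in every
# direction, consistent with x = 0 being a minimizer of |x| even though
# the function is not differentiable there.
```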
Isolated minimizers and proper efficiency for C0,1 constrained vector optimization problems.
We consider the vector optimization problem min_C f(x), g(x) ∈ −K, where f : Rn → Rm and g : Rn → Rp are C0,1 (i.e. locally Lipschitz) functions and C ⊂ Rm and K ⊂ Rp are closed convex cones. We give several notions of solution (efficiency concepts), among them the notion of properly efficient point (p-minimizer) of order k and the notion of isolated minimizer of order k. We show that each isolated minimizer of order k ≥ 1 is a p-minimizer of order k. The possible reversal of this statement in the case k = 1 is studied through first-order necessary and sufficient conditions in terms of Dini derivatives. Observing that the optimality conditions for the constrained problem coincide with those for a suitable unconstrained problem, we introduce sense I solutions (those of the initial constrained problem) and sense II solutions (those of the unconstrained problem). Further, we obtain relations between sense I and sense II isolated minimizers and p-minimizers.
The Radius of Metric Subregularity
There is a basic paradigm, called here the radius of well-posedness, which quantifies the "distance" from a given well-posed problem to the set of ill-posed problems of the same kind. In variational analysis, well-posedness is often understood as a regularity property, which is usually employed to measure the effect of perturbations and approximations of a problem on its solutions. In this paper we focus on evaluating the radius of the property of metric subregularity which, in contrast to its siblings, metric regularity, strong regularity and strong subregularity, exhibits a more complicated behavior under various perturbations. We consider three kinds of perturbations: by Lipschitz continuous functions, by semismooth functions, and by smooth functions, obtaining different expressions/bounds for the radius of subregularity, which involve generalized derivatives of set-valued mappings. We also obtain different expressions when using either the Frobenius or the Euclidean norm to measure the radius. As an application, we evaluate the radius of subregularity of a general constraint system. Examples illustrate the theoretical findings.
Calmness of partially perturbed linear systems with an application to the central path
In this paper we develop point-based formulas for the calmness modulus of the feasible set mapping in the context of linear inequality systems with a fixed abstract constraint and (partially) perturbed linear constraints. The case of totally perturbed linear systems was previously analyzed in [Cánovas MJ, López MA, Parra J, et al. Calmness of the feasible set mapping for linear inequality systems. Set-Valued Var Anal. 2014;22:375–389, Section 5]. We point out that the presence of such an abstract constraint leads the current paper to appeal to a notably different methodology with respect to previous works on the calmness modulus in linear programming. The interest of this model comes from the fact that partially perturbed systems naturally appear in many applications. As an illustration, the paper includes an example related to the classical central path construction. In this example we consider a certain feasible set mapping whose calmness modulus provides a measure of the convergence of the central path. Finally, we underline the fact that the expression for the calmness modulus obtained in this paper is (conceptually) implementable insofar as it only involves the nominal data. This research has been partially supported by Grant MTM2014-59179-C2-(1,2)-P from MINECO, Spain, and FEDER 'Una manera de hacer Europa', European Union.
Optimality conditions in convex multiobjective SIP
The purpose of this paper is to characterize the weak efficient solutions, the efficient solutions, and the isolated efficient solutions of a given vector optimization problem with finitely many convex objective functions and infinitely many convex constraints. To do this, we introduce new and already known data qualifications (conditions involving the constraints and/or the objectives) in order to get optimality conditions which are expressed in terms of either Karush–Kuhn–Tucker multipliers or a new gap function associated with the given problem. This research was partially cosponsored by the Ministry of Economy and Competitiveness (MINECO) of Spain, and by the European Regional Development Fund (ERDF) of the European Commission, Project MTM2014-59179-C2-1-P.