
    Optimality conditions for scalar and vector optimization problems with quasiconvex inequality constraints

    Let $X$ be a real linear space, $X_0 \subseteq X$ a convex set, and $Y$ and $Z$ topological real linear spaces. The constrained optimization problem $\min_C f(x)$, $g(x) \in -K$, is considered, where $f : X_0 \to Y$ and $g : X_0 \to Z$ are given (nonsmooth) functions, and $C \subseteq Y$ and $K \subseteq Z$ are closed convex cones. The weakly efficient solutions (w-minimizers) of this problem are investigated. When $g$ obeys quasiconvexity properties, first-order necessary and first-order sufficient optimality conditions in terms of Dini directional derivatives are obtained. In the special case of problems with pseudoconvex data it is shown that these conditions characterize the global w-minimizers and generalize known results from convex vector programming. The results are applied to the special case of problems with finite-dimensional image spaces whose ordering cones are the positive orthants, in particular to scalar problems with quasiconvex constraints. It is shown that the quasiconvexity of the constraints allows the optimality conditions to be formulated with the simpler single-valued Dini derivatives instead of the set-valued ones. Key words: Vector optimization, nonsmooth optimization, quasiconvex vector functions, pseudoconvex vector functions, Dini derivatives, quasiconvex programming, Kuhn-Tucker conditions.
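    For orientation, the scalar Dini directional derivatives in which such first-order conditions are typically phrased are the standard lower and upper limits of first-order difference quotients (a background definition, not a construction specific to this paper): for $f : X_0 \to \mathbb{R}$, $x \in X_0$ and a direction $u$,

    $$ f'_-(x; u) = \liminf_{t \downarrow 0} \frac{f(x + t u) - f(x)}{t}, \qquad f'_+(x; u) = \limsup_{t \downarrow 0} \frac{f(x + t u) - f(x)}{t}. $$

    In the vector-valued case the corresponding derivatives are in general set-valued, which is why the reduction to single-valued Dini derivatives under quasiconvex constraints mentioned above is a genuine simplification.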

    A Minty variational principle for set optimization

    Extremal problems are studied involving an objective function with values in (order) complete lattices of sets generated by so-called set relations. Contrary to the popular paradigm in vector optimization, the solution concept for such problems, introduced by F. Heyde and A. Löhne, comprises the attainment of the infimum as well as a minimality property. The main result is a Minty-type variational inequality for set optimization problems which provides a sufficient optimality condition under lower semicontinuity assumptions and a necessary condition under appropriate generalized convexity assumptions. The variational inequality is based on a new Dini directional derivative for set-valued functions which is defined in terms of a "lattice difference quotient": a residual operation in a lattice of sets replaces the inverse addition in linear spaces. Relationships to families of scalar problems are pointed out and used for proofs: the appearance of improper scalarizations poses a major difficulty which is dealt with by extending known scalar results such as Diewert's theorem to improper functions.
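    As a point of reference, the classical scalar prototype (recalled here only for background, with a smooth objective $f$ and a convex feasible set $K$) is the Minty variational inequality

    $$ \langle \nabla f(y),\, y - \bar{x} \rangle \ge 0 \qquad \text{for all } y \in K, $$

    whose solutions $\bar{x} \in K$ are global minimizers of $f$ over $K$, while every minimizer solves it when $f$ is pseudoconvex. The paper's result transfers this sufficiency/necessity pattern to set-valued objectives via the lattice-based Dini derivative described above.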

    Some topics in variational analysis and optimization

    In this thesis, we first study the theory of Γ-limits. Besides some basic properties of Γ-limits, expressions of sequential Γ-limits generalizing classical results of Greco are presented. These limits also give us a clue to a unified classification of derivatives and tangent cones. Next, we develop an approach to generalized differentiation theory. This allows us to deal with several generalized derivatives of set-valued maps defined directly in primal spaces, such as variational sets, radial sets, radial derivatives, and Studniarski derivatives. Finally, we study calculus rules for these derivatives and applications related to optimality conditions and sensitivity analysis.
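    As a familiar reference point (how it relates precisely to the generalized Γ-limits studied in the thesis is not spelled out here), the classical sequential Γ-limits of De Giorgi for functions $F_n : X \to \overline{\mathbb{R}}$ on a metric (or first-countable) space read

    $$ \Big(\Gamma\text{-}\liminf_{n\to\infty} F_n\Big)(x) = \inf\Big\{ \liminf_{n\to\infty} F_n(x_n) : x_n \to x \Big\}, \qquad \Big(\Gamma\text{-}\limsup_{n\to\infty} F_n\Big)(x) = \inf\Big\{ \limsup_{n\to\infty} F_n(x_n) : x_n \to x \Big\}, $$

    and the sequence Γ-converges at $x$ when the two values coincide.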

    Shape optimisation for a class of semilinear variational inequalities with applications to damage models

    The present contribution investigates shape optimisation problems for a class of semilinear elliptic variational inequalities with Neumann boundary conditions. Sensitivity estimates and material derivatives are first derived in an abstract operator setting where the operators are defined on polyhedral subsets of reflexive Banach spaces. The results are then refined for variational inequalities arising from minimisation problems for certain convex energy functionals considered over upper obstacle sets in $H^1$. One particularity is that we allow for dynamic obstacle functions, which may arise from other optimisation problems. We prove a strong convergence property for the material derivative and establish state-shape derivatives under regularity assumptions. Finally, as a concrete application from continuum mechanics, we show how the dynamic obstacle case can be used to treat shape optimisation problems for time-discretised brittle damage models for elastic solids. We derive a necessary optimality system for optimal shapes whose state variables approximate desired damage patterns and/or displacement fields.
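    To fix ideas, a generic instance of the kind of problem described above (the data $f$, the monotone semilinear term $g$ and the upper obstacle $\psi$ are illustrative placeholders, not the paper's precise operator setting) is: find $u$ in the upper obstacle set $K = \{ v \in H^1(\Omega) : v \le \psi \text{ a.e.} \}$ with

    $$ \int_\Omega \nabla u \cdot \nabla (v - u)\, dx + \int_\Omega g(u)\,(v - u)\, dx \;\ge\; \int_\Omega f\,(v - u)\, dx \qquad \text{for all } v \in K, $$

    which is the first-order condition for minimising a convex energy over $K$ when $g$ is the derivative of a convex potential; the domain $\Omega$ then enters as the shape variable of the outer optimisation.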

    Greedy vector quantization

    We investigate the greedy version of the $L^p$-optimal vector quantization problem for an $\mathbb{R}^d$-valued random vector $X \in L^p$. We show the existence of a sequence $(a_N)_{N \ge 1}$ such that $a_N$ minimizes $a \mapsto \big\| \min_{1 \le i \le N-1} |X - a_i| \wedge |X - a| \big\|_{L^p}$ (the $L^p$-mean quantization error at level $N$ induced by $(a_1, \ldots, a_{N-1}, a)$). We show that this sequence produces $L^p$-rate-optimal $N$-tuples $a^{(N)} = (a_1, \ldots, a_N)$ (i.e. the $L^p$-mean quantization error at level $N$ induced by $a^{(N)}$ goes to $0$ at rate $N^{-\frac{1}{d}}$). Greedy optimal sequences also satisfy, under natural additional assumptions, the distortion mismatch property: the $N$-tuples $a^{(N)}$ remain rate optimal with respect to the $L^q$-norms, $p \le q < p + d$. Finally, we propose optimization methods to compute greedy sequences, adapted from the usual Lloyd's I and Competitive Learning Vector Quantization procedures, in either their deterministic (implementable when $d = 1$) or stochastic versions. Comment: 31 pages, 4 figures, few typos corrected (now an extended version of an eponymous paper to appear in Journal of Approximation
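    As an illustration of the greedy principle (one new codeword per level, each chosen to minimise the quantization error given the codewords already fixed), here is a minimal sample-based sketch in dimension $d = 1$. It stands in for, and does not reproduce, the Lloyd I / CLVQ procedures mentioned above: a plain grid search over empirical quantiles plays the role of the optimisation step, and the function name and parameters are purely illustrative.

    import numpy as np

    def greedy_quantizer(samples, N, p=2, num_candidates=512):
        """Greedy L^p quantization sketch (d = 1, empirical approximation).

        Level by level, the next codeword is the grid candidate that minimises
        the empirical L^p quantization error induced by the codewords chosen
        so far together with that candidate.
        """
        grid = np.quantile(samples, np.linspace(0.0, 1.0, num_candidates))
        nearest = np.full(samples.shape, np.inf)   # distance to current codebook
        codebook = []
        for _ in range(N):
            # p-th power of the empirical error if candidate a were appended
            # (same minimiser as the error itself)
            errors = [np.mean(np.minimum(nearest, np.abs(samples - a)) ** p) for a in grid]
            a_new = grid[int(np.argmin(errors))]
            codebook.append(a_new)
            nearest = np.minimum(nearest, np.abs(samples - a_new))
        return np.array(codebook)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.normal(size=50_000)        # X ~ N(0, 1)
        print(np.sort(greedy_quantizer(x, N=8)))

    For $d > 1$ a grid is no longer practical, which is where stochastic procedures of CLVQ type, as proposed in the paper, become the natural choice.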