
    Subdifferential of the supremum function: moving back and forth between continuous and non-continuous settings

    In this paper we establish general formulas for the subdifferential of the pointwise supremum of convex functions, which cover and unify both the compact continuous and the non-compact non-continuous settings. From the non-continuous to the continuous setting, we proceed by a compactification-based approach which leads us to problems having compact index sets and upper semi-continuously indexed mappings, giving rise to new characterizations of the subdifferential of the supremum by means of upper semicontinuous regularized functions and an enlarged compact index set. In the opposite sense, we rewrite the subdifferential of these new regularized functions by using the original data, also leading us to new results on the subdifferential of the supremum. We give two applications in the last section, the first one concerning the nonconvex Fenchel duality, and the second one establishing Fritz-John and KKT conditions in convex semi-infinite programming. Research supported by CONICYT (Fondecyt 1190012 and 1190110), Proyecto/Grant PIA AFB-170001, MICIU of Spain and Universidad de Alicante (Grant Beatriz Galindo BEAGAL 18/00205), and Research Project PGC2018-097960-B-C21 from MICINN, Spain. The research of the third author is also supported by the Australian ARC - Discovery Projects DP 180100602.
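    For orientation, the compact continuous setting mentioned above is usually summarized by a Valadier-type formula; the following LaTeX sketch recalls it under standard assumptions (compact index set, upper semicontinuous index mapping, continuity of the functions at the reference point) and is background, not a statement taken from the paper.

    % Supremum of convex functions over a compact index set T, with
    % t \mapsto f_t(x) upper semicontinuous and each f_t continuous at \bar{x}.
    \[
      f(x) \;=\; \sup_{t \in T} f_t(x),
      \qquad
      T(\bar{x}) \;=\; \bigl\{\, t \in T : f_t(\bar{x}) = f(\bar{x}) \,\bigr\},
    \]
    \[
      \partial f(\bar{x})
      \;=\;
      \overline{\operatorname{conv}}
      \Bigl(\, \bigcup_{t \in T(\bar{x})} \partial f_t(\bar{x}) \Bigr),
    \]
    % where the closed convex hull is taken in the appropriate (weak*) topology.
    % The paper's formulas extend identities of this type beyond compactness
    % of T and (semi)continuity of t \mapsto f_t(x).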

    Exact Penalization and Necessary Optimality Conditions for Multiobjective Optimization Problems with Equilibrium Constraints

    A calmness condition for a general multiobjective optimization problem with equilibrium constraints is proposed. Some exact penalization properties for two classes of multiobjective penalty problems are established and shown to be equivalent to the calmness condition. Subsequently, a Mordukhovich stationary necessary optimality condition based on the exact penalization results is obtained. Moreover, some applications to a multiobjective optimization problem with complementarity constraints and a multiobjective optimization problem with weak vector variational inequality constraints are given.
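    As background for the calmness condition, the scalar (single-objective) analogue of the calmness/exact-penalty link is recalled below in LaTeX; the multiobjective and equilibrium-constrained versions developed in the paper are more involved, so this is only an illustrative sketch under standard assumptions (f locally Lipschitz near the local minimizer \bar{x}).

    % Perturbed feasible set S(p) = \{ x \in X : g(x) + p \in C \}.
    % Calmness of S at (0, \bar{x}): there exist \kappa, \delta > 0 with
    \[
      S(p) \cap \mathbb{B}(\bar{x},\delta)
      \;\subseteq\;
      S(0) + \kappa\,\|p\|\,\mathbb{B}
      \qquad \text{whenever } \|p\| \le \delta .
    \]
    % Under calmness, the distance-type penalty is exact: for all sufficiently
    % large \rho > 0, a local minimizer \bar{x} of f over S(0) is also a local
    % minimizer of the unconstrained problem
    \[
      \min_{x \in X} \; f(x) \;+\; \rho\, \operatorname{dist}\bigl(g(x), C\bigr).
    \]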

    Pseudonormality and a Lagrange multiplier theory for constrained optimization

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaves 211-213). Lagrange multipliers are central to analytical and computational studies in linear and non-linear optimization and have applications in a wide variety of fields, including communication, networking, economics, and manufacturing. In the past, the main research in Lagrange multiplier theory has focused on developing general and easily verifiable conditions on the constraint set, called constraint qualifications, that guarantee the existence of Lagrange multipliers for the optimization problem of interest. In this thesis, we present a new development of Lagrange multiplier theory that significantly differs from the classical treatments. Our objective is to generalize, unify, and streamline the theory of constraint qualifications. As a starting point, we derive an enhanced set of necessary optimality conditions of the Fritz John type, which are stronger than the classical Karush-Kuhn-Tucker conditions. They are also more general in that they apply even when there is a possibly nonconvex abstract set constraint, in addition to smooth equality and inequality constraints. These optimality conditions motivate the introduction of a new condition, called pseudonormality, which emerges as central within the taxonomy of significant characteristics of a constraint set. In particular, pseudonormality unifies and extends the major constraint qualifications. In addition, pseudonormality provides the connecting link between constraint qualifications and exact penalty functions. Our analysis also yields the identification of different types of Lagrange multipliers. Under some convexity assumptions, we show that there exists a special Lagrange multiplier vector, called informative, which carries significant sensitivity information regarding the constraints that directly affect the optimal cost change. In the second part of the thesis, we extend the theory to nonsmooth problems under convexity assumptions. We introduce another notion of multiplier, called geometric, that is not tied to a specific optimal solution and does not require differentiability of the cost and constraint functions. Using a line of development based on convex analysis, we develop Fritz John-type optimality conditions for problems that do not necessarily have optimal solutions. Through an extended notion of constraint pseudonormality, this development provides an alternative pathway to strong duality results of convex programming. We also introduce special geometric multipliers that carry sensitivity information and show their existence under very general conditions. By Asuman E. Ozdaglar. Ph.D.
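    For reference, the classical (non-enhanced) Fritz John conditions for a smooth problem with equality and inequality constraints are recalled below in LaTeX; the thesis strengthens them (additional complementary-violation information, an abstract set constraint handled via normal cones), so this is background rather than the thesis's own statement.

    % Problem: minimize f(x) subject to h_i(x) = 0, i = 1,...,m,
    %          and g_j(x) <= 0, j = 1,...,r.
    % Fritz John conditions: at a local minimizer x^*, there exist multipliers
    % (\mu_0, \lambda, \mu) \neq 0 with \mu_0 \ge 0 and \mu_j \ge 0 such that
    \[
      \mu_0 \nabla f(x^*)
      + \sum_{i=1}^{m} \lambda_i \nabla h_i(x^*)
      + \sum_{j=1}^{r} \mu_j \nabla g_j(x^*) = 0,
      \qquad
      \mu_j\, g_j(x^*) = 0 \ \ (j = 1,\dots,r).
    \]
    % The KKT conditions correspond to \mu_0 > 0 (one may then normalize
    % \mu_0 = 1); constraint qualifications such as pseudonormality rule out
    % the degenerate case \mu_0 = 0.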

    Relationships between the stochastic discount factor and the optimal omega ratio

    The omega ratio is an interesting performance measure because it focuses on both downside losses and upside gains, and financial markets are reflecting more and more asymmetry and heavy tails. This paper focuses on the omega ratio optimization in general Banach spaces, which applies for both infinite dimensional approaches related to continuous time stochastic pricing models (Black and Scholes, stochastic volatility, etc.) and more classical problems in portfolio selection. New algorithms will be provided, as well as Fritz John-like and Karush-Kuhn-Tucker-like optimality conditions and duality results, despite the fact that omega is neither differentiable nor convex. The optimality conditions will be applied to the most important pricing models of Financial Mathematics, and it will be shown that the optimal value of omega only depends on the upper and lower bounds of the pricing model stochastic discount factor. In particular, if the stochastic discount factor is unbounded (Black and Scholes, Heston, etc.) then the optimal omega ratio becomes unbounded too (it may tend to infinity), and the introduction of several financial constraints does not overcome this caveat. The new algorithms and optimality conditions will also apply to optimize omega in static frameworks, and it will be illustrated that both infinite- and finite-dimensional approaches may be useful to this purpose.
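    For reference, the omega ratio of a random return R at a benchmark level \theta has the following standard definition, recalled here in LaTeX as background (the symbols R, \theta and F_R are our notation, not the paper's).

    % Omega ratio: expected gains over the threshold divided by expected
    % shortfall below it; F_R denotes the distribution function of R.
    \[
      \Omega_R(\theta)
      \;=\;
      \frac{\mathbb{E}\bigl[(R-\theta)_+\bigr]}{\mathbb{E}\bigl[(\theta-R)_+\bigr]}
      \;=\;
      \frac{\int_{\theta}^{\infty} \bigl(1 - F_R(t)\bigr)\,dt}
           {\int_{-\infty}^{\theta} F_R(t)\,dt}.
    \]
    % Maximizing \Omega_R over portfolio choices is neither a differentiable
    % nor a convex problem, which is why Fritz John-like and KKT-like
    % conditions are developed rather than standard convex duality.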

    The Inflation Technique for Causal Inference with Latent Variables

    The problem of causal inference is to determine if a given probability distribution on observed variables is compatible with some causal structure. The difficult case is when the causal structure includes latent variables. We here introduce the inflation technique for tackling this problem. An inflation of a causal structure is a new causal structure that can contain multiple copies of each of the original variables, but where the ancestry of each copy mirrors that of the original. To every distribution of the observed variables that is compatible with the original causal structure, we assign a family of marginal distributions on certain subsets of the copies that are compatible with the inflated causal structure. It follows that compatibility constraints for the inflation can be translated into compatibility constraints for the original causal structure. Even if the constraints at the level of inflation are weak, such as observable statistical independences implied by disjoint causal ancestry, the translated constraints can be strong. We apply this method to derive new inequalities whose violation by a distribution witnesses that distribution's incompatibility with the causal structure (of which Bell inequalities and Pearl's instrumental inequality are prominent examples). We describe an algorithm for deriving all such inequalities for the original causal structure that follow from ancestral independences in the inflation. For three observed binary variables with pairwise common causes, it yields inequalities that are stronger in at least some aspects than those obtainable by existing methods. We also describe an algorithm that derives a weaker set of inequalities but is more efficient. Finally, we discuss which inflations are such that the inequalities one obtains from them remain valid even for quantum (and post-quantum) generalizations of the notion of a causal model. Comment: Minor final corrections, updated to match the published version as closely as possible.
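    As an example of the kind of causal-compatibility constraint that the inflation technique generalizes, Pearl's instrumental inequality (mentioned above) is recalled below in LaTeX in its standard form; this is background, not a result of the paper.

    % Instrumental scenario: instrument Z, treatment X, outcome Y, with a
    % latent common cause of X and Y and the causal ordering Z -> X -> Y.
    % A necessary condition for P(X, Y | Z) to be compatible with this
    % structure is Pearl's instrumental inequality:
    \[
      \max_{x} \, \sum_{y} \, \max_{z} \; P(X = x,\, Y = y \mid Z = z) \;\le\; 1 .
    \]
    % A distribution violating this inequality cannot arise from the
    % instrumental structure; the inflation technique produces analogous
    % witnesses of incompatibility for other latent-variable structures.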