31 research outputs found

    Solutions of max-plus linear equations and large deviations

    We generalise the Gärtner-Ellis theorem of large deviations theory. Our results allow us to derive large deviation type results in stochastic optimal control from the convergence of generalised logarithmic moment generating functions. They rely on the characterisation of the uniqueness of the solutions of max-plus linear equations. We give an illustration for a simple investment model, in which logarithmic moment generating functions represent risk-sensitive values. Comment: 6 pages
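    For context, a max-plus linear equation replaces the usual sum-and-multiply of linear algebra with max-and-add: a system A ⊗ x = b reads max_j (A[i,j] + x[j]) = b[i]. A minimal sketch of this product (the function name and data are illustrative, not from the paper):

    ```python
    import numpy as np

    def maxplus_matvec(A, x):
        """Max-plus matrix-vector product: (A ⊗ x)[i] = max_j (A[i, j] + x[j])."""
        return np.max(A + x[np.newaxis, :], axis=1)

    A = np.array([[0.0, -1.0],
                  [2.0, 0.5]])
    x = np.array([1.0, 3.0])
    b = maxplus_matvec(A, x)
    # b[0] = max(0 + 1, -1 + 3) = 2.0, b[1] = max(2 + 1, 0.5 + 3) = 3.5
    ```

    The paper's uniqueness question concerns the inverse problem: given A and b, when does exactly one x satisfy this relation.
    
    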

    Constant Along Primal Rays Conjugacies and the l0 Pseudonorm

    The so-called l0 pseudonorm on R^d counts the number of nonzero components of a vector. It is used in sparse optimization, either as a criterion or in the constraints, to obtain solutions with few nonzero entries. For such problems, the Fenchel conjugacy fails to provide relevant analysis: indeed, the Fenchel conjugate of the characteristic function of the level sets of the l0 pseudonorm is minus infinity, and the Fenchel biconjugate of the l0 pseudonorm is zero. In this paper, we display a class of conjugacies that are suitable for the l0 pseudonorm. For this purpose, we suppose we are given a (source) norm on R^d. With this norm, we define, on the one hand, a sequence of so-called coordinate-k norms and, on the other hand, a coupling between R^d and R^d, called Capra (constant along primal rays). Then, we provide formulas for the Capra-conjugate and biconjugate, and for the Capra subdifferentials, of functions of the l0 pseudonorm (hence, in particular, of the l0 pseudonorm itself and of the characteristic functions of its level sets), in terms of the coordinate-k norms. As an application, we provide a new family of lower bounds for the l0 pseudonorm, as a fraction between two norms, the denominator being any norm.
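    A classical instance of such a norm-ratio lower bound (not necessarily the family derived in the paper) is l0(x) ≥ ‖x‖₁ / ‖x‖∞ for x ≠ 0, since ‖x‖₁ sums at most l0(x) terms each bounded by ‖x‖∞. A quick numerical check:

    ```python
    import numpy as np

    def l0(x):
        """The l0 pseudonorm: number of nonzero components of x."""
        return np.count_nonzero(x)

    x = np.array([3.0, 0.0, -1.0, 0.0, 2.0])
    ratio = np.linalg.norm(x, 1) / np.linalg.norm(x, np.inf)  # 6.0 / 3.0 = 2.0
    assert ratio <= l0(x)  # 2.0 <= 3: the ratio underestimates the sparsity level
    ```
    
    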

    Set optimization - a rather short introduction

    Recent developments in set optimization are surveyed and extended, including various set relations as well as fundamental constructions of a convex analysis for set- and vector-valued functions, and duality for set optimization problems. Extensive sections with bibliographical comments summarize the state of the art. Applications to vector optimization and financial risk measures are discussed along with algorithmic approaches to set optimization problems.

    Conditions for global minimum through abstract convexity

    The theory of abstract convexity generalizes ideas of convex analysis by using the notion of global supports and the global definition of subdifferential. In order to apply this theory to optimization, we need to extend subdifferential calculus and separation properties into the area of abstract convexity. (Doctor of Philosophy thesis)
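    For reference, in the standard (Rubinov-style) framework a function f is abstract convex with respect to a set H of elementary functions when it is the upper envelope of its H-minorants, and an element of H supporting f globally at a point belongs to the abstract subdifferential; this is a common formulation, not necessarily the exact one adopted in the thesis:

    ```latex
    f(x) = \sup\{\, h(x) : h \in H,\ h \le f \,\},
    \qquad
    \partial_H f(x_0) = \{\, h \in H : f(x) \ge f(x_0) + h(x) - h(x_0)\ \ \forall x \,\}.
    ```

    Taking H to be the affine functions recovers classical convexity and the usual subdifferential.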

    Weak Minimizers, Minimizers and Variational Inequalities for Set-Valued Functions. A Blooming Wreath?

    In the literature, necessary and sufficient conditions in terms of variational inequalities are introduced to characterize minimizers of convex set-valued functions with values in a conlinear space. Similar results are proved for a weaker concept of minimizers and weaker variational inequalities. The implications are proved using scalarization techniques that eventually provide original problems, not fully equivalent to the set-valued counterparts. Therefore, we try, in the course of this note, to close the network among the various notions proposed. More specifically, we prove that a minimizer is always a weak minimizer, and a solution to the stronger variational inequality is always also a solution to the weak variational inequality of the same type. As a special case we obtain a complete characterization of efficiency and weak efficiency in vector optimization by set-valued variational inequalities and their scalarizations. Indeed, this might eventually prove the usefulness of the set-optimization approach to renew the study of vector optimization.