
    Some sensitivity results in stochastic optimal control: A Lagrange multiplier point of view

    In this work we provide a first-order sensitivity analysis of some parameterized stochastic optimal control problems, where the parameters can be given by random processes. The main tool is the one-to-one correspondence between the adjoint states appearing in a weak form of the stochastic Pontryagin principle and the Lagrange multipliers associated with the state equation.

    Entropic Gromov-Wasserstein Distances: Stability, Algorithms, and Distributional Limits

    The Gromov-Wasserstein (GW) distance quantifies the discrepancy between metric measure spaces, but suffers from computational hardness. The entropic Gromov-Wasserstein (EGW) distance serves as a computationally efficient proxy for the GW distance. Recently, it was shown that the quadratic GW and EGW distances admit variational forms that tie them to the well-understood optimal transport (OT) and entropic OT (EOT) problems. By leveraging this connection, we derive two notions of stability for the EGW problem with the quadratic or inner product cost. The first stability notion enables us to establish convexity and smoothness of the objective in this variational problem, which yields the first efficient algorithms for solving the EGW problem that are subject to formal guarantees in both the convex and non-convex regimes. The second stability notion is used to derive a comprehensive limit distribution theory for the empirical EGW distance and, under additional conditions, asymptotic normality, bootstrap consistency, and semiparametric efficiency thereof. Comment: 66 pages, 3 figures.
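    The EOT problem that the variational forms above connect to is commonly solved by Sinkhorn's matrix-scaling iterations. As a minimal, illustrative sketch of that standard solver (this is not the paper's EGW algorithm; the function name and all parameter values below are chosen for illustration):

```python
import numpy as np

def sinkhorn(C, mu, nu, eps=1.0, n_iter=1000):
    """Entropic OT between discrete marginals mu, nu for cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel of the cost
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)               # rescale to match column marginal
        u = mu / (K @ v)                 # rescale to match row marginal
    P = u[:, None] * K * v[None, :]      # entropic optimal coupling
    return P, float(np.sum(P * C))       # coupling and transport cost
```

    Each pass rescales the Gibbs kernel so that the coupling matches the two marginals; the regularization parameter eps trades approximation accuracy against convergence speed.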

    Non-Smooth Optimization by Abs-Linearization in Reflexive Function Spaces

    Non-smooth optimization problems in reflexive Banach spaces arise in many applications. Frequently, all non-differentiabilities involved are assumed to be given by Lipschitz-continuous operators such as abs, min and max. 
For example, such problems can be optimal control problems with possibly non-smooth objective functionals constrained by partial differential equations (PDEs) which may themselves contain non-smooth terms. Their efficient and robust solution requires numerical simulations combined with specific optimization algorithms. Locally Lipschitz-continuous non-smooth non-linearities, described by appropriate Nemytzkii operators arising directly in the problem formulation, play an essential role in the study of the underlying optimization problems. In this dissertation, two specific solution methods and algorithms for such non-smooth optimization problems in reflexive Banach spaces are proposed and discussed. The first solution method, SALMIN, minimizes non-smooth operators in reflexive Banach spaces by means of successive quadratic overestimation. The second, SCALi, is a novel structure-exploiting optimization approach for optimization problems with non-smooth elliptic PDE constraints. The central feature of both methods is the appropriate handling of non-differentiabilities. Special focus lies on the non-smooth structure underlying the problem and how it can be effectively exploited to solve the optimization problem in an appropriate and efficient way.
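    As a toy, finite-dimensional illustration of the abs-linearization idea behind structure-exploiting methods like SCALi (the dissertation works in function spaces; the function f below is an arbitrary example, not taken from it): the smooth argument of abs is linearized while the abs operation itself is kept, giving a piecewise-linear model that is second-order accurate even across the kink.

```python
def f(x):
    # toy non-smooth function expressed through abs
    return abs(x**2 - 1.0)

def abs_linear_model(x_hat, dx):
    # abs-linearization: linearize the smooth inner function at x_hat,
    # but keep the non-smooth abs operation intact
    z = x_hat**2 - 1.0                  # inner value at the base point
    return abs(z + 2.0 * x_hat * dx)    # first-order model inside abs
```

    The model error is bounded by dx**2 even when the step dx crosses the kink at x = 1, which is the property such piecewise-linearization methods exploit.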

    Asymptotic Distributions for Solutions in Stochastic Optimization and Generalized M-Estimation

    New techniques of local sensitivity analysis in nonsmooth optimization are applied to the problem of determining the asymptotic distribution (generally non-normal) of solutions in stochastic optimization and in generalized M-estimation, a reformulation of the traditional maximum likelihood problem that allows the introduction of hard constraints.

    Variational Analysis Of Composite Optimization

    The dissertation is devoted to the first- and second-order variational analysis of composite functions, with applications to composite optimization. By considering a fairly general composite optimization problem, our analysis covers numerous classes of optimization problems such as constrained optimization, in particular nonlinear programming, second-order cone programming, and semidefinite programming (SDP). Besides constrained optimization problems, our framework covers many important composite optimization problems such as extended nonlinear programming and eigenvalue optimization. In the first-order analysis we develop exact first-order calculus via both the subderivative and the subdifferential. For the second-order part we develop calculus rules via the second-order subderivative (a long-standing open problem). Furthermore, we establish twice epi-differentiability of composite functions. We then apply our results to the composite optimization problem to obtain first- and second-order optimality conditions under the weakest constraint qualification, the metric subregularity constraint qualification. Finally, we apply our results to verify the superlinear convergence of SQP methods for constrained optimization.
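    Schematically, the composite problems described above can be written (with notation chosen here for illustration, not taken from the dissertation) as minimizing $\varphi(x) = f(x) + g(F(x))$ over $x \in \mathbb{R}^n$, where $f$ and $F$ are smooth and $g$ is possibly non-smooth; constrained optimization corresponds to $g$ being the indicator function of a set, recovering nonlinear programming, SOCP, or SDP depending on that set. The first-order optimality condition obtained from a subdifferential chain rule under the metric subregularity constraint qualification then reads:

```latex
0 \in \nabla f(\bar{x}) + \nabla F(\bar{x})^{*}\, \partial g\bigl(F(\bar{x})\bigr)
```

    where $\nabla F(\bar{x})^{*}$ denotes the adjoint of the derivative of $F$ at a local minimizer $\bar{x}$.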