Generalized Nash equilibrium problems with partial differential operators: Theory, algorithms, and risk aversion
PDE-constrained (generalized) Nash equilibrium problems (GNEPs) are considered in a deterministic setting as well as under uncertainty. This includes a study of deterministic GNEPs with nonlinear and/or multivalued operator equations as forward problems and PDE-constrained GNEPs with uncertain data. The deterministic nonlinear problems are analyzed using the theory of generalized convexity for set-valued operators, and a variational approximation approach is proposed. The stochastic setting includes a detailed overview of the recently developed theory and algorithms for risk-averse PDE-constrained optimization problems. These new results open the way to a rigorous study of stochastic PDE-constrained GNEPs.
Postquantum Brègman relative entropies and nonlinear resource theories
We introduce the family of postquantum Brègman relative entropies, based
on nonlinear embeddings into reflexive Banach spaces (with examples given by
reflexive noncommutative Orlicz spaces over semi-finite W*-algebras,
nonassociative L_{1/γ} spaces over semi-finite JBW-algebras, and noncommutative
L_{1/γ} spaces over arbitrary W*-algebras). This allows us to define a class of
geometric categories for nonlinear postquantum inference theory (providing an
extension of Chencov's approach to foundations of statistical inference), with
constrained maximisations of Brègman relative entropies as morphisms and
nonlinear images of closed convex sets as objects. Further generalisation to a
framework for nonlinear convex operational theories is developed using a larger
class of morphisms, determined by Brègman nonexpansive operations (which
provide a well-behaved family of Mielnik's nonlinear transmitters). As an
application, we derive a range of nonlinear postquantum resource theories
determined in terms of this class of operations.
Comment: v2: several corrections and improvements, including an extension to the postquantum (generally) and JBW-algebraic (specifically) cases, a section on nonlinear resource theories, and a more informative paper title
First-order primal-dual methods for nonsmooth nonconvex optimisation
We provide an overview of primal-dual algorithms for nonsmooth and
non-convex-concave saddle-point problems. The overview is organised around a new
analysis of such methods, using Bregman divergences to formulate simplified
conditions for convergence.
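A minimal sketch of the best-known method in this family, the primal-dual hybrid gradient (Chambolle-Pock) iteration, applied to a toy one-dimensional total-variation denoising problem. The problem instance, step sizes, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pdhg(b, lam=1.0, tau=0.25, sigma=0.5, iters=500):
    """Primal-dual hybrid gradient sketch for
    min_x 0.5*||x - b||^2 + lam*||D x||_1, with D = forward differences.
    Step sizes satisfy tau*sigma*||D||^2 <= 1 since ||D||^2 <= 4."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
    x = b.copy()
    y = np.zeros(n - 1)
    x_bar = x.copy()
    for _ in range(iters):
        # dual ascent step + projection onto the l_inf ball of radius lam
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # primal descent step; prox of 0.5*||. - b||^2 is a weighted average
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x             # extrapolation
        x = x_new
    return x
```

For lam = 0 the dual variable stays zero and the iteration simply converges back to the data b, which is a handy sanity check.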
Risk-averse optimal control of random elliptic VIs
We consider a risk-averse optimal control problem governed by an elliptic variational inequality (VI) subject to random inputs. By deriving KKT-type optimality conditions for a penalised and smoothed problem and studying convergence of the stationary points with respect to the penalisation parameter, we obtain two forms of stationarity conditions. The lack of regularity with respect to the uncertain parameters and complexities induced by the presence of the risk measure give rise to new challenges unique to the stochastic setting. We also propose a path-following stochastic approximation algorithm using variance reduction techniques and demonstrate the algorithm on a modified benchmark problem.
Sensitivity analysis of elliptic variational inequalities of the first and the second kind
This thesis is concerned with the differential sensitivity analysis of elliptic variational inequalities of the first and the second kind in finite and infinite dimensions.
We develop a general theory that provides a sharp criterion for the Hadamard directional differentiability of the solution operator to an elliptic variational inequality and introduce several tools that facilitate the sensitivity analysis in practical applications.
Our analysis is accompanied by examples from mechanics and fluid dynamics that illustrate the strengths and limitations of the obtained results.
We further establish strong and Bouligand stationarity conditions for optimal control problems governed by elliptic variational inequalities in a general setting that covers, e.g., the situations where the control-to-state mapping is a metric projection or a non-smooth elliptic partial differential equation.
Generalized Conditional Gradient with Augmented Lagrangian for Composite Minimization
In this paper we propose a splitting scheme, which we call the CGALP algorithm, that hybridizes the generalized conditional gradient with a proximal step for minimizing the sum of three proper convex and lower-semicontinuous functions in real Hilbert spaces. The minimization is subject to an affine constraint, which in particular allows us to deal with composite problems (sums of more than three functions) separately via the usual product-space technique. While classical conditional gradient methods require Lipschitz continuity of the gradient of the differentiable part of the objective, CGALP needs only differentiability (on an appropriate subset), hence circumventing the intricate question of Lipschitz continuity of gradients. For the two remaining functions in the objective, we do not require any additional regularity assumption. The second function, possibly nonsmooth, is assumed simple, i.e., the associated proximal mapping is easily computable. For the third function, again nonsmooth, we just assume that its domain is weakly compact and that a linearly perturbed minimization oracle is accessible. In particular, this last function can be chosen to be the indicator of a nonempty bounded closed convex set, in order to deal with additional constraints. Finally, the affine constraint is addressed by the augmented Lagrangian approach. Our analysis is carried out for a wide choice of algorithm parameters satisfying so-called "open-loop" rules. As main results, under mild conditions, we show asymptotic feasibility with respect to the affine constraint, boundedness of the dual multipliers, and convergence of the Lagrangian values to the saddle-point optimal value.
We also provide (subsequential) rates of convergence for both the feasibility gap and the Lagrangian values.
In this work, we propose a splitting scheme for nonsmooth optimization, hybridizing the conditional gradient with a proximal step, which we call CGALP, for minimizing a sum of proper, closed, convex functions over a compact subset of R^n. The minimization is moreover subject to an affine constraint, which we handle by an augmented Lagrangian and which in particular makes it possible to treat composite problems with several functions via a product-space technique. Some of the functions may be nonsmooth, provided their proximal operator is simple to compute. Our analysis and convergence guarantees hold for a wide choice of "open-loop" parameters. As main results, we show asymptotic feasibility of the primal variable, convergence of every subsequence to a solution of the primal problem, convergence of the dual variable to a solution of the dual problem, and convergence of the Lagrangian. Rates of convergence are also provided. Implications and illustrations of the algorithm in data processing are discussed.
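CGALP builds on the classical conditional gradient (Frank-Wolfe) method. The sketch below shows only that classical building block with a standard open-loop step-size rule; the proximal step and the augmented Lagrangian handling of the affine constraint that distinguish CGALP are omitted, and the l1-ball example is an illustrative assumption.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Basic conditional gradient: x_{k+1} = x_k + gamma_k (s_k - x_k),
    where s_k solves the linear minimization oracle (LMO) over the
    feasible set, and gamma_k = 2/(k+2) is an open-loop step size."""
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad(x))                 # linear minimization oracle
        gamma = 2.0 / (k + 2.0)
        x = x + gamma * (s - x)
    return x

# Illustrative instance: minimize 0.5*||x - c||^2 over the l1 ball of radius 1.
c = np.array([2.0, -0.5])
grad = lambda x: x - c

def lmo_l1(g):
    # argmin_{||s||_1 <= 1} <g, s>: put all mass on the largest |g_i|
    s = np.zeros_like(g)
    i = int(np.argmax(np.abs(g)))
    s[i] = -np.sign(g[i])
    return s

x_star = frank_wolfe(grad, lmo_l1, np.zeros(2))
```

Note that the LMO only requires minimizing a linear functional over the feasible set, which is exactly the kind of oracle access the abstract assumes for the third function.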
Introduction to Nonsmooth Analysis and Optimization
This book aims to give an introduction to generalized derivative concepts
useful in deriving necessary optimality conditions and numerical algorithms for
infinite-dimensional nondifferentiable optimization problems that arise in
inverse problems, imaging, and PDE-constrained optimization. It covers convex
subdifferentials, Fenchel duality, monotone operators and resolvents,
Moreau--Yosida regularization as well as Clarke and (briefly) limiting
subdifferentials. Both first-order (proximal point and splitting) methods and
second-order (semismooth Newton) methods are treated. In addition,
differentiation of set-valued mappings is discussed and used for deriving
second-order optimality conditions for, as well as Lipschitz stability
properties of minimizers. The required background from functional analysis and
calculus of variations is also briefly summarized.
Comment: arXiv admin note: substantial text overlap with arXiv:1708.0418
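Two of the concepts the book treats, proximal mappings and Moreau-Yosida regularization, have short closed-form instances for the l1 norm. The sketch below is illustrative, not taken from the book: the prox of the l1 norm is soft thresholding, and its Moreau envelope is the componentwise Huber function.

```python
import numpy as np

def prox_l1(v, gamma):
    """Proximal mapping of gamma*||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def moreau_envelope_l1(v, gamma):
    """Moreau-Yosida regularization of ||.||_1, evaluated via the prox:
    env(v) = ||p||_1 + ||v - p||^2 / (2*gamma) with p = prox_l1(v, gamma).
    This is a smooth approximation (the Huber function, componentwise)."""
    p = prox_l1(v, gamma)
    return float(np.sum(np.abs(p)) + np.sum((v - p) ** 2) / (2.0 * gamma))
```

For |v_i| <= gamma the envelope behaves quadratically (v_i**2 / (2*gamma)), and linearly beyond, which is the standard picture of Moreau-Yosida smoothing.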
Efficient and Globally Convergent Minimization Algorithms for Small- and Finite-Strain Plasticity Problems
We present efficient and globally convergent solvers for several classes of plasticity models. The models in this work are formulated in the primal form as energetic rate-independent systems with an elastic energy potential and a plastic dissipation component. Different hardening rules are considered, as well as different flow rules. The time discretization leads to a sequence of nonsmooth minimization problems. For small strains, the unknowns live in vector spaces, while for finite strains we have to deal with manifold-valued quantities. For the latter, a reformulation in tangent space is performed to end up with the same dissipation functional as in the small-strain case. We present the Newton-type TNNMG solver for convex and nonsmooth minimization problems and a newly developed Proximal Newton (PN) method that can also handle nonconvex problems. The PN method generates a sequence of penalized convex, coercive but nonsmooth subproblems. These subproblems are in the form of block-separable small-strain plasticity problems, to which TNNMG can be applied. Global convergence theorems are available for both methods. In several numerical experiments, both the efficiency and the flexibility of the methods for small-strain and finite-strain models are tested.
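The core move of a Proximal Newton method, replacing the nonsmooth objective by a quadratic model of the smooth part plus the unchanged nonsmooth part, can be shown in a few lines. The sketch below is a generic illustration with a diagonal Hessian model and an l1 nonsmooth term; it is not the paper's plasticity-specific subproblem structure.

```python
import numpy as np

def prox_newton_step(x, grad_f, hess_diag, lam):
    """One proximal Newton step for min f(x) + lam*||x||_1 with a diagonal
    Hessian model H = diag(hess_diag): the H-scaled prox is a componentwise
    soft threshold with per-coordinate level lam / h_i."""
    z = x - grad_f(x) / hess_diag
    return np.sign(z) * np.maximum(np.abs(z) - lam / hess_diag, 0.0)

# Illustrative smooth part: f(x) = 0.5 * sum_i h_i * (x_i - c_i)^2, so the
# quadratic model is exact and a single step solves the problem.
h = np.array([2.0, 1.0, 4.0])
c = np.array([1.5, -0.3, 0.1])
lam = 0.4
grad_f = lambda x: h * (x - c)

x1 = prox_newton_step(np.zeros(3), grad_f, h, lam)
```

For a genuinely nonconvex smooth part, the model is only local, and a globalization (penalization or damping, as in the paper's PN method) is what guarantees convergence.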
Linear convergence of accelerated conditional gradient algorithms in spaces of measures
A class of generalized conditional gradient algorithms for the solution of
optimization problems in spaces of Radon measures is presented. The method
iteratively inserts additional Dirac-delta functions and optimizes the
corresponding coefficients. Under general assumptions, a sub-linear
rate in the objective functional is obtained, which is sharp
in most cases. To improve efficiency, one can fully resolve the
finite-dimensional subproblems occurring in each iteration of the method. We
provide an analysis for the resulting procedure: under a structural assumption
on the optimal solution, a linear convergence rate is
obtained locally.
Comment: 30 pages, 7 figures
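The insert-then-refit structure of such methods can be sketched in a discretized setting. The code below is an illustrative stand-in, not the paper's algorithm: candidate Dirac locations are restricted to a fixed grid, the data are generated from two known Diracs, and the support subproblem is resolved by regularized normal equations with sign constraints ignored for brevity.

```python
import numpy as np

def gcg_insert(Kmat, b, alpha, sweeps=10):
    """Sketch of the insertion-plus-refit idea on a fixed grid: Kmat[i, j]
    is the value at observation i of a kernel centered at grid point j.
    Each sweep inserts the grid index where the dual certificate
    K^T (b - K c) is largest, then fully resolves the finite-dimensional
    subproblem on the current support (here: regularized least squares)."""
    support, c = [], np.zeros(0)
    for _ in range(sweeps):
        fit = Kmat[:, support] @ c
        eta = Kmat.T @ (b - fit)              # dual certificate on the grid
        j = int(np.argmax(eta))
        if eta[j] <= alpha:                   # no profitable Dirac left
            break
        if j not in support:
            support.append(j)
        Ks = Kmat[:, support]
        gram = Ks.T @ Ks + 1e-12 * np.eye(len(support))
        c = np.linalg.solve(gram, Ks.T @ b - alpha)
    return support, c

# Hypothetical setup: Gaussian kernels, data generated by two Diracs.
t = np.linspace(0.0, 1.0, 15)                 # observation points
grid = np.linspace(0.0, 1.0, 21)              # candidate Dirac locations
Kmat = np.exp(-50.0 * (t[:, None] - grid[None, :]) ** 2)
b = Kmat[:, [5, 15]] @ np.array([1.0, 0.5])
support, c = gcg_insert(Kmat, b, alpha=1e-6)
```

The paper's method works directly in the space of measures (optimizing Dirac positions continuously rather than on a grid), which is what enables the sharp sublinear and locally linear rates.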