
    A level-set approach to the control of state-constrained McKean-Vlasov equations: application to renewable energy storage and portfolio selection

    We consider the control of McKean-Vlasov dynamics (or mean-field control) with probabilistic state constraints. We rely on a level-set approach which represents the constrained problem as an unconstrained one with exact penalization and a running-maximum or integral cost. The method is then extended to the common-noise setting. Our work extends (Bokanowski, Picarelli, and Zidani, SIAM J. Control Optim. 54.5 (2016), pp. 2568–2593) and (Bokanowski, Picarelli, and Zidani, Appl. Math. Optim. 71 (2015), pp. 125–163) to a mean-field setting. The reformulation as an unconstrained problem is particularly well suited to numerical resolution, which we achieve by extending a machine learning algorithm from (Carmona and Laurière, arXiv:1908.01613, to appear in Ann. Appl. Prob., 2019). A first application concerns the storage of renewable electricity in the presence of mean-field price impact; a second focuses on a mean-variance portfolio selection problem with probabilistic constraints on the wealth. We also illustrate our approach on a direct numerical resolution of the primal Markowitz continuous-time problem without relying on duality. Comment: To appear in Numerical Algebra, Control and Optimization.
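    In generic notation (a schematic sketch under assumed symbols, not the paper's exact statement), level-set methods of the Bokanowski-Picarelli-Zidani type trade the state constraint for an auxiliary running-maximum cost: with $J^{\alpha}$ the cost of control $\alpha$ and $g \le 0$ the constraint, one may set

```latex
w(t,x,z) \;=\; \inf_{\alpha}\,
\mathbb{E}\Big[\big(J^{\alpha}(t,x)-z\big)^{+}
\;+\; \max_{s\in[t,T]} g\big(X^{\alpha}_{s}\big)^{+}\Big],
\qquad
V(t,x) \;=\; \inf\big\{\, z\in\mathbb{R} \;:\; w(t,x,z)\le 0 \,\big\}.
```

    The auxiliary problem defining $w$ is unconstrained with exact penalization: $w(t,x,z)$ vanishes exactly when some admissible control attains cost at most $z$ while keeping the constraint satisfied, and the constrained value $V$ is recovered as the smallest such level $z$. (In the mean-field setting of the paper, the constraint bears on the law of the controlled state rather than on the state itself.)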

    A numerical scheme for stochastic differential equations with distributional drift

    In this paper we present a scheme for the numerical solution of stochastic differential equations (SDEs) with distributional drift. The approximating process obtained by the scheme converges in law to the (virtual) solution of the SDE in a general multi-dimensional setting. When we restrict our attention to a one-dimensional SDE, we also obtain a rate of convergence in a suitable $L^1$-norm. Moreover, we implement our method in the one-dimensional case, when the drift is the distributional derivative of a sample path of a fractional Brownian motion. To the best of our knowledge this is the first paper to study (and implement) numerical solutions of SDEs whose drift cannot be expressed as a function of the state. Comment: 35 pages, 8 figures.
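    A minimal numerical sketch of the idea, not the paper's scheme: the paper mollifies the distributional drift before discretizing, whereas here the mollification is replaced by a crude centred finite difference of an interpolated fractional Brownian sample path (viewed as a function of the space variable), and all names and parameters are illustrative.

```python
import numpy as np

def fbm_on_grid(xs, hurst, seed=0):
    """Sample a (two-sided) fractional Brownian motion W_H on the spatial
    grid xs via Cholesky factorisation of its covariance kernel
    0.5(|s|^{2H}+|t|^{2H}-|t-s|^{2H}); fine for a few hundred points."""
    rng = np.random.default_rng(seed)
    a = np.abs(xs)
    cov = 0.5 * (a[:, None] ** (2 * hurst) + a[None, :] ** (2 * hurst)
                 - np.abs(xs[:, None] - xs[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(xs)))  # jitter for PD-ness
    return L @ rng.standard_normal(len(xs))

def drift_eps(x, xs, W, eps):
    """Regularised drift b_eps(x) ~ W_H'(x): a centred finite difference of
    the linearly interpolated path; eps plays the role of the mollifier."""
    return (np.interp(x + eps, xs, W) - np.interp(x - eps, xs, W)) / (2 * eps)

def euler_maruyama(x0, xs, W, eps, n_steps, n_paths=5000, T=1.0, seed=1):
    """Euler scheme for dX_t = b_eps(X_t) dt + dB_t over n_paths Monte Carlo
    samples. Outside the grid np.interp clamps W, so the drift vanishes."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        x += drift_eps(x, xs, W, eps) * dt \
             + np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

xs = np.linspace(-4.0, 4.0, 161)
W = fbm_on_grid(xs, hurst=0.3)          # rougher than Brownian motion (H<1/2)
X_T = euler_maruyama(x0=0.0, xs=xs, W=W, eps=0.05, n_steps=100)
print(X_T.mean(), X_T.std())
```

    In the paper's regime the mollification parameter is tied to the time step to obtain convergence; here `eps` is simply fixed for illustration.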

    Machine learning methods for solving stochastic control problems and high-dimensional partial differential equations

    This thesis studies several machine learning numerical schemes for solving nonlinear PDEs and mean-field control problems in moderate to high dimension; it is divided into two parts.
    The first part focuses on the resolution of parabolic nonlinear PDEs. We describe a multistep neural network scheme which improves on existing methods from the literature. One of our contributions is the study of its approximation error, together with those of existing methods, in the semilinear case where the equation is linear with respect to the second-order derivative. By using Lipschitz GroupSort neural networks, we are able to link the error to the number of layers and neurons of the approximating network. We also develop one-step and multistep schemes for the more challenging case of fully nonlinear PDEs, based on Malliavin weights and automatic differentiation. All the numerical schemes are tested on numerical examples to demonstrate their relevance.
    The second part of this work is dedicated to mean-field control and McKean-Vlasov equations. We prove by probabilistic arguments a rate of convergence for the finite-dimensional approximation of a PDE on the Wasserstein space. We then use symmetric DeepSet neural networks to solve symmetric PDEs in high dimension. We are thus able to approximate mean-field control problems numerically by solving their optimality conditions in the form of a Master Bellman PDE in infinite dimension. We then consider mean-field control with probabilistic constraints on the law of the controlled state. We represent the problem by an auxiliary unconstrained problem with exact penalisation, which can be solved by modifying an existing brute-force deep learning scheme.
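    As a minimal illustration of the symmetry exploited by DeepSet architectures (a toy sketch in plain NumPy with made-up layer sizes, not the thesis's networks): a function of a particle configuration built as $\rho\big(\sum_k \varphi(x_k)\big)$ is invariant under any permutation of the particles, which matches the exchangeability of the $N$-particle approximation of a mean-field problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: phi maps each 1-d particle to 16 features.
W_phi = rng.standard_normal((1, 16))
b_phi = rng.standard_normal(16)
W_rho = rng.standard_normal((16, 1))
b_rho = rng.standard_normal(1)

def deepset(x):
    """Evaluate rho(sum_k phi(x_k)) for a 1-d particle configuration x.

    Because the pooling is a sum, the output is invariant under any
    reordering of the particles."""
    h = np.tanh(x[:, None] * W_phi + b_phi)   # phi, applied particle-wise
    pooled = h.sum(axis=0)                    # permutation-invariant pooling
    return float(np.tanh(pooled @ W_rho + b_rho))

x = rng.standard_normal(50)
assert abs(deepset(x) - deepset(x[::-1])) < 1e-9   # order does not matter
```

    Training such a network on the symmetrized finite-dimensional PDE then only has to search within permutation-invariant functions, rather than learn the symmetry from data.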



    Neural networks-based backward scheme for fully nonlinear PDEs

    We propose a numerical method for solving high-dimensional fully nonlinear partial differential equations (PDEs). Our algorithm estimates the solution and its gradient simultaneously by backward time induction using multi-layer neural networks, while the Hessian is approximated by automatic differentiation of the gradient from the previous step. This methodology extends to the fully nonlinear case the approach recently proposed in \cite{HPW19} for semilinear PDEs. Numerical tests illustrate the performance and accuracy of our method on several high-dimensional examples with nonlinearity in the Hessian term, including a linear-quadratic control problem with control on the diffusion coefficient, a Monge-Ampère equation, and a Hamilton-Jacobi-Bellman equation in portfolio optimization.
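    Schematically (in generic notation, not the paper's exact symbols), one backward step of such a scheme fits approximators $(\mathcal{U},\mathcal{Z})$ for the solution and its gradient at time $t_i$ by minimizing a one-step regression loss, with the Hessian term frozen from the previously trained (later-in-time) step:

```latex
\min_{\mathcal{U},\,\mathcal{Z}}\;
\mathbb{E}\Big|\,\widehat{\mathcal{U}}_{i+1}(X_{t_{i+1}})
-\mathcal{U}(X_{t_i})
+F\big(t_i,X_{t_i},\mathcal{U}(X_{t_i}),\mathcal{Z}(X_{t_i}),
\Gamma_i(X_{t_i})\big)\,\Delta t
-\mathcal{Z}(X_{t_i})\cdot\Delta W_i\,\Big|^{2},
\qquad
\Gamma_i \;:=\; D_x\widehat{\mathcal{Z}}_{i+1},
```

    where $D_x\widehat{\mathcal{Z}}_{i+1}$ is obtained by automatic differentiation of the already-trained gradient network from step $i+1$, avoiding a separate network for the Hessian.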

    Deep backward multistep schemes for nonlinear PDEs and approximation error analysis

    We develop multistep machine learning schemes for solving nonlinear partial differential equations (PDEs) in high dimension. The method is based on the probabilistic representation of PDEs by backward stochastic differential equations (BSDEs) and their iterated time discretization. In the case of semilinear PDEs, our algorithm estimates the solution and its gradient simultaneously by backward induction, using neural networks trained through sequential minimizations of suitable quadratic loss functions performed by stochastic gradient descent. The approach is extended to the more challenging case of fully nonlinear PDEs, and we propose different approximations of the Hessian of the solution to the PDE, i.e., the $\Gamma$-component of the BSDE, by combining Malliavin weights and neural networks. Extensive numerical tests are carried out on various examples of semilinear PDEs, including the viscous Burgers equation, and on fully nonlinear PDEs such as Hamilton-Jacobi-Bellman equations arising in portfolio selection problems with stochastic volatilities, and Monge-Ampère equations in dimension up to 15. The performance and accuracy of our numerical results are compared with other recent machine learning algorithms in the literature, see \cite{HJE17}, \cite{HPW19}, \cite{BEJ19}, \cite{BBCJN19} and \cite{phawar19}. Furthermore, we provide a rigorous approximation error analysis of the deep backward multistep scheme, as well as of the deep splitting method for semilinear PDEs, which yields a convergence rate in terms of the number of neurons for shallow neural networks. Comment: 42 pages.
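    In the semilinear case, the multistep idea can be sketched as follows (generic notation, assumed for illustration): instead of regressing on the next time step only, step $i$ regresses on the terminal condition plus all already-trained contributions from steps $j>i$:

```latex
\min_{\mathcal{U},\,\mathcal{Z}}\;
\mathbb{E}\Big|\,g(X_{t_N})
+\sum_{j=i+1}^{N-1} f\big(t_j,X_{t_j},\widehat{\mathcal{U}}_j(X_{t_j}),
\widehat{\mathcal{Z}}_j(X_{t_j})\big)\,\Delta t
+f\big(t_i,X_{t_i},\mathcal{U}(X_{t_i}),\mathcal{Z}(X_{t_i})\big)\,\Delta t
-\sum_{j=i+1}^{N-1}\widehat{\mathcal{Z}}_j(X_{t_j})\cdot\Delta W_j
-\mathcal{Z}(X_{t_i})\cdot\Delta W_i
-\mathcal{U}(X_{t_i})\,\Big|^{2}.
```

    Regressing against the whole remaining horizon, rather than only the next step, is what limits the propagation of the one-step regression errors through the backward induction.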