
    Deep optimal stopping

    In this paper we develop a deep learning method for optimal stopping problems which directly learns the optimal stopping rule from Monte Carlo samples. As such, it is broadly applicable in situations where the underlying randomness can efficiently be simulated. We test the approach on three problems: the pricing of a Bermudan max-call option, the pricing of a callable multi-barrier reverse convertible and the problem of optimally stopping a fractional Brownian motion. In all three cases it produces very accurate results in high-dimensional situations with short computing times.
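
    A rough sketch of how such a stopping rule can be learned from simulated paths is given below: one small network per exercise date is trained by backward induction, with a soft stop/continue decision during training and a hard 0.5 threshold afterwards. This is a minimal illustration under assumed choices (max-call payoff, network sizes, no discounting), not the authors' reference implementation.

```python
import torch
import torch.nn as nn

# Illustrative payoff: Bermudan max-call with strike K (an assumption, not the
# paper's exact test-case parameters).
def payoff(x, K=100.0):
    return torch.clamp(x.max(dim=1).values - K, min=0.0)

def stop_net(d):
    # Small feed-forward network mapping a d-dimensional state to a stopping
    # probability in (0, 1).
    return nn.Sequential(nn.Linear(d, 40), nn.ReLU(),
                         nn.Linear(40, 40), nn.ReLU(),
                         nn.Linear(40, 1), nn.Sigmoid())

def train_stopping_rule(paths, n_iter=500, lr=1e-3):
    # paths: tensor of shape (n_paths, N + 1, d) of simulated risk factors.
    # Discounting is omitted for brevity.
    n_paths, N_plus_1, d = paths.shape
    N = N_plus_1 - 1
    nets = [stop_net(d) for _ in range(N + 1)]      # one decision net per exercise date
    cashflow = payoff(paths[:, N])                  # at the last date, exercise if still alive
    for n in range(N - 1, 0, -1):                   # backward over exercise dates
        net, x = nets[n], paths[:, n]
        g = payoff(x)
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(n_iter):
            p = net(x).squeeze(-1)                  # soft stop/continue decision
            reward = p * g + (1.0 - p) * cashflow   # stop now vs. keep the later cashflow
            loss = -reward.mean()                   # maximize expected reward
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():                       # freeze a hard decision for earlier dates
            stop = (net(x).squeeze(-1) > 0.5).float()
            cashflow = stop * g + (1.0 - stop) * cashflow
    return nets, cashflow.mean()                    # decision nets and a crude value estimate
```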

    A Deep learning method for optimal stopping problems

    The main goal of this thesis is to complement the theoretical foundations and implement the deep learning model presented in the paper "Deep Optimal Stopping" by Becker et al., published in the Journal of Machine Learning Research in 2019. The deep learning model is specifically designed to address optimal stopping problems by learning the optimal stopping rule from Monte Carlo samples. Consequently, its versatility extends to various scenarios where the underlying randomness can be simulated effectively. The thesis introduces the theory of optimal stopping problems, neural networks (in particular deep neural networks) and the relation between them. It then provides a detailed analysis and implementation of the Deep Optimal Stopping algorithm, programmed in PyTorch using Python. The implementation is run on synthetic data so that the method can be tested against the available literature. Moreover, the implementation not only uses the Black-Scholes underlying asset model but is also extended to models with jumps (Kou and Merton) and to the Heston model. A use case with real-world market data is then briefly outlined. Interesting applications pertaining to the valuation of financial derivatives, such as Bermudan call options, where optimal stopping plays a crucial role in the valuation process, are explored. The thesis analyzes the performance of the algorithm on these applications, comparing it to other methods commonly used for valuing financial derivatives. In conclusion, this thesis contributes to the understanding and potential applications of the Deep Optimal Stopping algorithm in mathematical finance, particularly in the valuation of financial derivatives.
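
    Since the implementation described above runs on synthetic data, a minimal Monte Carlo simulator for correlated Black-Scholes assets is sketched below. The function name, parameters and constant-correlation structure are illustrative assumptions; the jump models (Kou, Merton) and the Heston model covered by the thesis are not shown.

```python
import numpy as np

def simulate_gbm_paths(s0, r, sigma, T, N, n_paths, d=1, rho=None, seed=0):
    """Simulate paths of d (optionally correlated) Black-Scholes assets.

    Returns an array of shape (n_paths, N + 1, d) on an equidistant time grid.
    """
    rng = np.random.default_rng(seed)
    dt = T / N
    if rho is None:
        chol = np.eye(d)                                   # independent assets
    else:
        corr = rho * np.ones((d, d)) + (1.0 - rho) * np.eye(d)
        chol = np.linalg.cholesky(corr)                    # constant pairwise correlation
    z = rng.standard_normal((n_paths, N, d)) @ chol.T      # correlated Gaussian increments
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(increments, axis=1)
    start = np.zeros((n_paths, 1, d))
    return s0 * np.exp(np.concatenate([start, log_paths], axis=1))
```

    For example, simulate_gbm_paths(100.0, 0.05, 0.2, T=3.0, N=9, n_paths=100000, d=5) generates the kind of multi-asset synthetic data on which a Bermudan max-call test of the algorithm could be run.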

    A deep learning approach for computations of exposure profiles for high-dimensional Bermudan options

    In this paper, we propose a neural network-based method for approximating expected exposures and potential future exposures of Bermudan options. In the first phase, the method relies on the Deep Optimal Stopping algorithm, which learns the optimal stopping rule from Monte Carlo samples of the underlying risk factors. Cash-flow paths are then created by applying the learned stopping strategy to a new set of realizations of the risk factors. In the second phase, the risk factors are regressed against the cash-flow paths to obtain approximations of pathwise option values. The regression step is carried out by ordinary least squares as well as by neural networks, and it is shown that the latter produces more accurate approximations. The expected exposure is formulated both in terms of the cash-flow paths and in terms of the pathwise option values, and it is shown that a simple Monte Carlo average yields accurate approximations in both cases. The potential future exposure is estimated by the empirical α-percentile. Finally, it is shown that both the expected exposures and the potential future exposures can be computed under either the risk-neutral measure or the real-world measure, without having to re-train the neural networks.
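
    The final step described above, turning approximated pathwise option values into exposure profiles, reduces to a Monte Carlo average and an empirical percentile. A minimal sketch, in which the positive-part convention and the 97.5% level are assumptions:

```python
import numpy as np

def exposure_profiles(pathwise_values, alpha=0.975):
    """Expected exposure (EE) and potential future exposure (PFE) per time step.

    pathwise_values: array (n_paths, n_times) of approximated pathwise option
    values, e.g. from the regression step described in the abstract above.
    """
    exposure = np.maximum(pathwise_values, 0.0)       # exposure is the positive part
    ee = exposure.mean(axis=0)                        # simple Monte Carlo average
    pfe = np.quantile(exposure, alpha, axis=0)        # empirical alpha-percentile
    return ee, pfe
```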

    A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations

    Deep neural networks and other deep learning methods have been applied very successfully to the numerical approximation of high-dimensional nonlinear parabolic partial differential equations (PDEs), which are widely used in finance, engineering, and the natural sciences. In particular, simulations indicate that algorithms based on deep learning overcome the curse of dimensionality in the numerical approximation of solutions of semilinear PDEs. For certain linear PDEs this has also been proved mathematically. The key contribution of this article is to rigorously prove this for the first time for a class of nonlinear PDEs. More precisely, we prove, in the case of semilinear heat equations with gradient-independent nonlinearities, that the number of parameters of the employed deep neural networks grows at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy. Our proof relies on recently introduced multilevel Picard approximations of semilinear PDEs.

    Robust risk aggregation with neural networks

    We consider settings in which the distribution of a multivariate random variable is partly ambiguous. We assume the ambiguity lies on the level of the dependence structure, and that the marginal distributions are known. Furthermore, a current best guess for the distribution, called the reference measure, is available. We work with the set of distributions that are both close to the given reference measure in a transportation distance (e.g. the Wasserstein distance) and additionally have the correct marginal structure. The goal is to find upper and lower bounds for integrals of interest with respect to distributions in this set. The described problem appears naturally in the context of risk aggregation. When aggregating different risks, the marginal distributions of these risks are known and the task is to quantify their joint effect on a given system. This is typically done by applying a meaningful risk measure to the sum of the individual risks. For this purpose, the stochastic interdependencies between the risks need to be specified. In practice, however, models of this dependence structure are subject to relatively high model ambiguity. The contribution of this paper is twofold: first, we derive a dual representation of the considered problem and prove that strong duality holds; second, we propose a generally applicable and computationally feasible method, relying on neural networks, to numerically solve the derived dual problem. The latter method is tested on a number of toy examples before it is finally applied to perform robust risk aggregation in a real-world instance. (Revised version, accepted for publication in Mathematical Finance.)
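
    For concreteness, the non-robust baseline mentioned in the abstract, i.e. applying a risk measure to the sum of the risks under a fully specified reference measure, can be sketched as follows. The choice of Value-at-Risk and Expected Shortfall and the confidence level are assumptions; the paper's neural-network solution of the dual problem, which yields the robust bounds around such a baseline, is not reproduced here.

```python
import numpy as np

def aggregate_risk_under_reference(sample_risks, alpha=0.95):
    """Risk aggregation under a fixed reference measure (non-robust baseline).

    sample_risks: array (n_samples, n_risks) drawn from the reference measure,
    i.e. with a fully specified dependence structure between the risks.
    """
    total = sample_risks.sum(axis=1)                  # joint effect on the system
    var = np.quantile(total, alpha)                   # Value-at-Risk at level alpha
    es = total[total >= var].mean()                   # Expected Shortfall (tail average)
    return var, es
```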

    Optimal stopping via deeply boosted backward regression

    In this note we propose a new approach to numerically solving optimal stopping problems via boosted regression-based Monte Carlo algorithms. The main idea of the method is to boost standard linear regression algorithms in each backward induction step by adding new basis functions based on previously estimated continuation values. The proposed methodology is illustrated by several numerical examples from finance.
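
    A schematic rendering of the boosting idea, in which the regression basis at each backward step is augmented with the continuation-value estimate fitted at the previous (later) step, might look as follows. The interface and the use of plain least squares are assumptions rather than the authors' exact algorithm.

```python
import numpy as np

def boosted_backward_regression(paths, payoff, discount, basis):
    """Regression Monte Carlo with a boosted basis (schematic).

    paths:    array (n_paths, N + 1, d) of simulated risk factors
    payoff:   vectorized exercise value, payoff(x) for x of shape (n_paths, d)
    discount: one-period discount factor
    basis:    x -> design matrix (n_paths, n_basis) of standard basis functions
    """
    n_paths, N_plus_1, _ = paths.shape
    N = N_plus_1 - 1
    value = payoff(paths[:, N])                       # exercise at maturity
    cont_fn = None                                    # previously fitted continuation value

    def make_cont_fn(coeff, prev_fn):
        # The fitted regression becomes an extra basis function for the next step.
        def fitted(x):
            A = basis(x) if prev_fn is None else np.column_stack([basis(x), prev_fn(x)])
            return A @ coeff
        return fitted

    for n in range(N - 1, 0, -1):                     # backward induction
        x = paths[:, n]
        target = discount * value                     # discounted later cashflow
        A = basis(x) if cont_fn is None else np.column_stack([basis(x), cont_fn(x)])
        coeff, *_ = np.linalg.lstsq(A, target, rcond=None)
        cont = A @ coeff                              # boosted continuation estimate
        exercise = payoff(x)
        value = np.where(exercise > cont, exercise, target)
        cont_fn = make_cont_fn(coeff, cont_fn)
    return discount * value.mean()                    # crude price estimate at time 0
```

    With, for example, basis = lambda x: np.column_stack([np.ones(len(x)), x, x**2]) for a one-dimensional underlying, the first backward step reduces to ordinary regression Monte Carlo, while every later step regresses on one additional basis function given by the previously fitted continuation value.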

    A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients

    In recent years deep artificial neural networks (DNNs) have been successfully employed in numerical simulations for a multitude of computational problems including, for example, object and face recognition, natural language processing, fraud detection, computational advertisement, and numerical approximations of partial differential equations (PDEs). These numerical simulations indicate that DNNs seem to possess the fundamental flexibility to overcome the curse of dimensionality in the sense that the number of real parameters used to describe the DNN grows at most polynomially in both the reciprocal of the prescribed approximation accuracy ε > 0 and the dimension d ∈ ℕ of the function which the DNN aims to approximate in such computational problems. There is also a large number of rigorous mathematical approximation results for artificial neural networks in the scientific literature, but there are only a few special situations where results in the literature can rigorously justify the success of DNNs in high-dimensional function approximation. The key contribution of this paper is to reveal that DNNs do overcome the curse of dimensionality in the numerical approximation of Kolmogorov PDEs with constant diffusion and nonlinear drift coefficients. We prove that the number of parameters used to describe the employed DNN grows at most polynomially in both the PDE dimension d ∈ ℕ and the reciprocal of the prescribed approximation accuracy ε > 0. A crucial ingredient in our proof is the fact that the artificial neural network used to approximate the solution of the PDE is indeed a deep artificial neural network with a large number of hidden layers.

    Convergence of the Backward Deep BSDE Method with Applications to Optimal Stopping Problems

    The optimal stopping problem is one of the core problems in financial markets, with broad applications such as pricing American and Bermudan options. The deep BSDE method [Han, Jentzen and E, PNAS, 115(34):8505-8510, 2018] has shown great power in solving high-dimensional forward-backward stochastic differential equations (FBSDEs) and has inspired many applications. However, the method solves backward stochastic differential equations (BSDEs) in a forward manner, which cannot be used for optimal stopping problems, which in general require running the BSDE backward. To overcome this difficulty, a recent paper [Wang, Chen, Sudjianto, Liu and Shen, arXiv:1807.06622, 2018] proposed the backward deep BSDE method to solve the optimal stopping problem. In this paper, we provide a rigorous theory for the backward deep BSDE method. Specifically, (1) we derive an a posteriori error estimate, i.e., the error of the numerical solution can be bounded by the training loss function; and (2) we give an upper bound on the loss function, which can be made sufficiently small by universal approximation. We present two numerical examples whose performance is consistent with the proved theory.
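
    To make the role of the training loss concrete, the sketch below shows one backward Euler step of a (reflected) BSDE in which networks for Y and Z are fitted by minimizing the one-step residual; it is precisely such a loss that the a posteriori estimate uses to bound the numerical error. The architectures, the driver interface and the reflection step are illustrative assumptions, not the cited papers' exact schemes.

```python
import torch
import torch.nn as nn

def backward_bsde_step(x_n, dw, y_next, f, g, dt, d, n_iter=2000, lr=1e-3):
    """One backward step of a (reflected) BSDE, schematically.

    x_n:    states at time t_n, shape (n_paths, d)
    dw:     Brownian increments W_{t_{n+1}} - W_{t_n}, shape (n_paths, d)
    y_next: values of Y at time t_{n+1}, shape (n_paths,)
    f:      driver f(x, y, z) -> (n_paths,);  g: obstacle/payoff g(x) -> (n_paths,)
    """
    y_net = nn.Sequential(nn.Linear(d, 40), nn.ReLU(), nn.Linear(40, 1))
    z_net = nn.Sequential(nn.Linear(d, 40), nn.ReLU(), nn.Linear(40, d))
    opt = torch.optim.Adam(list(y_net.parameters()) + list(z_net.parameters()), lr=lr)
    for _ in range(n_iter):
        y_n = y_net(x_n).squeeze(-1)
        z_n = z_net(x_n)
        # One-step Euler residual: Y_{n+1} - (Y_n - f*dt + Z_n . dW_n). Its mean
        # square is the training loss that the a posteriori estimate works with.
        residual = y_next - y_n + f(x_n, y_n, z_n) * dt - (z_n * dw).sum(dim=1)
        loss = residual.pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        y_n = torch.maximum(y_net(x_n).squeeze(-1), g(x_n))  # reflection for optimal stopping
    return y_n, float(loss)
```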

    Pricing and hedging American-style options with deep learning

    In this paper we introduce a deep learning method for pricing and hedging American-style options. It first computes a candidate optimal stopping policy. From there it derives a lower bound for the price. Then it calculates an upper bound, a point estimate and confidence intervals. Finally, it constructs an approximate dynamic hedging strategy. We test the approach on different specifications of a Bermudan max-call option. In all cases it produces highly accurate prices and dynamic hedging strategies with small replication errors.
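
    As an illustration of the lower-bound and confidence-interval part, the sketch below evaluates a candidate stopping policy on an independent set of simulated paths. The interface is hypothetical, and the dual upper bound and the hedging construction from the paper are not shown.

```python
import numpy as np

def lower_bound_price(paths, stop_rule, payoff, discount_factors, z=1.96):
    """Point estimate and confidence interval for a lower bound on the price.

    paths:            independent simulated paths (n_paths, N + 1, d), not used in training
    stop_rule:        stop_rule(n, x) -> boolean array, candidate decision at date n
    payoff:           exercise value g(x), vectorized over paths
    discount_factors: array of length N + 1 with discount factors per exercise date
    """
    n_paths, N_plus_1, _ = paths.shape
    N = N_plus_1 - 1
    cash = discount_factors[N] * payoff(paths[:, N])   # exercise at maturity by default
    stopped = np.zeros(n_paths, dtype=bool)
    for n in range(1, N):                              # stop at the first date the rule says so
        decide = stop_rule(n, paths[:, n]) & ~stopped
        cash[decide] = discount_factors[n] * payoff(paths[:, n][decide])
        stopped |= decide
    est = cash.mean()
    half_width = z * cash.std(ddof=1) / np.sqrt(n_paths)
    return est, (est - half_width, est + half_width)
```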