38 research outputs found

    Review of modern numerical methods for a simple vanilla option pricing problem

    Option pricing is an attractive problem in financial engineering and optimization. The problem of determining the fair price of an option arises from the assumptions made under a given financial market model. The increasing complexity of these market assumptions contributes to the popularity of the numerical treatment of option valuation. Therefore, the pricing and hedging of plain vanilla options under the Black–Scholes model usually serve as a benchmark for the development of new numerical pricing approaches and methods designed for advanced option pricing models. The objective of the paper is to present and compare the methodological concepts for the valuation of simple vanilla options using relatively modern numerical techniques arising from the discontinuous Galerkin method, the wavelet approach and the fuzzy transform technique. A theoretical comparison is accompanied by an empirical study based on the numerical verification of simple vanilla option prices. The resulting numerical schemes represent a particularly effective option pricing tool that enables features of options dependent on the discretization of the computational domain, as well as on the order of the polynomial approximation, to be captured better.
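
    For reference, the closed-form Black–Scholes price of a European call is the usual benchmark such numerical schemes are verified against. The following minimal sketch (parameter values are illustrative and not taken from the paper) computes it:

        # Standard Black-Scholes closed-form price of a European call,
        # commonly used as the reference value for numerical pricing schemes.
        from math import exp, log, sqrt
        from scipy.stats import norm

        def bs_call(S, K, T, r, sigma):
            d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

        # Example: spot 100, strike 100, one year, 5% rate, 20% volatility
        print(bs_call(100.0, 100.0, 1.0, 0.05, 0.2))   # ~10.45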

    Optimal Control for a Class of Infinite Dimensional Systems Involving an L^\infty-term in the Cost Functional

    An optimal control problem with a time parameter is considered. The functional to be optimized includes the maximum over the time horizon reached by a function of the state variable, and hence an L^\infty-term. In addition to the classical control function, the time at which this maximum is reached is considered as a free parameter. The problem couples the behavior of the state and the control with this time parameter. A change of variable is introduced to derive first- and second-order optimality conditions. This allows the implementation of a Newton method. Numerical simulations are developed, for selected ordinary differential equations and a partial differential equation, which illustrate the influence of the additional parameter and the original motivation. Comment: 21 pages, 8 figures
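
    In symbols, and with notation assumed here rather than quoted from the paper, the problem described is of the form

        \min_{u,\,\tau}\; J(u) + \varphi\big(y_u(\tau)\big)
        \quad \text{subject to} \quad
        \varphi\big(y_u(\tau)\big) \;=\; \max_{t\in[0,T]} \varphi\big(y_u(t)\big)
        \;=\; \big\|\varphi(y_u)\big\|_{L^\infty(0,T)},

    where y_u denotes the state driven by the control u; the maximizer \tau is optimized jointly with u, and a change of variable (e.g. rescaling time so that \tau is mapped to a fixed reference point) makes the problem amenable to first- and second-order optimality conditions and hence to a Newton method.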

    Novel Monte Carlo Methods for Large-Scale Linear Algebra Operations

    Linear algebra operations play an important role in scientific computing and data analysis. With increasing data volume and complexity in the Big Data era, linear algebra operations are important tools for processing massive datasets. On the one hand, the advent of modern high-performance computing architectures with increasing computing power has greatly enhanced our capability to deal with large volumes of data. On the other hand, many classical, deterministic numerical linear algebra algorithms have difficulty scaling to large data sets. Monte Carlo methods, which are based on statistical sampling, exhibit many attractive properties when dealing with large datasets, including fast approximate results, memory efficiency, reduced data accesses, natural parallelism, and inherent fault tolerance. In this dissertation, we present new Monte Carlo methods for a set of fundamental and ubiquitous large-scale linear algebra operations, including solving large-scale linear systems, constructing low-rank matrix approximations, and approximating the extreme eigenvalues/eigenvectors, across modern distributed and parallel computing architectures. First of all, we revisit the classical Ulam-von Neumann Monte Carlo algorithm and derive the necessary and sufficient condition for its convergence. To support a broad family of linear systems, we develop Krylov subspace Monte Carlo solvers that go beyond the use of Neumann series. New algorithms used in the Krylov subspace Monte Carlo solvers include (1) a Breakdown-Free Block Conjugate Gradient algorithm to address the potential rank deficiency problem that can occur in block Krylov subspace methods; (2) a Block Conjugate Gradient for Least Squares algorithm to stably approximate the least squares solutions of general linear systems; (3) a BCGLS algorithm with deflation to accelerate convergence; and (4) a Monte Carlo Generalized Minimal Residual algorithm based on sampling matrix-vector products to provide fast approximation of solutions. Secondly, we design a rank-revealing randomized Singular Value Decomposition (R3SVD) algorithm for adaptively constructing low-rank matrix approximations that satisfy application-specific accuracy requirements. Thirdly, we study the block power method on Markov Chain Monte Carlo transition matrices and find that convergence actually depends on the number of independent vectors in the block. Correspondingly, we develop a sliding window power method to find the stationary distribution, which has demonstrated success in modeling a stochastic luminal calcium release site. Fourthly, we take advantage of hybrid CPU-GPU computing platforms to accelerate the performance of the Breakdown-Free Block Conjugate Gradient algorithm and the randomized Singular Value Decomposition algorithm. Finally, we design a Gaussian variant of Freivalds’ algorithm to efficiently verify the correctness of matrix-matrix multiplication while avoiding undetectable fault patterns encountered in deterministic algorithms.
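
    As an illustration of the last point, the following is a minimal sketch of a Freivalds-style randomized check of a matrix product using Gaussian test vectors (the dissertation's actual variant and tolerances may differ):

        # Randomized check that C equals A @ B: classical Freivalds uses a
        # random 0/1 test vector; a Gaussian test vector is one natural
        # variant (the dissertation's version may differ in detail).
        import numpy as np

        def freivalds_gaussian(A, B, C, trials=3, tol=1e-8):
            n = C.shape[1]
            for _ in range(trials):
                x = np.random.standard_normal(n)
                # A @ (B @ x) costs O(n^2) per trial instead of the O(n^3)
                # needed to recompute the product itself.
                if np.linalg.norm(A @ (B @ x) - C @ x) > tol * np.linalg.norm(C @ x):
                    return False          # discrepancy detected
            return True                   # consistent on all random probes

        A = np.random.rand(200, 200)
        B = np.random.rand(200, 200)
        C = A @ B
        print(freivalds_gaussian(A, B, C))   # True
        C[5, 7] += 1e-3                      # inject a small fault
        print(freivalds_gaussian(A, B, C))   # False with high probability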

    On an integrated Krylov-ADI solver for large-scale Lyapunov equations

    One of the most computationally expensive steps of the low-rank ADI method for large-scale Lyapunov equations is the solution of a shifted linear system at each iteration. We propose the use of the extended Krylov subspace method for this task. In particular, we illustrate how a single approximation space can be constructed to solve all the shifted linear systems needed to achieve a prescribed accuracy in terms of the Lyapunov residual norm. Moreover, we show how to fully merge the two iterative procedures in order to obtain a novel, efficient implementation of the low-rank ADI method for an important class of equations. Many state-of-the-art algorithms for the shift computation can be easily incorporated into our new scheme as well. Several numerical results illustrate the potential of our novel procedure when compared to an implementation of the low-rank ADI method based on sparse direct solvers for the shifted linear systems.
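
    For orientation, below is a minimal dense-algebra sketch of the residual-based low-rank ADI iteration for A X + X A^T + B B^T = 0 with real negative shifts; the paper's contribution is to replace the shifted solves with a single extended Krylov approximation space, which this sketch does not do (shifts and sizes are purely illustrative):

        # Dense-solver sketch of the residual-based low-rank ADI iteration for
        # A X + X A^T + B B^T = 0 with real negative shifts p.
        import numpy as np

        def lr_adi(A, B, shifts, tol=1e-10):
            n = A.shape[0]
            I = np.eye(n)
            W = B.copy()                    # residual factor: R = W @ W.T
            Z = np.empty((n, 0))            # low-rank factor: X ~= Z @ Z.T
            b2 = np.linalg.norm(B.T @ B, 2)
            for p in shifts:                # p < 0 for a stable A
                V = np.linalg.solve(A + p * I, W)   # shifted linear solve
                W = W - 2.0 * p * V                 # update residual factor
                Z = np.hstack([Z, np.sqrt(-2.0 * p) * V])
                if np.linalg.norm(W.T @ W, 2) <= tol * b2:
                    break                   # Lyapunov residual small enough
            return Z

        rng = np.random.default_rng(0)
        n = 100
        A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))   # stable test matrix
        B = rng.standard_normal((n, 1))
        Z = lr_adi(A, B, shifts=np.tile([-0.5, -1.0, -4.0], 10))
        X = Z @ Z.T
        print(np.linalg.norm(A @ X + X @ A.T + B @ B.T))       # small residual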

    Tensor Train Decomposition for solving high-dimensional Mutual Hazard Networks

    We describe the process of enabling the Mutual Hazard Network model for large data sets, i.e., for high dimensions, by using the Tensor Train decomposition. We first briefly review the Mutual Hazard Network model and explain its limitations when using classical methods. We then introduce the Tensor Train format and explain how to perform required operations in it with a particular emphasis on solving systems of linear equations. Next, we explain how to apply the Tensor Train format to the Mutual Hazard Network. Furthermore, we describe some technical aspects of the software implementation. Finally, we present numerical results of different methods used to solve linear systems which occur in the Mutual Hazard Network model. These methods allow the complexity in the number of events d to be reduced from \mathcal{O}(2^d) to \mathcal{O}(d^3), thereby enabling the Mutual Hazard Network model to be applied to larger data sets.
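
    To illustrate the format itself (not the paper's solver), the following sketch builds a Tensor Train from a full tensor by successive truncated SVDs; it is only feasible for small d, since the whole point of the TT approach is never to form the 2^d-sized object explicitly:

        # TT-SVD: decompose a full tensor into Tensor Train cores via
        # successive truncated SVDs.
        import numpy as np

        def tt_svd(T, tol=1e-12):
            dims, d = T.shape, T.ndim
            cores, r_prev = [], 1
            C = np.asarray(T, dtype=float)
            for k in range(d - 1):
                C = C.reshape(r_prev * dims[k], -1)
                U, s, Vt = np.linalg.svd(C, full_matrices=False)
                r = max(1, int(np.sum(s > tol * s[0])))   # truncated TT rank
                cores.append(U[:, :r].reshape(r_prev, dims[k], r))
                C = s[:r, None] * Vt[:r, :]               # carry remainder forward
                r_prev = r
            cores.append(C.reshape(r_prev, dims[-1], 1))
            return cores

        def tt_to_full(cores):
            full = cores[0]
            for G in cores[1:]:
                full = np.tensordot(full, G, axes=([full.ndim - 1], [0]))
            return full.squeeze(axis=(0, full.ndim - 1))

        # A tensor whose entries depend only on the index sum has tiny TT ranks.
        d = 10
        T = np.exp(-np.indices((2,) * d).sum(axis=0).astype(float))
        cores = tt_svd(T)
        print([G.shape for G in cores])                 # all TT ranks equal 1
        print(np.max(np.abs(tt_to_full(cores) - T)))    # ~ machine precision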

    The INTERNODES method for applications in contact mechanics and dedicated preconditioning techniques

    The mortar finite element method is a well-established method for the numerical solution of partial differential equations on domains displaying non-conforming interfaces. The method is known for its application in computational contact mechanics. However, its implementation remains challenging as it relies on geometrical projections and unconventional quadrature rules. The INTERNODES (INTERpolation for NOn-conforming DEcompositionS) method, instead, can overcome these implementation difficulties thanks to flexible interpolation techniques. Moreover, it was shown to be at least as accurate as the mortar method, making it a very promising alternative for solving problems in contact mechanics. Unfortunately, in such situations the method requires solving a sequence of ill-conditioned linear systems. In this paper, preconditioning techniques are designed and implemented for the efficient solution of those linear systems.
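
    As a toy illustration of the interpolation idea only, the sketch below builds the two intergrid transfer operators between non-matching 1D interface meshes; INTERNODES itself uses Lagrange or radial-basis-function interpolation and appropriately rescaled transfer of nodal loads, so plain piecewise-linear interpolation merely stands in here:

        # Two non-matching discretizations of the same interface, coupled by a
        # pair of interpolation operators (one per direction).
        import numpy as np

        def interp_matrix(x_from, x_to):
            """Piecewise-linear interpolation matrix: rows = target nodes."""
            P = np.zeros((x_to.size, x_from.size))
            for i, x in enumerate(x_to):
                j = np.searchsorted(x_from, x)
                if j == 0:
                    P[i, 0] = 1.0
                elif j == x_from.size:
                    P[i, -1] = 1.0
                else:
                    w = (x - x_from[j - 1]) / (x_from[j] - x_from[j - 1])
                    P[i, j - 1], P[i, j] = 1.0 - w, w
            return P

        x1 = np.linspace(0.0, 1.0, 7)    # interface nodes, side 1
        x2 = np.linspace(0.0, 1.0, 11)   # non-matching interface nodes, side 2
        P12 = interp_matrix(x2, x1)      # side-2 values -> side-1 nodes
        P21 = interp_matrix(x1, x2)      # side-1 values -> side-2 nodes
        u2 = np.sin(np.pi * x2)
        print(np.max(np.abs(P12 @ u2 - np.sin(np.pi * x1))))   # small interpolation error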

    General-purpose preconditioning for regularized interior point methods

    In this paper we present general-purpose preconditioners for regularized augmented systems, and their corresponding normal equations, arising from optimization problems. We discuss positive definite preconditioners, suitable for CG and MINRES. We consider “sparsifications” which avoid situations in which eigenvalues of the preconditioned matrix may become complex. Special attention is given to systems arising from the application of regularized interior point methods to linear or nonlinear convex programming problems.
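
    In standard notation (assumed here, not quoted from the paper), the two symmetric systems in question are the regularized augmented system and its normal-equations reduction,

        \begin{pmatrix} -(Q + \Theta^{-1} + \rho I) & A^T \\ A & \delta I \end{pmatrix}
        \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
        =
        \begin{pmatrix} r_x \\ r_y \end{pmatrix},
        \qquad
        \big( A (Q + \Theta^{-1} + \rho I)^{-1} A^T + \delta I \big)\, \Delta y = \bar{r},

    where \rho, \delta > 0 are the primal and dual regularization parameters, \Theta is the diagonal barrier scaling, and Q is the Hessian of the (convex) objective; the preconditioners discussed target these two matrices.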