
    Numerical Algorithms For Stock Option Valuation

    Since the formulation by Black, Scholes, and Merton in 1973 of the first rational option pricing formula that depended only on observable values, the volume of options traded daily on the Chicago Board Options Exchange has grown rapidly. In fact, over the past three decades, options have undergone a transformation from specialized and obscure securities to ubiquitous components of the portfolios not only of large fund managers but also of ordinary individual investors. Essential ingredients of any successful modern investment strategy include the ability to generate income streams and reduce risk, as well as some level of speculation, all of which can be accomplished by effective use of options. Naturally, practitioners require an accurate method of pricing options. Furthermore, because today's market conditions evolve very rapidly, they also need to obtain price estimates quickly. This dissertation is devoted primarily to improving the efficiency of popular valuation procedures for stock options. In particular, we develop a method of simulating values of European stock options under the Heston stochastic volatility model in a fraction of the time required by the existing method. We also develop an efficient method of simulating the values of American stock options under the same dynamics in conjunction with the Least-Squares Monte Carlo (LSM) algorithm. We attempt to improve the efficiency of the LSM algorithm by utilizing quasi-Monte Carlo techniques and spline methodology. Finally, we consider optimal investor behavior and the notion of option trading, as opposed to the much more commonly studied valuation problems.
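
    As a concrete illustration of the kind of simulation this dissertation is concerned with, the sketch below prices a European call under the Heston model with a plain full-truncation Euler scheme and a Monte Carlo average. It is a minimal baseline written for this summary, with made-up parameter values; it is not the accelerated method developed in the dissertation.

        import numpy as np

        def heston_terminal_prices(S0, v0, r, kappa, theta, xi, rho, T, n_steps, n_paths, seed=0):
            """Full-truncation Euler simulation of terminal stock prices under the Heston model."""
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            S = np.full(n_paths, float(S0))
            v = np.full(n_paths, float(v0))
            for _ in range(n_steps):
                z1 = rng.standard_normal(n_paths)
                z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
                v_pos = np.maximum(v, 0.0)                      # truncate negative variance
                S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
                v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
            return S

        # European call estimate as a discounted Monte Carlo average (hypothetical inputs).
        S_T = heston_terminal_prices(S0=100.0, v0=0.04, r=0.03, kappa=1.5, theta=0.04,
                                     xi=0.5, rho=-0.7, T=1.0, n_steps=252, n_paths=100_000)
        print(np.exp(-0.03) * np.maximum(S_T - 100.0, 0.0).mean())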

    Distribution of Random Streams for Simulation Practitioners

    There is increasing interest in the distribution of parallel random number streams in the high-performance computing community, particularly with the many-core shift. Even if we have at our disposal random number generators that are statistically sound according to the latest and most thorough testing libraries, their parallelization can still be a delicate problem. Indeed, a set of recent publications shows that it still has to be mastered by the scientific community. With the arrival of multi-core and many-core processor architectures on the scientist's desktop, modelers who are not specialists in parallelizing stochastic simulations need help and advice in rigorously distributing their experimental plans and replications according to the state of the art in pseudo-random number parallelization techniques. In this paper, we discuss the different partitioning techniques currently in use to provide independent streams, together with their corresponding software. In addition to the classical approaches used to parallelize stochastic simulations on regular processors, this paper also presents recent advances in pseudo-random number generation for general-purpose graphics processing units. The state of the art given in this paper is written for simulation practitioners.
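
    To make the practitioner-oriented message concrete, here is a minimal sketch of one common partitioning strategy: spawn statistically independent child streams from a single root seed and hand one stream to each parallel replication. It uses NumPy's SeedSequence machinery as an example implementation; it is not a recommendation taken from the paper itself.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def replication(child_seed, n_draws=1_000_000):
            """One stochastic replication driven by its own independent stream."""
            rng = np.random.default_rng(child_seed)   # PCG64 generator seeded from the child sequence
            return rng.exponential(size=n_draws).mean()

        if __name__ == "__main__":
            # Spawn one child seed per replication so parallel runs never share a stream.
            root = np.random.SeedSequence(20240131)
            children = root.spawn(8)
            with ProcessPoolExecutor() as pool:
                print(list(pool.map(replication, children)))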

    The Iray Light Transport Simulation and Rendering System

    While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system enables rendering complex scenes at the push of a button and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application that has been adopted by various companies across many fields and is in use by many industry professionals today.

    Haar Wavelets-Based Methods for Credit Risk Portfolio Modeling

    In this dissertation we have investigated the credit risk measurement of a credit portfolio by means of wavelet theory. Banks became subject to regulatory capital requirements under the Basel Accords, as well as to the supervisory review process of capital adequacy, which concerns economic capital. Concentration risks in credit portfolios arise from an unequal distribution of loans to single borrowers (name concentration) or to different industry or regional sectors (sector concentration) and may lead banks to face bankruptcy. The Merton model, the basis of the Basel II approach, is a Gaussian one-factor model in which default events are driven by a latent common factor that is assumed to follow a Gaussian distribution. Under this model, a loss only occurs when an obligor defaults within a fixed time horizon. If we assume certain homogeneity conditions, this one-factor model leads to a simple analytical asymptotic approximation of the loss distribution function and the VaR. The VaR at a high confidence level is the measure chosen in Basel II to calculate regulatory capital. This approximation, usually called the Asymptotic Single Risk Factor (ASRF) model, works well for a large number of small exposures but can underestimate risk in the presence of exposure concentrations. Consequently, the ASRF model does not provide an appropriate quantitative framework for the computation of economic capital. Monte Carlo simulation is a standard method for measuring credit portfolio risk when concentration risks must be taken into account. However, this method is very time consuming when the size of the portfolio increases, making the computation unworkable in many situations. In summary, credit risk managers are interested in how concentration risk can be quantified in a short time and how the contributions of individual transactions to the total risk can be computed.

    Since the loss variable can take only a finite number of discrete values, the cumulative distribution function (CDF) is discontinuous, and Haar wavelets are therefore particularly well suited to such step-shaped functions. For this reason, we have developed a new method for numerically inverting the Laplace transform of the density function, once we have approximated the CDF by a finite sum of Haar wavelet basis functions. Wavelets are used in mathematical analysis to denote a kind of orthonormal basis with remarkable approximation properties. The difference between the usual sine wave and a wavelet can be described by the localization property: while the sine wave is localized in the frequency domain but not in the time domain, a wavelet is localized in both the frequency and the time domain. Once the CDF has been computed, we are able to calculate the VaR at a high loss level. Furthermore, we have also computed the Expected Shortfall (ES), since the VaR is not a coherent risk measure in the sense that it is not sub-additive. We have shown that, for a wide variety of portfolios, these measures are computed quickly and accurately, with a relative error lower than 1% when compared with Monte Carlo. We have also extended this methodology to the estimation of the risk contributions to the VaR and the ES by taking partial derivatives with respect to the exposures, again obtaining high accuracy. Some technical improvements have also been implemented in the computation of the Gauss-Hermite integration formula used to obtain the coefficients of the approximation, making the method faster while preserving its accuracy. Finally, we have extended the wavelet approximation method to the multi-factor setting by means of Monte Carlo and quasi-Monte Carlo methods.
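
    For reference, the ASRF approximation discussed above is commonly written as follows (a standard textbook form, not a formula reproduced from the thesis), where $\Phi$ is the standard normal CDF, $PD_i$, $EAD_i$ and $LGD_i$ are the default probability, exposure and loss given default of obligor $i$, $\rho$ is the asset correlation and $\alpha$ is the confidence level:

        % Default probability conditional on the common factor Y = y, and the
        % resulting ASRF quantile used as the VaR at confidence level \alpha.
        \[
          p_i(y) = \Phi\!\left(\frac{\Phi^{-1}(PD_i) - \sqrt{\rho}\, y}{\sqrt{1-\rho}}\right),
          \qquad
          \mathrm{VaR}_{\alpha} \approx \sum_{i} EAD_i \, LGD_i \,
          \Phi\!\left(\frac{\Phi^{-1}(PD_i) + \sqrt{\rho}\,\Phi^{-1}(\alpha)}{\sqrt{1-\rho}}\right).
        \]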

    Efficient Pricing of High-Dimensional American-Style Derivatives: A Robust Regression Monte Carlo Method

    Pricing high-dimensional American-style derivatives is still a challenging task, as the complexity of numerical methods for solving the underlying mathematical problem grows rapidly with the number of uncertain factors. We tackle the problem of developing efficient algorithms for valuing these complex financial products in two ways. In the first part of this thesis, we extend the important class of regression-based Monte Carlo methods with our Robust Regression Monte Carlo (RRM) method. The key idea of the proposed approach is to fit the continuation value at every exercise date by robust regression rather than by ordinary least squares; we obtain a more accurate approximation of the continuation value because outliers in the cross-sectional data are taken into account. To guarantee an efficient implementation of the RRM method, we suggest a new Newton-Raphson-based solver for robust regression with very good numerical properties. We use techniques from statistical learning theory to prove the convergence of our RRM estimator. To test the numerical efficiency of the method, we price Bermudan options on up to thirty assets. It turns out that our RRM approach shows remarkable convergence behavior; we obtain speed-up factors of more than four compared with the state-of-the-art Least Squares Monte Carlo (LSM) method proposed by Longstaff and Schwartz (2001).

    In the second part of this thesis, we focus on variance reduction techniques. First, we propose a change-of-drift technique to drive paths into regions that are more important for the variance and discuss an efficient implementation of this approach. Regression-based Monte Carlo methods can be combined with the Andersen-Broadie (AB) method (2004) for calculating lower and upper bounds for the true option value; we extend our ideas to the AB approach, and this technique leads to speed-up factors of more than twenty. Secondly, we investigate the effect of using quasi-Monte Carlo techniques for producing lower and upper bounds by the AB approach combined with the LSM method and with our RRM method. In this study, efficiency has high priority, and we are able to accelerate the calculation of the bounds by factors of up to twenty. Moreover, we suggest some simple yet powerful acceleration techniques: we examine the effect of replacing the double-precision procedure for the exponential function and introduce a modified version of the AB approach. We conclude the thesis by combining the most promising approaches proposed here; compared with the state-of-the-art AB method combined with the LSM method, our final algorithm shows remarkable performance, with speed-up factors of more than sixty being quite possible.
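
    The following sketch illustrates the central idea of replacing ordinary least squares with a robust fit inside one backward-induction step of an LSM-style algorithm. The Huber-type IRLS routine, the quadratic basis, and the tiny synthetic example are illustrative stand-ins chosen for brevity; the thesis's actual Newton-Raphson-based solver and its convergence analysis are not reproduced here.

        import numpy as np

        def huber_irls(X, y, delta=1.345, n_iter=25):
            """Huber-type robust regression via iteratively reweighted least squares."""
            beta = np.linalg.lstsq(X, y, rcond=None)[0]           # OLS starting point
            for _ in range(n_iter):
                r = y - X @ beta
                scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
                w = np.minimum(1.0, delta * scale / np.maximum(np.abs(r), 1e-12))
                sw = np.sqrt(w)                                    # weighted LS via sqrt-weights
                beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
            return beta

        def lsm_step(S_t, payoff_t, cashflow_next, discount):
            """One exercise decision: robustly regress discounted future cash flows on a
            quadratic basis over in-the-money paths, then compare with immediate payoff."""
            itm = payoff_t > 0.0
            X = np.column_stack([np.ones(itm.sum()), S_t[itm], S_t[itm] ** 2])
            beta = huber_irls(X, discount * cashflow_next[itm])
            exercise = payoff_t[itm] > X @ beta
            cashflow = discount * cashflow_next
            cashflow[np.flatnonzero(itm)[exercise]] = payoff_t[itm][exercise]
            return cashflow

        # Tiny synthetic usage with made-up numbers (put option, strike 100).
        S_t = np.array([90.0, 95.0, 102.0, 110.0, 85.0])
        payoff_t = np.maximum(100.0 - S_t, 0.0)
        cashflow_next = np.array([12.0, 3.0, 0.0, 0.0, 20.0])
        print(lsm_step(S_t, payoff_t, cashflow_next, discount=0.99))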

    Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines

    Many automatically analyzable scientific questions are well posed and offer a variety of a priori information about the expected outcome. Although it is often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept for the estimation and propagation of uncertainty involved in image analysis operators. This allows using simple processing operators that are suitable for analyzing large-scale 3D+t microscopy images without compromising the result quality. Building on fuzzy set theory, we transform available prior knowledge into a mathematical representation and use it extensively to enhance the result quality of various processing operators. All presented concepts are illustrated on a typical bioimage analysis pipeline comprising seed point detection, segmentation, multiview fusion and tracking. Furthermore, the functionality of the proposed approach is validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploiting prior knowledge to improve the result quality of image analysis pipelines. In particular, the automated analysis of terabyte-scale microscopy data will benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. The generality of the concept, however, also makes it applicable to practically any other field with processing strategies that are arranged as linear pipelines.
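
    As a toy illustration of the general idea, the sketch below encodes two hypothetical priors (an expected nucleus radius range and an expected depth range) as trapezoidal fuzzy membership functions and fuses them with a detector confidence via the minimum t-norm. The numbers and the choice of operators are invented for this summary and are not the fusion rules used in the paper.

        import numpy as np

        def trapezoid(x, a, b, c, d):
            """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c], linear in between."""
            x = np.asarray(x, dtype=float)
            rise = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
            fall = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
            return np.minimum(rise, fall)

        # Hypothetical priors: plausible nucleus radius (pixels) and axial position (slices).
        radius_prior = lambda r: trapezoid(r, 2.0, 4.0, 8.0, 12.0)
        depth_prior = lambda z: trapezoid(z, 10.0, 30.0, 170.0, 190.0)

        def fused_confidence(detector_score, radius, depth):
            """Fuzzy AND (minimum t-norm) of detector confidence and the two priors."""
            return np.minimum(detector_score, np.minimum(radius_prior(radius), depth_prior(depth)))

        print(fused_confidence(0.9, radius=6.0, depth=100.0))   # consistent with priors -> 0.9
        print(fused_confidence(0.9, radius=20.0, depth=100.0))  # implausible radius -> 0.0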

    CryptOpt: Verified Compilation with Random Program Search for Cryptographic Primitives

    Most software domains rely on compilers to translate high-level code into multiple different machine languages, with performance not too much worse than what developers would have the patience to write directly in assembly language. Cryptography, however, has been an exception: many performance-critical routines have been written directly in assembly (sometimes through metaprogramming layers). Some past work has shown how to formally verify that assembly, and other work has shown how to generate C code automatically along with a formal proof, but with consequent performance penalties compared with the best-known assembly. We present CryptOpt, the first compilation pipeline that specializes high-level cryptographic functional programs into assembly code significantly faster than what GCC or Clang produce, with a mechanized proof (in Coq) whose final theorem statement mentions little beyond the input functional program and the operational semantics of x86-64 assembly. On the optimization side, we apply randomized search through the space of assembly programs, with repeated automatic benchmarking on target CPUs. On the formal-verification side, we connect to the Fiat Cryptography framework (which translates functional programs into C-like IR code) and extend it with a new formally verified program-equivalence checker, incorporating a modest subset of known features of SMT solvers and symbolic-execution engines. The overall prototype is quite practical, e.g., producing new fastest-known implementations on the relatively new Intel i9 12G of finite-field arithmetic for both Curve25519 (part of the TLS standard) and the Bitcoin elliptic curve secp256k1.
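
    The optimization loop the abstract describes can be summarized by the following toy sketch: mutate a candidate program at random, benchmark it repeatedly on the target machine, and keep the mutation only if it measures faster. Everything concrete here (Python-level candidates, an unroll-factor mutation, a wall-clock timer) is a stand-in invented for illustration; CryptOpt itself mutates and measures x86-64 assembly.

        import random
        import time

        def benchmark(program, reps=7):
            """Median wall-clock time over several runs (stand-in for cycle-accurate measurement)."""
            times = []
            for _ in range(reps):
                t0 = time.perf_counter()
                program()
                times.append(time.perf_counter() - t0)
            return sorted(times)[reps // 2]

        def make_candidate(unroll, n=4096):
            """A toy 'program': dot product with a given unroll factor (must divide n)."""
            xs, ys = list(range(n)), list(range(n))
            def run():
                acc = 0
                for i in range(0, n, unroll):
                    for j in range(unroll):
                        acc += xs[i + j] * ys[i + j]
                return acc
            run.unroll = unroll
            return run

        def random_search(iters=50):
            """Accept a random mutation only if repeated benchmarking says it is faster."""
            best = make_candidate(unroll=1)
            best_t = benchmark(best)
            for _ in range(iters):
                trial = make_candidate(unroll=random.choice([1, 2, 4, 8, 16]))
                t = benchmark(trial)
                if t < best_t:
                    best, best_t = trial, t
            return best.unroll, best_t

        print(random_search())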