30 research outputs found

    Optimization in Quasi-Monte Carlo Methods for Derivative Valuation

    Computational complexity in financial theory and practice has risen immensely in recent years. Monte Carlo simulation has proved to be a robust and adaptable approach, well suited to supplying numerical solutions to a large class of complex problems. Although Monte Carlo simulation has been widely applied to the pricing of financial derivatives, it has been argued that sampling the relevant region as uniformly as possible is very important. This led to the development of quasi-Monte Carlo methods, which use deterministic points to minimize the integration error. A major disadvantage of low-discrepancy number generators is that they tend to lose their homogeneous coverage as the dimensionality increases. This thesis develops a novel approach to quasi-Monte Carlo methods for evaluating complex financial derivatives more accurately, by optimizing the sample coordinates so as to minimize the discrepancies that appear when using low-discrepancy sequences. The main focus is to develop new methods to optimize the sample coordinate vector and to test their performance against existing quasi-Monte Carlo methods in pricing complicated multidimensional derivatives. Three new methods are developed: the Gear, the Simulated Annealing, and the Stochastic Tunneling methods. These methods are used to evaluate complex multi-asset financial derivatives (geometric average and rainbow options) for dimensions up to 2000. It is shown that the two stochastic methods, Simulated Annealing and Stochastic Tunneling, perform better than the existing quasi-Monte Carlo methods based on the Faure and Sobol' sequences. This difference in performance is more evident in higher dimensions, particularly when a low number of points is used in the Monte Carlo simulations. Overall, the Stochastic Tunneling method yields the smallest percentage root mean square relative error and requires less computational time to converge to a global solution, proving to be the most promising method for pricing complex derivatives.
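    The optimization methods above are specific to the thesis, but the baseline comparison it draws, pseudorandom Monte Carlo against a low-discrepancy sequence, is easy to reproduce. Below is a minimal sketch, assuming illustrative market parameters and independent lognormal assets, that prices a geometric-average basket call with both plain Monte Carlo and a scrambled Sobol' sequence via scipy.stats.qmc; the function and parameter names are our own, not the thesis's.

```python
# Minimal sketch: plain Monte Carlo vs. a scrambled Sobol' sequence for a
# geometric-average basket call on d independent lognormal assets.
# All parameter values below are illustrative assumptions, not from the thesis.
import numpy as np
from scipy.stats import norm, qmc

d, n = 64, 2**12            # dimension (number of assets) and sample size
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def payoff(u):
    """Discounted payoff from an (n, d) array of uniforms in (0, 1)."""
    z = norm.ppf(u)                                   # uniforms -> standard normals
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    G = np.exp(np.log(ST).mean(axis=1))               # geometric average of terminal prices
    return np.exp(-r * T) * np.maximum(G - K, 0.0)

rng = np.random.default_rng(0)
mc_est = payoff(rng.random((n, d))).mean()            # pseudorandom estimate

sobol = qmc.Sobol(d=d, scramble=True, seed=0)         # randomized low-discrepancy points
qmc_est = payoff(sobol.random(n)).mean()

print(f"MC:  {mc_est:.4f}\nQMC: {qmc_est:.4f}")
```

    Because the Sobol' points are scrambled, the quasi-Monte Carlo run can be repeated with different seeds to obtain an error estimate; the thesis's Simulated Annealing and Stochastic Tunneling methods go further by directly adjusting the coordinates of the point set to reduce its discrepancy.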

    Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation, and the advantages and disadvantages of each are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding, which aims to reduce the computational cost of assessing the failure probability. Next, a variance-based sensitivity analysis was studied for its ability to identify which input's uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis was used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
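    The variance-based sensitivity analysis mentioned above apportions output variance among the inputs. A minimal sketch of first-order Sobol' indices with the pick-freeze (Saltelli) estimator follows, using the standard Ishigami test function as a stand-in for an entry, descent, and landing simulation; the test function and sample sizes are illustrative assumptions.

```python
# Minimal sketch of first-order Sobol' indices via the pick-freeze estimator.
# The Ishigami test function stands in for an EDL simulation; it is an
# illustrative assumption, not the simulation studied in the paper.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))    # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))

yA = ishigami(A)
yB = ishigami(B)
var_y = yA.var()
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # replace only the i-th column
    # Saltelli (2010) estimator of the first-order effect of input i
    Si = np.mean(yB * (ishigami(ABi) - yA)) / var_y
    print(f"S_{i + 1} ~= {Si:.3f}")
```

    A near-zero index flags an input whose uncertainty barely moves the output, which is exactly the screening question the paper poses for mission parameters.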

    A Study of Biased and Unbiased Stochastic Algorithms for Solving Integral Equations

    In this paper we propose and analyse a new unbiased stochastic method for solving a class of integral equations, namely second kind Fredholm integral equations. We study and compare three possible approaches to computing linear functionals of the integral under consideration: i) a biased Monte Carlo method based on evaluation of the truncated Liouville-Neumann series, ii) transformation of this problem into the problem of computing a finite number of integrals, and iii) an unbiased stochastic approach. Five Monte Carlo algorithms for numerical integration have been applied to approach (ii). Balancing of the stochastic and systematic errors is discussed and applied in the numerical implementation of the biased algorithms. Extensive numerical experiments have been performed to support the theoretical studies on the convergence rate of Monte Carlo methods for numerical integration carried out in our previous work. We compare the results obtained by some of the best biased stochastic approaches with the results obtained by the proposed unbiased approach, and draw conclusions about the applicability and efficiency of the algorithms.
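    For the biased approach (i), a minimal sketch follows: a random-walk estimator of the truncated Liouville-Neumann series for a second kind Fredholm equation u(x) = f(x) + λ∫₀¹ K(x, y) u(y) dy. The kernel, right-hand side, and truncation level are illustrative assumptions, not those of the paper.

```python
# Minimal sketch of the biased approach (i): a random-walk estimator for the
# truncated Liouville-Neumann series of u(x) = f(x) + lam * int_0^1 K(x,y) u(y) dy.
# Kernel, right-hand side, and truncation level are illustrative assumptions.
import numpy as np

lam = 0.5
K = lambda x, y: 0.5 * (x + y)      # assumed kernel, small enough for convergence
f = lambda x: np.exp(-x)            # assumed right-hand side

def u_estimate(x, n_walks=50_000, M=8, rng=None):
    """Estimate u(x) from the first M+1 Neumann-series terms."""
    rng = rng or np.random.default_rng(2)
    total = 0.0
    for _ in range(n_walks):
        w, prev, est = 1.0, x, f(x)          # term m = 0 is just f(x)
        for _ in range(M):
            y = rng.uniform()                # sampling density 1 on [0, 1]
            w *= lam * K(prev, y)            # accumulate lam^m * K(x,y1)...K(y_{m-1},ym)
            est += w * f(y)
            prev = y
        total += est
    return total / n_walks

print(f"u(0.5) ~= {u_estimate(0.5):.4f}")
```

    The truncation bias decays roughly like the tail of the Neumann series, while the statistical error decays like n_walks^(-1/2); trading these off against the total budget is the error balancing the paper discusses.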

    Uncertainty analysis of an aviation climate model and an aircraft price model for assessment of environmental effects

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007. Includes bibliographical references (p. 91-95). Estimating, presenting, and assessing uncertainties are important parts of the assessment of a complex system. This thesis focuses on the assessment of uncertainty in the price module and the climate module of the Aviation Environmental Portfolio Management Tool (APMT). The aircraft price module is part of the Partial Equilibrium Block (PEB) and the climate module is part of the Benefits Valuation Block (BVB) of the APMT. The PEB estimates a future fleet and flight schedule and evaluates manufacturer costs, operator costs, and consumer surplus. The BVB estimates changes in health and welfare for climate, local air quality, and noise from the noise and emissions inventories output by the Aviation Environmental Design Tool (AEDT). The assessment was conducted with various uncertainty assessment and sensitivity analysis methods: nominal range sensitivity analysis (NRSA), hybrid Monte Carlo (HMC) sensitivity analysis, Monte Carlo regression analysis, vary-all-but-one Monte Carlo analysis, and global sensitivity analysis with Sobol' indices and total sensitivity indices. Except for the NRSA, all of these methods are based on Monte Carlo simulation with random sampling. All uncertainty assessment methods provided the same ranking of significant variables in both APMT modules, with two or three significant variables clearly distinguished from the remaining, insignificant ones. In the price module, seat coefficients are the most significant parameters, and age is an insignificant factor among the input variables of the regression model. In the climate module, statistical analyses showed that climate sensitivity and short-lived RF are the most significant variables contributing to the variability of all three outputs. However, the HMC analysis suggested that the discount rate is the most sensitive factor in the NPV estimation. Comparing the Sobol' indices with the total sensitivity indices showed that there are no interactions significant enough to change the ranking of significant variables in either module. By Mina Jun. S.M.
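    Of the methods listed above, the Monte Carlo regression analysis is the simplest to sketch: standardized regression coefficients (SRCs) rank input importance from a single set of random-sampling runs. The linear toy model below is an illustrative assumption, not the APMT price or climate module.

```python
# Minimal sketch of Monte Carlo regression analysis: standardized regression
# coefficients (SRCs) rank input importance from one batch of sampled runs.
# The toy model is an illustrative assumption, not an APMT module.
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
X = rng.normal(size=(n, 3))                    # sampled uncertain inputs
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Regress y on the inputs, then scale each coefficient by std(x_i)/std(y):
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
src = beta[1:] * X.std(axis=0) / y.std()
for i, s in enumerate(src, 1):
    print(f"SRC of input {i}: {s:+.3f}")
```

    The squared SRCs approximate each input's variance share when the response is near-linear, which is why the ranking agreed with the Sobol'-index ranking in the thesis's modules.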

    Constructive approaches to quasi-Monte Carlo methods for multiple integration

    Recently, quasi-Monte Carlo methods have been successfully used for approximating multiple integrals in hundreds of dimensions in mathematical finance, and were significantly more efficient than Monte Carlo methods. To understand the apparent success of quasi-Monte Carlo methods for multiple integration, one popular approach is to study worst-case error bounds in weighted function spaces in which the importance of the variables is moderated by some sequence of weights. Ideally, a family of quasi-Monte Carlo methods in some weighted function space should be strongly tractable. Strong tractability means that the minimal number of quadrature points n needed to reduce the initial error by a factor of ε is bounded by a polynomial in ε⁻¹ independently of the dimension d. Several recent publications show the existence of lattice rules that satisfy the strong tractability error bounds in weighted Korobov spaces of periodic integrands and weighted Sobolev spaces of non-periodic integrands. However, those results were non-constructive and thus give no clues as to how to actually construct these lattice rules. In this thesis, we focus on the construction of quasi-Monte Carlo methods that are strongly tractable. We develop and justify algorithms for the construction of lattice rules that achieve strong tractability error bounds in weighted Korobov and Sobolev spaces. The parameters characterizing these lattice rules are found ‘component-by-component’: the (d + 1)-th component is obtained by a 1-dimensional search, with the previous d components kept unchanged. The cost of these algorithms varies from O(nd²) to O(n³d²) operations. With currently available technology, they allow rules to be constructed easily with values of n up to several million and dimensions d up to several hundred.
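    A minimal sketch of the component-by-component construction follows, for a weighted Korobov space with smoothness α = 2, where the squared worst-case error of a rank-1 lattice rule with generating vector z has the standard closed form e²(z) = −1 + (1/n) Σₖ Πⱼ (1 + γⱼ · 2π² · B₂({k zⱼ/n})) with B₂(x) = x² − x + 1/6. The naive search below costs O(n²) per component; n is assumed prime and the weights γⱼ are illustrative.

```python
# Minimal sketch of component-by-component (CBC) construction of a rank-1
# lattice rule in a weighted Korobov space with smoothness alpha = 2.
# Naive O(n^2) search per component; n is assumed prime so every candidate
# z in {1, ..., n-1} is coprime with n. Weights are illustrative.
import numpy as np

def cbc(n, d, gamma):
    k = np.arange(n)
    B2 = lambda x: x * x - x + 1.0 / 6.0    # Bernoulli polynomial of degree 2
    prod = np.ones(n)                       # running product over chosen components
    z, err = [], np.inf
    for j in range(d):
        best_zj, err = None, np.inf
        for cand in range(1, n):
            frac = (k * cand % n) / n
            trial = prod * (1.0 + gamma[j] * 2.0 * np.pi**2 * B2(frac))
            e2 = -1.0 + trial.mean()        # squared worst-case error
            if e2 < err:
                best_zj, err = cand, e2
        z.append(best_zj)
        frac = (k * best_zj % n) / n
        prod *= 1.0 + gamma[j] * 2.0 * np.pi**2 * B2(frac)
    return z, err

z, err2 = cbc(n=127, d=5, gamma=[0.9**j for j in range(1, 6)])
print("generating vector:", z, " squared worst-case error:", err2)
```

    The key point, proved in the thesis's setting, is that the greedy one-dimensional searches already achieve the strong tractability error bounds; the fuller algorithms trade the naive search above for the stated O(nd²) to O(n³d²) costs.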

    Uncertainty analysis of power systems using collocation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008. Includes bibliographical references (p. 93-97). The next-generation all-electric ship represents a class of design and control problems in which the system is too large to approach analytically, and often too large even for conventional computational techniques. Additionally, numerous environmental interactions and inaccurate system model information make uncertainty a necessary consideration. Characterizing systems under uncertainty is essentially a problem of representing the system as a function over a random space. This can be accomplished by sampling the function, where in the case of the electric ship a "sample" is a simulation with uncertain parameters set according to the location of the sample. For systems on the scale of the electric ship, simulation is expensive, so we seek an accurate representation of the system from a minimal number of simulations. To this end, collocation is employed to compute statistical moments, from which sensitivity can be inferred, and to construct surrogate models with which interpolation can be used to propagate PDFs. These techniques are applied to three large-scale electric ship models. The conventional formulation of the sparse grid, a collocation algorithm, is modified to yield improved performance; theoretical bounds and computational examples are given to support the modification. A dimension-adaptive collocation algorithm is implemented in an unscented Kalman filter, and improvement over extended Kalman and unscented filters is seen in two examples. By Joshua Adam Taylor. S.M.
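    A minimal sketch of collocation for statistical moments follows, using a full tensor Gauss-Hermite grid in two Gaussian parameters. The model function is an illustrative stand-in for an electric-ship simulation; the thesis's modified sparse grids replace this tensor product to keep the number of simulations manageable in higher dimensions.

```python
# Minimal sketch of collocation for statistical moments: a tensor
# Gauss-Hermite grid propagates two Gaussian uncertain parameters through a
# model. The model function is an illustrative stand-in, not one of the
# electric-ship simulations.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite

def model(a, b):
    return np.exp(0.1 * a) * np.sin(b)        # stand-in for an expensive simulation

nodes, weights = hermegauss(7)                # 7 collocation points per dimension
W = weights / weights.sum()                   # normalize to a probability measure

# Tensor grid over two independent standard-normal parameters:
A, B = np.meshgrid(nodes, nodes, indexing="ij")
WW = np.outer(W, W)
Y = model(A, B)                               # 49 "simulations"

mean = np.sum(WW * Y)
var = np.sum(WW * (Y - mean) ** 2)
print(f"mean ~= {mean:.5f}, variance ~= {var:.5f}")
```

    The tensor grid needs 7^d simulations in d dimensions, which is exactly the growth the sparse-grid and dimension-adaptive algorithms in the thesis are designed to avoid.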

    Application of Digital Image Correlation methods and probabilistic approaches to the design of pressurized solutions made of composite materials

    Thesis by compendium of publications. Science and technology are indispensable tools in the construction of modern societies. Today we are immersed in a dynamic that not only seeks solutions to everyday problems but also strives to advance knowledge continuously, in order to meet new challenges, find new solutions, and raise society's technological level. This philosophy underpins the present doctoral thesis. Years ago it was said that some fields were reaching their so-called 'technological ceilings', but as engineering approached them a revolution occurred thanks to the appearance of composite materials. Although engineering is now supported by powerful tools and techniques, we still tend to apply traditional methods and acquired knowledge to new situations. With these new materials, such an attitude does not always give reliable results, given their complex behaviour and the lack of experience in using them.
    This work aims to make progress by seeking methods and solutions better adapted to the current engineering context for composite materials, with the goal of finding designs that are both more reliable and more economical. The first objective is to achieve a synergy with geomatic technologies, specifically Digital Image Correlation. This technique provides a better understanding of the material as a whole, along with other economic advantages. Initially, the work focuses on comparing and applying the cited geomatic technique to improve the characterization of composites suitable for pressurized solutions (vessels, piping, etc.). The data obtained also allow the variable behaviour of the material to be characterized through a probabilistic approach. The numerical calculation processes are adapted to this new approach, and sensitivity analysis techniques are applied to identify the critical design parameters. The treatment of the results is also advanced, which constitutes the next evolutionary step in engineering: moving from deterministic to probabilistic approaches, supported by what is known as Robust Engineering assisted by surrogate methods, in order to define viable procedures for transfer to industry.
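    The probabilistic design approach described above can be illustrated with a minimal Monte Carlo reliability sketch: the hoop stress σ = pD/(2t) of a thin-walled pressurized pipe is compared against a random material strength to estimate a failure probability. All distributions below are illustrative assumptions, not the DIC-derived material data of the thesis.

```python
# Minimal sketch of probabilistic design: Monte Carlo estimate of the failure
# probability of a thin-walled composite pipe under internal pressure, using
# the hoop-stress formula sigma = p*D/(2*t). All distributions are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
p = rng.normal(10e6, 0.5e6, n)                     # internal pressure [Pa]
t = rng.normal(5e-3, 0.2e-3, n)                    # wall thickness [m]
strength = rng.lognormal(np.log(400e6), 0.10, n)   # hoop strength [Pa]
D = 0.3                                            # mean diameter [m], fixed

stress = p * D / (2.0 * t)                         # thin-wall hoop stress
pf = np.mean(stress > strength)                    # failure probability estimate
print(f"P_f ~= {pf:.2e}")
```

    A deterministic check with safety factors would return a single pass/fail verdict; the probabilistic version quantifies how likely failure is under the measured scatter, which is the shift in design philosophy the thesis argues for.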

    Machine Learning-Based Data- and Model-Driven Bayesian Uncertainty Quantification of Inverse Problems for Suspended Non-structural Systems

    Inverse problems involve extracting the internal structure of a physical system from noisy measurement data. In many fields, Bayesian inference is used to address the ill-conditioned nature of the inverse problem by incorporating prior information through an initial distribution. In the nonparametric Bayesian framework, surrogate models such as Gaussian processes or deep neural networks are used as flexible and effective probabilistic modeling tools to overcome the curse of dimensionality and reduce computational costs. In practical systems and computer models, uncertainties can be addressed through parameter calibration, sensitivity analysis, and uncertainty quantification, leading to improved reliability and robustness of decision and control strategies based on simulation or prediction results. However, preventing overfitting in the surrogate model while incorporating reasonable prior knowledge of the embedded physics and models remains a challenge. Suspended Nonstructural Systems (SNS) pose a significant challenge for the inverse problem, and research on their seismic performance and mechanical models, particularly on the inverse problem and uncertainty quantification, is still lacking. To address this, the author conducts full-scale shaking-table dynamic experiments, monotonic and cyclic tests, and simulations of different types of SNS to investigate their mechanical behavior. To quantify the uncertainty of the inverse problem, the author proposes a new framework that adopts machine learning-based, data- and model-driven stochastic Gaussian process model calibration, quantifying the uncertainty via a new black-box variational inference that accounts for a geometric complexity measure, Minimum Description Length (MDL), through Bayesian inference. The framework is validated on the SNS and yields optimal generalizability and computational scalability.
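    As a much-simplified illustration of surrogate-based Bayesian calibration, the sketch below fits a Gaussian process (scikit-learn) to a few runs of a cheap stand-in forward model and computes a grid posterior for one parameter. The forward model, prior, and noise level are assumptions; the thesis's framework instead uses stochastic GP model calibration with a black-box variational inference accounting for MDL.

```python
# Minimal sketch of surrogate-based Bayesian calibration: a Gaussian process
# replaces the forward model, and a grid posterior calibrates one parameter
# from a noisy observation. Forward model, prior, and noise are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

forward = lambda theta: np.sin(3.0 * theta) + 0.5 * theta   # "expensive" model
rng = np.random.default_rng(5)

# Train the surrogate on a few forward-model runs:
X = np.linspace(0.0, 2.0, 12)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gp.fit(X, forward(X[:, 0]))

# Noisy observation of the true system at theta_true = 1.2:
sigma = 0.05
y_obs = forward(1.2) + rng.normal(0.0, sigma)

# Grid posterior: uniform prior on [0, 2] times a Gaussian likelihood,
# with the surrogate standing in for the forward model.
grid = np.linspace(0.0, 2.0, 400)
mu = gp.predict(grid[:, None])
log_post = -0.5 * ((y_obs - mu) / sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"posterior mean of theta: {(grid * post).sum():.3f}")
```

    Because the stand-in forward model is non-monotonic, the grid posterior can be multimodal; richer inference schemes such as the variational approach proposed in the thesis are designed to cope with exactly this kind of geometric complexity.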