
    Survey and Evaluate Uncertainty Quantification Methodologies


    Numerical studies of space filling designs: optimization of Latin Hypercube Samples and subprojection properties

    Quantitative assessment of the uncertainties tainting the results of computer simulations is nowadays a major topic of interest in both the industrial and scientific communities. One of the key issues in such studies is to get information about the output when the numerical simulations are expensive to run. This paper considers the problem of exploring the whole space of variations of the computer model input variables in the context of a large-dimensional exploration space. Various properties of space-filling designs are justified: interpoint distance, discrepancy, and minimum-spanning-tree criteria. A specific class of design, the optimized Latin Hypercube Sample, is considered. Several optimization algorithms from the literature are studied in terms of convergence speed, robustness to subprojection, and space-filling properties of the resulting design. Some recommendations for building such designs are given. Finally, another contribution of this paper is a deep analysis of the space-filling properties of the designs' 2D subprojections.
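As an illustration of the criteria mentioned in this abstract, the sketch below generates a Latin Hypercube Sample with SciPy's qmc module and scores it, together with one of its 2D subprojections, by minimum interpoint distance and centered L2 discrepancy. The dimension, sample size, and criterion choices are arbitrary stand-ins, not the paper's actual experimental setup.

```python
# Score a Latin Hypercube Sample with two space-filling criteria:
# minimum interpoint distance (maximin) and centered L2 discrepancy.
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import pdist

d, n = 10, 100                       # input dimension, number of design points
sampler = qmc.LatinHypercube(d=d, seed=0)
design = sampler.random(n)           # n x d design in the unit hypercube

mindist = pdist(design).min()        # maximin criterion: larger is better
disc = qmc.discrepancy(design)       # centered L2 discrepancy: smaller is better

# A 2D subprojection, e.g. onto inputs 0 and 1, can be scored the same way,
# which is how subprojection quality is assessed.
sub = design[:, [0, 1]]
print(mindist, disc, pdist(sub).min(), qmc.discrepancy(sub))
```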

    A Study of Adaptation Mechanisms for Simulation Algorithms

    The performance of a program can sometimes improve greatly if the features of the input it is supposed to process, the actual operating parameters it is supposed to work with, or the specific environment it is to run on are known in advance. However, this information is typically not available until too late in the program's operation to take advantage of it. This is especially true for simulation algorithms, which are sensitive to this late-arriving information and whose role in the solution of decision-making, inference, and valuation problems is crucial. To overcome this limitation, we need to provide the flexibility for a program to adapt its behaviour to late-arriving information once it becomes available. In this thesis, I study three adaptation mechanisms: run-time code generation, model-specific (quasi) Monte Carlo sampling, and dynamic computation offloading, and evaluate their benefits on Monte Carlo algorithms. First, run-time code generation is studied in the context of Monte Carlo algorithms for time-series filtering in the form of the Input-Adaptive Kalman filter, a dynamically generated state estimator for non-linear, non-Gaussian dynamic systems. The second adaptation mechanism is the application of the functional-ANOVA decomposition to generate model-specific QMC samplers, which can then be used to improve Monte Carlo-based integration. The third adaptation mechanism treated here, dynamic computation offloading, is applied to wireless communication management, where network conditions are assessed via option valuation techniques to determine whether a program should offload computations or carry them out locally in order to achieve higher run-time (and correspondingly battery-usage) efficiency. This ability makes the program well suited for operation in mobile environments. At their core, all these applications carry out or make use of (quasi) Monte Carlo simulations on dynamic Bayesian networks (DBNs). The DBN formalism and its associated simulation-based algorithms are of great value in the solution of problems with a large uncertainty component. This characteristic makes adaptation techniques like those studied here likely to gain relevance in a world where computers are endowed with perception capabilities and are expected to deal with an ever-increasing stream of sensor and time-series data.
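A minimal sketch of the quasi-Monte Carlo integration idea behind the second adaptation mechanism: replacing pseudo-random points with a scrambled Sobol sequence when estimating an expectation. The test integrand and sample sizes are arbitrary assumptions; this is not the thesis's functional-ANOVA-derived, model-specific sampler.

```python
# Compare plain Monte Carlo with scrambled Sobol QMC on a toy integrand.
import numpy as np
from scipy.stats import qmc

def g(x):
    # Simple additive-plus-interaction test function on the unit hypercube
    return np.sum(x, axis=1) + np.prod(x, axis=1)

d, m = 5, 12
n = 2 ** m

rng = np.random.default_rng(0)
mc_est = g(rng.random((n, d))).mean()          # plain Monte Carlo estimate

sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_est = g(sobol.random_base2(m)).mean()      # scrambled Sobol QMC estimate

print(mc_est, qmc_est)   # exact value is d/2 + (1/2)**d = 2.53125
```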

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    Risk-Sensitive Reinforcement Learning: A Constrained Optimization Viewpoint

    The classic objective in a reinforcement learning (RL) problem is to find a policy that minimizes, in expectation, a long-run objective such as the infinite-horizon discounted or long-run average cost. In many practical applications, optimizing the expected value alone is not sufficient, and it may be necessary to include a risk measure in the optimization process, either as the objective or as a constraint. Various risk measures have been proposed in the literature, e.g., the mean-variance tradeoff, exponential utility, percentile performance, value at risk, conditional value at risk, and prospect theory and its later enhancement, cumulative prospect theory. In this article, we focus on the combination of risk criteria and reinforcement learning in a constrained optimization framework, i.e., a setting where the goal is to find a policy that optimizes the usual objective of infinite-horizon discounted/average cost, while ensuring that an explicit risk constraint is satisfied. We introduce the risk-constrained RL framework, cover popular risk measures based on variance, conditional value-at-risk and cumulative prospect theory, and present a template for a risk-sensitive RL algorithm. We survey some of our recent work on this topic, covering problems encompassing discounted cost, average cost, and stochastic shortest path settings, together with the aforementioned risk measures in a constrained framework. This non-exhaustive survey is aimed at giving a flavor of the challenges involved in solving a risk-sensitive RL problem, and at outlining some potential future research directions.
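To make the risk-constrained formulation concrete, the sketch below estimates conditional value-at-risk (CVaR) from a sample of simulated costs and forms a penalized objective that trades expected cost against the risk constraint. The confidence level, threshold, multiplier, and cost distribution are illustrative assumptions, not the article's algorithm.

```python
# Empirical CVaR of sampled costs and a penalized risk-constrained objective.
import numpy as np

def cvar(costs, alpha=0.95):
    """Empirical CVaR: mean of the worst (1 - alpha) fraction of costs."""
    var = np.quantile(costs, alpha)          # value-at-risk at level alpha
    return costs[costs >= var].mean()

rng = np.random.default_rng(0)
costs = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # simulated trajectory costs

expected_cost = costs.mean()
risk = cvar(costs, alpha=0.95)

# Constrained problem: minimize E[cost] subject to CVaR_alpha(cost) <= threshold.
# A penalized relaxation trades the two terms off with a multiplier lam.
threshold, lam = 5.0, 0.1
penalized = expected_cost + lam * max(risk - threshold, 0.0)
print(expected_cost, risk, penalized)
```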

    Quantum computing for finance

    Quantum computers are expected to surpass the computational capabilities of classical computers and have a transformative impact on numerous industry sectors. We present a comprehensive summary of the state of the art of quantum computing for financial applications, with particular emphasis on stochastic modeling, optimization, and machine learning. This Review is aimed at physicists, so it outlines the classical techniques used by the financial industry and discusses the potential advantages and limitations of quantum techniques. Finally, we look at the challenges that physicists could help tackle.

    Data Analysis and Experimental Design for Accelerated Life Testing with Heterogeneous Group Effects

    In accelerated life tests (ALTs), complete randomization is hardly achievable because of economic and engineering constraints. Typical experimental protocols such as subsampling or random blocks in ALTs result in a grouped structure, which leads to correlated lifetime observations. In this dissertation, a generalized linear mixed model (GLMM) approach is proposed to analyze ALT data and to find the optimal ALT design with consideration of heterogeneous group effects. Two types of ALTs are demonstrated for data analysis. First, constant-stress ALT (CSALT) data with a Weibull failure time distribution is modeled by a GLMM. The marginal likelihood of the observations is approximated by a quadrature rule, and the maximum likelihood (ML) estimation method is applied in an iterative fashion to estimate unknown parameters, including the variance component of the random effect. Second, step-stress ALT (SSALT) data with random group effects is analyzed in a similar manner, but with the assumption of an exponentially distributed failure time in each stress step. Two parameter estimation methods, from the frequentist and Bayesian points of view, are applied and compared with other traditional models through a simulation study and a real example of heterogeneous SSALT data. The proposed random effect model shows superiority in terms of reducing bias and variance in the estimation of the life-stress relationship. The GLMM approach is particularly useful for the optimal experimental design of ALTs while taking the random group effects into account. Specifically, planning ALTs under a nested design structure with random test chamber effects is studied. A greedy two-phased approach shows that different test chamber assignments to stress conditions substantially impact the estimation of unknown parameters. Then, the D-optimal test plan with two test chambers is constructed by applying the quasi-likelihood approach. Lastly, the optimal ALT planning is expanded to the case of multiple sources of random effects, so that the crossed design structure is also considered along with the nested structure.
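A hedged sketch of the kind of marginal likelihood described above: Weibull lifetimes whose scale depends on a stress covariate and on a normal random group (test-chamber) effect, with the random effect integrated out by Gauss-Hermite quadrature. The model form, parameter names, and synthetic data are illustrative assumptions, not the dissertation's exact specification.

```python
# Marginal log-likelihood of grouped Weibull lifetimes with a normal random
# group effect, integrated out by Gauss-Hermite quadrature.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import weibull_min

def marginal_loglik(params, groups, stress, t, n_quad=20):
    b0, b1, log_shape, log_sigma = params
    shape, sigma = np.exp(log_shape), np.exp(log_sigma)
    # Nodes/weights for integrals of exp(-x**2) * f(x)
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    b = np.sqrt(2.0) * sigma * nodes          # candidate random-effect values
    total = 0.0
    for g in np.unique(groups):
        tg, sg = t[groups == g], stress[groups == g]
        # Weibull scale depends on stress and the group effect: (n_g, n_quad)
        scale = np.exp(b0 + b1 * sg[:, None] + b[None, :])
        ll = weibull_min.logpdf(tg[:, None], shape, scale=scale).sum(axis=0)
        total += logsumexp(ll + np.log(weights)) - 0.5 * np.log(np.pi)
    return total

# Tiny synthetic example: 4 groups (test chambers), one stress level per group.
rng = np.random.default_rng(0)
groups = np.repeat(np.arange(4), 10)
stress = np.repeat([0.0, 0.5, 1.0, 1.5], 10)
t = weibull_min.rvs(1.5, scale=np.exp(2.0 - 0.8 * stress), random_state=rng)
print(marginal_loglik([2.0, -0.8, np.log(1.5), np.log(0.3)], groups, stress, t))
```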

    Design and optimization under uncertainty of Energy Systems

    In many engineering design and optimisation problems, the presence of uncertainty in data and parameters is a central and critical issue. The analysis and design of advanced complex energy systems is generally performed starting from a single operating condition and assuming a series of design and operating parameters as fixed values. However, many of the variables on which the design is based are subject to uncertainty because they are not determinable with adequate precision, and they can affect both performance and cost. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and we can say that any system used in engineering is subject to some degree of uncertainty. Different fields of engineering use different ways to describe this uncertainty and adopt a variety of techniques to approach the problem. The past decade has seen a significant growth of research and development in uncertainty quantification methods to analyse the propagation of uncertain inputs through the systems. The main challenges in this field are identifying the sources of uncertainty that potentially affect the outcomes and propagating these uncertainties efficiently from the sources to the quantities of interest, especially when there are many sources of uncertainty. Hence, the level of rigor of an uncertainty analysis depends on the quality of the uncertainty quantification method. The main obstacle to this analysis is often the computational effort, because the representative model is typically highly non-linear and complex. Therefore, it is necessary to have a robust tool that can perform the uncertainty propagation through a non-intrusive approach with as few evaluations as possible. The primary goal of this work is to show a robust method for uncertainty quantification applied to energy systems. The first step in this direction was an analysis of the uncertainties affecting a recuperator for micro gas turbines, carried out with the Monte Carlo and Response Sensitivity Analysis methodologies. However, when considering more complex energy systems, one of the main weaknesses of uncertainty quantification methods arises: the extremely high computational effort needed. For this reason, the application of a so-called metamodel was found necessary and useful. This approach was applied to perform a complete analysis under uncertainty of a solid oxide fuel cell hybrid system, starting from the evaluation of the impact of several uncertainties on the system up to a robust design including a multi-objective optimization. The response surfaces allowed the authors to account for the uncertainties in the system while performing an acceptable number of simulations. These response surfaces were then used to perform a Monte Carlo simulation to evaluate the impact of the uncertainties on the monitored outputs, giving insight into the spread of the resulting probability density functions and thus into which outputs should be considered more carefully during the design phase. Finally, the analysis of a complex combined cycle with a flue gas condensing heat pump subject to market uncertainties was performed. To account for the uncertainty in the electricity price, which directly impacts the revenues of the system, a statistical study of the behaviour of this price over the years was carried out. From the data obtained, a probability density function representing the price behaviour was created for each hour of the day, and those distributions were then used to analyse the variability of the system in terms of revenues and emissions.
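The metamodel-based propagation described above can be sketched in a few lines: fit a cheap response surface to a handful of runs of an expensive model, then push a large Monte Carlo sample of the uncertain inputs through the surface to estimate the output distribution. The toy model, the quadratic surface, and the input distributions below are placeholders for the actual energy-system simulations and surrogates.

```python
# Metamodel (response surface) based Monte Carlo uncertainty propagation.
import numpy as np

def expensive_model(x1, x2):
    # Stand-in for a costly system simulation (e.g. efficiency vs. two inputs)
    return 0.55 + 0.03 * x1 - 0.02 * x2 ** 2 + 0.01 * x1 * x2

def quad_features(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# 1) Small design of experiments on the expensive model
rng = np.random.default_rng(0)
x1_doe = rng.uniform(-1, 1, 30)
x2_doe = rng.uniform(-1, 1, 30)
y_doe = expensive_model(x1_doe, x2_doe)

# 2) Least-squares fit of a second-order response surface
coef, *_ = np.linalg.lstsq(quad_features(x1_doe, x2_doe), y_doe, rcond=None)

# 3) Monte Carlo on the cheap surrogate with the assumed input uncertainties
x1_mc = rng.normal(0.0, 0.3, 100_000)
x2_mc = rng.normal(0.0, 0.2, 100_000)
y_mc = quad_features(x1_mc, x2_mc) @ coef

print(y_mc.mean(), y_mc.std(), np.percentile(y_mc, [5, 95]))
```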

    Numerical Algorithms For Stock Option Valuation

    Since the formulation by Black, Scholes, and Merton in 1973 of the first rational option pricing formula that depended only on observable values, the volume of options traded daily on the Chicago Board Options Exchange has grown rapidly. In fact, over the past three decades, options have undergone a transformation from specialized and obscure securities to ubiquitous components of the portfolios of not only large fund managers, but also of ordinary individual investors. Essential ingredients of any successful modern investment strategy include the ability to generate income streams and reduce risk, as well as some level of speculation, all of which can be accomplished by effective use of options. Naturally, practitioners require an accurate method of pricing options. Furthermore, because today's market conditions evolve very rapidly, they also need to be able to obtain price estimates quickly. This dissertation is devoted primarily to improving the efficiency of popular valuation procedures for stock options. In particular, we develop a method of simulating values of European stock options under the Heston stochastic volatility model in a fraction of the time required by the existing method. We also develop an efficient method of simulating the values of American stock options under the same dynamics in conjunction with the Least-Squares Monte Carlo (LSM) algorithm. We attempt to improve the efficiency of the LSM algorithm by utilizing quasi-Monte Carlo techniques and spline methodology. We also consider optimal investor behavior and the notion of option trading, as opposed to the much more commonly studied valuation problems.
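For context, the sketch below shows the baseline computation such work aims to accelerate: Monte Carlo valuation of a European call under the Heston stochastic volatility model using a full-truncation Euler scheme. All parameter values are arbitrary examples, and this is the plain scheme rather than the dissertation's faster method.

```python
# Plain Euler Monte Carlo pricing of a European call under the Heston model.
import numpy as np

def heston_euro_call(S0, K, T, r, v0, kappa, theta, xi, rho,
                     n_paths=100_000, n_steps=250, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                        # full truncation
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    payoff = np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

price = heston_euro_call(S0=100, K=100, T=1.0, r=0.03,
                         v0=0.04, kappa=1.5, theta=0.04, xi=0.5, rho=-0.7)
print(price)
```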