
    Quasi-random Monte Carlo application in CGE systematic sensitivity analysis

    The uncertainty and robustness of Computable General Equilibrium (CGE) models can be assessed by conducting a Systematic Sensitivity Analysis (SSA). Several methods have been used in the literature for SSA of CGE models, such as Gaussian Quadrature and Monte Carlo (MC) methods. This paper explores the use of quasi-random Monte Carlo methods based on the Halton and Sobol' sequences as a means of improving efficiency over regular Monte Carlo SSA, thus reducing the computational requirements of the SSA. The findings suggest that by using low-discrepancy sequences, the number of simulations required by regular MC SSA methods can be notably reduced, hence lowering the computational time required for SSA of CGE models.
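
    As an illustrative sketch (not the paper's CGE setup), the comparison below estimates the same expectation with plain pseudo-random draws and with a scrambled Sobol' sequence; the stand-in model function and the use of scipy.stats.qmc are assumptions for the example.

```python
# A minimal sketch: plain Monte Carlo versus a Sobol' low-discrepancy
# sequence for estimating the mean of a model output under parameter
# uncertainty. The "model" is a stand-in for a CGE simulation run.
import numpy as np
from scipy.stats import qmc

def model(params):
    """Stand-in for a model run: any smooth function of the parameters."""
    return np.exp(-np.sum(params**2, axis=1))

rng = np.random.default_rng(0)
dim, n = 4, 1024  # Sobol' sequences are best used with n a power of two

# Plain Monte Carlo draws on the unit hypercube
mc_points = rng.random((n, dim))

# Quasi-random (Sobol') draws; scrambling yields an unbiased randomized estimate
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
qmc_points = sobol.random(n)

print("MC  estimate:", model(mc_points).mean())
print("QMC estimate:", model(qmc_points).mean())
```

    For smooth integrands, the quasi-random estimate typically reaches a given accuracy with far fewer model evaluations, which is the efficiency gain the abstract describes.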

    On the Convergence of the Laplace Approximation and Noise-Level-Robustness of Laplace-based Monte Carlo Methods for Bayesian Inverse Problems

    The Bayesian approach to inverse problems provides a rigorous framework for the incorporation and quantification of uncertainties in measurements, parameters and models. We are interested in designing numerical methods which are robust with respect to the size of the observational noise, i.e., methods which behave well in the case of concentrated posterior measures. The concentration of the posterior is a highly desirable situation in practice, since it relates to informative or large data. However, it can pose a computational challenge for numerical methods based on the prior or reference measure. We propose to employ the Laplace approximation of the posterior as the base measure for numerical integration in this context. The Laplace approximation is a Gaussian measure centered at the maximum a posteriori (MAP) estimate, with covariance matrix depending on the log-posterior density. We discuss convergence results of the Laplace approximation in terms of the Hellinger distance and analyze the efficiency of Monte Carlo methods based on it. In particular, we show that Laplace-based importance sampling and Laplace-based quasi-Monte Carlo methods are robust with respect to the concentration of the posterior for large classes of posterior distributions and integrands, whereas prior-based importance sampling and plain quasi-Monte Carlo are not. Numerical experiments are presented to illustrate the theoretical findings.
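
    A minimal one-dimensional sketch of the idea, with a toy unnormalized posterior assumed for illustration: locate the MAP, form the Gaussian Laplace approximation from the Hessian of the negative log-posterior, and use it as the importance-sampling base measure.

```python
# Laplace-based importance sampling in 1-D (illustrative sketch, not the
# paper's construction): a Gaussian at the MAP, with variance equal to the
# inverse Hessian of the negative log-posterior, serves as the proposal.
import numpy as np
from scipy import optimize, stats

def neg_log_post(x):
    """Toy unnormalized negative log-posterior (assumed for illustration)."""
    return 25.0 * (x - 1.0)**2 + 0.1 * x**4  # concentrated posterior

# MAP estimate and Hessian at the MAP (here by finite differences)
map_x = optimize.minimize_scalar(neg_log_post).x
h = 1e-4
hess = (neg_log_post(map_x + h) - 2 * neg_log_post(map_x)
        + neg_log_post(map_x - h)) / h**2
sigma = 1.0 / np.sqrt(hess)

# Importance sampling with the Laplace approximation as base measure
rng = np.random.default_rng(1)
xs = rng.normal(map_x, sigma, size=10_000)
log_w = -neg_log_post(xs) - stats.norm.logpdf(xs, map_x, sigma)
w = np.exp(log_w - log_w.max())          # stabilized, self-normalized weights
f = xs**2                                # integrand of interest
print("posterior E[x^2] ≈", np.sum(w * f) / np.sum(w))
```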

    A Quantile Variant of the EM Algorithm and Its Applications to Parameter Estimation with Interval Data

    The expectation-maximization (EM) algorithm is a powerful computational technique for finding the maximum likelihood estimates for parametric models when the data are not fully observed. The EM is best suited for situations where the expectation in each E-step and the maximization in each M-step are straightforward. A difficulty with the implementation of the EM algorithm is that each E-step requires the integration of the log-likelihood function in closed form. The explicit integration can be avoided by using what is known as the Monte Carlo EM (MCEM) algorithm. The MCEM uses a random sample to estimate the integral at each E-step. However, the MCEM estimate often converges to the integral quite slowly, and its convergence behavior can be unstable, which causes a computational burden. In this paper, we propose what we refer to as the quantile variant of the EM (QEM) algorithm. We prove that the proposed QEM method has an accuracy of O(1/K^2), while the MCEM method has an accuracy of O_p(1/\sqrt{K}). Thus, the proposed QEM method possesses faster and more stable convergence properties when compared with the MCEM algorithm. The improved performance is illustrated through numerical studies. Several practical examples illustrating its use in interval-censored data problems are also provided.
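
    The contrast between the two E-step approximations can be sketched in one dimension (the conditional distribution and integrand below are assumptions for illustration, not the paper's setup): MCEM averages over random draws, while a quantile-based rule averages over a deterministic grid of quantiles.

```python
# E-step approximation sketch: MCEM uses K random draws from the
# conditional distribution of the latent variable; a quantile-based rule
# evaluates the same integrand at K equally spaced quantiles instead.
import numpy as np
from scipy import stats

cond = stats.norm(loc=2.0, scale=1.5)   # hypothetical conditional dist. of Z
g = lambda z: np.log1p(z**2)            # hypothetical complete-data term

K = 100

# MCEM-style E-step: random sample, O_p(1/sqrt(K)) accuracy
rng = np.random.default_rng(2)
mcem = g(cond.rvs(size=K, random_state=rng)).mean()

# Quantile-based E-step: deterministic quantile grid, O(1/K^2)-type accuracy
probs = (np.arange(1, K + 1) - 0.5) / K
qem = g(cond.ppf(probs)).mean()

print("MCEM E-step:", mcem, "  quantile E-step:", qem)
```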

    Efficient Monte Carlo for high excursions of Gaussian random fields

    Our focus is on the design and analysis of efficient Monte Carlo methods for computing tail probabilities for the suprema of Gaussian random fields, along with conditional expectations of functionals of the fields given the existence of excursions above high levels, b. Naïve Monte Carlo takes an exponential, in b, computational cost to estimate these probabilities and conditional expectations for a prescribed relative accuracy. In contrast, our Monte Carlo procedures achieve, at worst, polynomial complexity in b, assuming only that the mean and covariance functions are Hölder continuous. We also explain how to fine-tune the construction of our procedures in the presence of additional regularity, such as homogeneity and smoothness, in order to further improve the efficiency. Published in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org/); DOI: http://dx.doi.org/10.1214/11-AAP792.
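
    For orientation, the naive Monte Carlo baseline the abstract contrasts against can be sketched as follows (the grid, covariance function, and level are assumptions; the paper's efficient procedures are substantially more involved).

```python
# Naive Monte Carlo baseline: estimate P(sup_t X(t) > b) for a stationary
# Gaussian field on a 1-D grid, simulating via a Cholesky factor of the
# covariance matrix. Its relative error degrades rapidly as b grows.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 100)
cov = np.exp(-np.subtract.outer(t, t)**2 / 0.1)       # assumed squared-exp. covariance
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(t)))  # jitter for stability

b, n = 3.0, 50_000
sup = np.max(L @ rng.standard_normal((len(t), n)), axis=0)
p_hat = np.mean(sup > b)
print(f"P(sup X > {b}) ≈ {p_hat:.2e}")
```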

    Robust and Efficient Uncertainty Quantification and Validation of RFIC Isolation

    Modern communication and identification products impose demanding constraints on the reliability of components. As a result, statistical constraints increasingly enter the optimization formulations of electronic products. Yield constraints often require efficient sampling techniques to obtain uncertainty quantification even at the tails of the distributions. These sampling techniques should outperform standard Monte Carlo techniques, since the latter are normally not efficient enough to deal with tail probabilities. One such technique, Importance Sampling, has successfully been applied to optimize Static Random Access Memories (SRAMs) while guaranteeing very small failure probabilities, even beyond 6-sigma variations of the parameters involved. Apart from this, emerging uncertainty quantification techniques offer expansions of the solution that serve as a response surface facility for statistics and optimization. To efficiently derive the coefficients in these expansions, one either has to solve a large number of problems or one huge combined problem. Here, parameterized Model Order Reduction (MOR) techniques can be used to reduce the workload. To also reduce the number of parameters, we identify those that affect the variance only in a minor way; these parameters can simply be set to a fixed value. The remaining parameters can be viewed as dominant. Preservation of the variation also allows statements to be made about the approximation accuracy obtained by the parameter-reduced problem. This is illustrated on an RLC circuit. Additionally, the MOR technique used should not affect the variance significantly. Finally, we consider a methodology for reliable RFIC isolation using floor-plan modeling and isolation grounding. Simulations show good agreement with measurements.
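
    A minimal mean-shift importance-sampling sketch of the tail-probability idea (a one-dimensional toy with an assumed Gaussian parameter, not the SRAM flow described above):

```python
# Importance sampling for a far-tail failure probability P(X > b) with
# X ~ N(0, 1): sample from a proposal shifted to the failure region,
# N(b, 1), and reweight by the likelihood ratio.
import numpy as np
from scipy import stats

b, n = 6.0, 100_000      # a "6-sigma" failure threshold
rng = np.random.default_rng(4)

# Plain MC would need on the order of 1/P(X > b) ≈ 1e9 samples to see
# even a single failure; the shifted proposal places half its mass there.
x = rng.normal(loc=b, scale=1.0, size=n)             # shifted proposal
w = np.exp(stats.norm.logpdf(x) - stats.norm.logpdf(x, loc=b))
p_is = np.mean((x > b) * w)

print(f"IS estimate: {p_is:.3e}")
print(f"Exact value: {stats.norm.sf(b):.3e}")
```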

    Out-of-the-Money Monte Carlo Simulation Option Pricing: The Joint Use of Importance Sampling and Descriptive Sampling

    As in any Monte Carlo application, simulation option valuation produces imprecise estimates. In such applications, Descriptive Sampling (DS) has proven to be a powerful variance reduction technique. However, this performance deteriorates as the probability of exercising an option decreases. In the case of out-of-the-money options, the solution is to use Importance Sampling (IS). Following this track, the joint use of IS and DS deserves attention. Here, we evaluate and compare the benefits of the standard IS method with those of the joint use of IS and DS. We also investigate the influence of the problem dimensionality on the variance reduction achieved. Although the combination IS+DS showed gains over the standard IS implementation, the benefits in the case of out-of-the-money options were mainly due to the IS effect. On the other hand, the problem dimensionality did not affect the gains. Possible reasons for these results are discussed.
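
    A sketch of how the two techniques can be combined for an out-of-the-money European call under Black-Scholes (the model, parameter values, and drift choice are assumptions for illustration): Descriptive Sampling replaces random uniforms with a shuffled quantile grid, and Importance Sampling shifts the sampling mean toward the strike.

```python
# IS + DS for an out-of-the-money European call (illustrative sketch):
# DS supplies a shuffled grid of quantiles in place of random uniforms;
# IS shifts the normal mean so simulated paths reach the strike region.
import numpy as np
from scipy import stats

S0, K, r, sigma, T, n = 100.0, 160.0, 0.05, 0.2, 1.0, 4096
rng = np.random.default_rng(5)

# Descriptive Sampling: deterministic quantile grid in random order
u = (np.arange(1, n + 1) - 0.5) / n
rng.shuffle(u)

# Importance Sampling: choose the drift so E[ln S_T] equals ln K
mu = (np.log(K / S0) - (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
z = stats.norm.ppf(u) + mu                 # draws from N(mu, 1) via DS
w = np.exp(-mu * z + 0.5 * mu**2)          # likelihood ratio back to N(0, 1)

ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0) * w)
print(f"IS+DS price estimate: {price:.4f}")
```

    In this one-dimensional setting DS removes the sampling variability of the uniforms entirely, so any residual variance comes from the IS weighting, which is consistent with the abstract's finding that the gains for out-of-the-money options are mainly an IS effect.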