
    Variance Analysis for Monte Carlo Integration: A Representation-Theoretic Perspective

    In this report, we revisit the work of Pilleboue et al. [2015], providing a representation-theoretic derivation of the closed-form expression for the expected value and variance in homogeneous Monte Carlo integration. We show that the results obtained for the variance estimation of Monte Carlo integration on the torus, the sphere, and Euclidean space can be formulated as specific instances of a more general theory. We review the related representation theory and show how it can be used to derive a closed-form solution
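
    As a concrete reference point for the quantity being analyzed, the short Python sketch below (illustrative only; the integrand and sample sizes are made up, and nothing here reproduces the representation-theoretic derivation) checks empirically that the variance of a plain Monte Carlo estimator of an integral over [0, 1) decays like Var[f]/N.

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.sin(2 * np.pi * x) ** 2     # example integrand; Var[f(U)] = 3/8 - 1/4 = 1/8

    def mc_estimate(n):
        # One plain Monte Carlo estimate of the integral of f over [0, 1) from n uniform samples.
        return f(rng.uniform(0.0, 1.0, n)).mean()

    for n in (10, 100, 1000):
        estimates = np.array([mc_estimate(n) for _ in range(2000)])
        # For i.i.d. uniform sampling the estimator variance is Var[f]/n.
        print(n, round(estimates.var(), 6), "theory:", round(0.125 / n, 6))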

    Variance analysis for Monte Carlo integration


    Monte Carlo Complexity of Parametric Integration

    The Monte Carlo complexity of computing integrals depending on a parameter is analyzed for smooth integrands. An optimal algorithm is developed on the basis of a multigrid variance reduction technique. The complexity analysis implies that our algorithm attains a higher convergence rate than any deterministic algorithm. Moreover, because of savings due to computation on multiple grids, this rate is also higher than that of previously developed Monte Carlo algorithms for parametric integration
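
    The sketch below is a toy two-level illustration of the multigrid idea described above, under assumptions of my own choosing (a smooth made-up integrand f(x, t) = exp(-t x), arbitrary grid sizes and sample counts); it is not the paper's algorithm or its complexity-optimal sample allocation. The coarse parameter grid gets many samples, and a cheap fine-grid correction exploits smoothness in the parameter.

    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x, t: np.exp(-t * x)              # smooth made-up integrand; I(t) = (1 - exp(-t)) / t

    coarse_t = np.linspace(0.0, 1.0, 5)          # coarse parameter grid
    fine_t = np.linspace(0.0, 1.0, 33)           # fine parameter grid

    # Level 0: well-resolved Monte Carlo estimates of I(t) on the coarse grid.
    x0 = rng.uniform(0.0, 1.0, 20000)
    I_coarse = f(x0[:, None], coarse_t[None, :]).mean(axis=0)

    # Level 1: cheap correction on the fine grid. The corrected integrand
    # f(x, t) minus its interpolation over the coarse t-grid is small for smooth f,
    # so far fewer samples are needed at this level.
    x1 = rng.uniform(0.0, 1.0, 500)
    vals_fine = f(x1[:, None], fine_t[None, :])
    vals_coarse = f(x1[:, None], coarse_t[None, :])
    interp = np.array([np.interp(fine_t, coarse_t, row) for row in vals_coarse])
    correction = (vals_fine - interp).mean(axis=0)

    I_fine = np.interp(fine_t, coarse_t, I_coarse) + correction

    exact = np.ones_like(fine_t)                 # I(0) = 1 as the t -> 0 limit
    nz = fine_t > 0
    exact[nz] = (1.0 - np.exp(-fine_t[nz])) / fine_t[nz]
    print("max abs error:", np.abs(I_fine - exact).max())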

    Pareto Smoothed Importance Sampling

    Importance weighting is a general way to adjust Monte Carlo integration to account for draws from the wrong distribution, but the resulting estimate can be noisy when the importance ratios have a heavy right tail. This routinely occurs when there are aspects of the target distribution that are not well captured by the approximating distribution, in which case more stable estimates can be obtained by modifying extreme importance ratios. We present a new method for stabilizing importance weights using a generalized Pareto distribution fit to the upper tail of the distribution of the simulated importance ratios. The method, which empirically performs better than existing methods for stabilizing importance sampling estimates, includes stabilized effective sample size estimates, Monte Carlo error estimates and convergence diagnostics.
    Comment: Major revision: 1) proofs for consistency, finite variance, and asymptotic normality, 2) justification of k<0.7 with theoretical computational complexity analysis, 3) major rewrite
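
    A simplified sketch of the idea (a generalized Pareto fit to the upper tail of the importance ratios, with the largest ratios replaced by quantiles of the fitted tail) is given below. The tail-size rule, the shape-estimate regularization, and the diagnostics of the actual method are omitted, so treat this as illustrative rather than as the authors' implementation.

    import numpy as np
    from scipy import stats

    def pareto_smooth(ratios, tail_frac=0.2):
        """Return smoothed importance ratios and the fitted GPD shape k-hat (illustrative)."""
        r = np.asarray(ratios, dtype=float)
        order = np.argsort(r)
        m = max(int(tail_frac * r.size), 5)      # ad hoc tail size, not the paper's rule
        tail_idx = order[-m:]                    # indices of the m largest ratios
        cutoff = r[order[-m - 1]]                # tail threshold
        # Fit a generalized Pareto distribution to the exceedances over the threshold.
        k, _, sigma = stats.genpareto.fit(r[tail_idx] - cutoff, floc=0.0)
        # Replace the tail ratios by GPD quantiles at equally spaced probabilities.
        probs = (np.arange(1, m + 1) - 0.5) / m
        smoothed = r.copy()
        smoothed[tail_idx] = cutoff + stats.genpareto.ppf(probs, k, loc=0.0, scale=sigma)
        return smoothed, k

    rng = np.random.default_rng(2)
    raw = rng.pareto(1.5, size=4000) + 1.0       # heavy right-tailed ratios
    smoothed, k_hat = pareto_smooth(raw)
    # A shape estimate near or above 0.7 flags an unreliable importance sampling estimate.
    print("k-hat:", round(float(k_hat), 2), "max raw:", round(float(raw.max()), 1),
          "max smoothed:", round(float(smoothed.max()), 1))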

    The marginal likelihood of Structural Time Series Models, with application to the euro area and US NAIRU

    We propose a simple procedure for evaluating the marginal likelihood in univariate Structural Time Series (STS) models. For this we exploit the statistical properties of STS models and the results in Dickey (1968) to obtain the likelihood function marginalized over the variance parameters. This strategy applies under normal-inverted gamma-2 prior distributions for the structural shocks and associated variances. For trend plus noise models such as the local level and the local linear trend, it yields the marginal likelihood by simple or double integration over the (0,1)-support. For trend plus cycle models, we show that marginalizing out the variance parameters greatly improves the accuracy of the Laplace method. We apply this methodology to the analysis of the US and euro area NAIRU.
    Keywords: marginal likelihood, Markov Chain Monte Carlo, unobserved components, bridge sampling, Laplace method, NAIRU
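
    The snippet below only illustrates the final one-dimensional integration over a (0,1)-supported parameter for a local level model, using a Kalman-filter likelihood and a flat prior on the transformed signal-to-noise ratio; the priors, the analytic marginalization of the variance parameters, and the Dickey (1968) results used in the paper are not reproduced, and all numerical choices are made up.

    import numpy as np
    from scipy.integrate import quad

    def loglik_local_level(y, q):
        # Kalman filter log-likelihood of y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t,
        # with Var(eps) = 1 and Var(eta) = q, and a crude diffuse start for mu_0.
        a, p = y[0], 1e7
        ll = 0.0
        for t in range(1, len(y)):
            p = p + q                            # predict Var(mu_t | y_{1:t-1})
            f = p + 1.0                          # prediction-error variance
            v = y[t] - a                         # prediction error
            ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
            k = p / f                            # Kalman gain
            a, p = a + k * v, p * (1.0 - k)      # measurement update
        return ll

    rng = np.random.default_rng(3)
    mu = np.cumsum(rng.normal(0.0, 0.5, 200))    # simulated local level
    y = mu + rng.normal(0.0, 1.0, 200)

    ll_ref = loglik_local_level(y, 0.25)         # reference value to avoid underflow in exp()

    def integrand(s):
        q = s / (1.0 - s)                        # map the (0,1) parameter back to a variance ratio
        return np.exp(loglik_local_level(y, q) - ll_ref)

    # Flat prior on s in (0,1); the upper limit is nudged below 1 to avoid dividing by zero.
    val, _ = quad(integrand, 0.0, 1.0 - 1e-9, limit=200)
    print("log marginal likelihood:", ll_ref + np.log(val))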

    Long Memory Modelling of Inflation with Stochastic Variance and Structural Breaks

    We investigate changes in the time series characteristics of postwar U.S. inflation. In a model-based analysis the conditional mean of inflation is specified by a long memory autoregressive fractionally integrated moving average process and the conditional variance is modelled by a stochastic volatility process. We develop a Monte Carlo maximum likelihood method to obtain efficient estimates of the parameters using a monthly data-set of core inflation for which we consider different subsamples of varying size. Based on the new modelling framework and the associated estimation technique, we find remarkable changes in the variance, in the order of integration, in the short memory characteristics and in the volatility of volatility
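
    The short simulation below is only meant to show the model class being estimated, an ARFIMA(0, d, 0) signal whose innovations follow a standard stochastic-volatility process; the parameter values are arbitrary and the paper's Monte Carlo maximum likelihood estimator is not implemented here.

    import numpy as np

    rng = np.random.default_rng(4)
    n, d = 1000, 0.35                            # sample size and fractional integration order
    phi, mu_h, sig_h = 0.95, -1.0, 0.2           # stochastic volatility: AR(1) for the log-variance

    # Stochastic-volatility innovations eps_t = exp(h_t / 2) * z_t.
    h = np.empty(n)
    h[0] = mu_h
    for t in range(1, n):
        h[t] = mu_h + phi * (h[t - 1] - mu_h) + sig_h * rng.normal()
    eps = np.exp(h / 2.0) * rng.normal(size=n)

    # Fractional integration y_t = (1 - L)^(-d) eps_t via the truncated MA(inf) weights
    # psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j.
    psi = np.empty(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    y = np.array([psi[:t + 1][::-1] @ eps[:t + 1] for t in range(n)])

    # Long memory shows up as slowly decaying sample autocorrelations.
    acf = [np.corrcoef(y[:-k], y[k:])[0, 1] for k in (1, 10, 50)]
    print("sample ACF at lags 1, 10, 50:", np.round(acf, 2))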

    Heteroskedastic Factor Vector Autoregressive Estimation of Persistent and Non Persistent Processes Subject to Structural Breaks

    In this paper the fractionally integrated heteroskedastic factor vector autoregressive (FI-HF-VAR) model is introduced. The proposed approach is characterized by minimal pretesting requirements and simplicity of implementation, also in very large systems, and performs well independently of the integration properties and sources of persistence, i.e. deterministic or stochastic. It accounts for common features of different kinds, i.e. common integrated (of the fractional or integer type) or non-integrated stochastic factors, also featuring conditional heteroskedasticity, and common deterministic break processes. The proposed approach allows for accurate investigation of economic time series, from persistence and copersistence analysis to impulse responses and forecast error variance decomposition. Monte Carlo results strongly support the proposed methodology.
    Key words: long and short memory, structural breaks, fractionally integrated heteroskedastic factor vector autoregressive model.
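
    The FI-HF-VAR model itself is too large for a short snippet, but two of the outputs mentioned, orthogonalized impulse responses and the forecast error variance decomposition (FEVD), can be illustrated on a plain bivariate VAR(1) with made-up coefficients, as below.

    import numpy as np

    A = np.array([[0.5, 0.1],                    # VAR(1): y_t = A y_{t-1} + u_t
                  [0.2, 0.7]])
    Sigma = np.array([[1.0, 0.3],                # Cov(u_t)
                      [0.3, 0.5]])
    P = np.linalg.cholesky(Sigma)                # orthogonalize the shocks (Cholesky ordering)

    H = 10                                       # forecast horizon
    theta = np.empty((H, 2, 2))                  # theta[h] = A^h @ P: orthogonalized impulse responses
    theta[0] = P
    for h in range(1, H):
        theta[h] = A @ theta[h - 1]

    # FEVD: share of the H-step forecast error variance of variable i
    # attributable to orthogonalized shock j.
    contrib = (theta ** 2).sum(axis=0)           # contrib[i, j] = sum_h theta[h, i, j]^2
    fevd = contrib / contrib.sum(axis=1, keepdims=True)
    print(np.round(fevd, 3))                     # each row sums to one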