
    A Variational Assimilation Method for Satellite and Conventional Data: Development of Basic Model for Diagnosis of Cyclone Systems

    A summary is presented of the progress toward the completion of a comprehensive diagnostic objective analysis system based upon the calculus of variations. The approach was first to develop the objective analysis subject to the constraint that the final product satisfy the five basic primitive equations for a dry inviscid atmosphere: the two nonlinear horizontal momentum equations, the continuity equation, the hydrostatic equation, and the thermodynamic equation. Then, having derived the basic model, the equations for moist atmospheric processes and the radiative transfer equation would be added to it.
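    In such a variational framework, the analysis fields are chosen to minimize a weighted misfit to the observations while the dynamical equations are enforced through Lagrange multipliers. A minimal sketch of the kind of constrained functional involved, in illustrative notation rather than the report's own, is:

```latex
J[\mathbf{x}] \;=\; \int_{D} \Big[ (\mathbf{x}-\mathbf{x}_{o})^{\mathsf{T}} \mathbf{W}\,(\mathbf{x}-\mathbf{x}_{o})
\;+\; \sum_{i=1}^{5} \lambda_{i}\, G_{i}(\mathbf{x}) \Big]\, dD ,
\qquad G_{i}(\mathbf{x}) = 0 ,
```

    where $\mathbf{x}$ collects the analyzed fields, $\mathbf{x}_{o}$ the observed values, $\mathbf{W}$ a weight matrix, and $G_{1},\dots,G_{5}$ the five primitive-equation constraints listed above; setting the first variation $\delta J = 0$ yields the Euler-Lagrange equations that the analysis must satisfy.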

    Solving, Estimating and Selecting Nonlinear Dynamic Economic Models without the Curse of Dimensionality

    A welfare analysis of a risky policy is impossible within a linear or linearized model and its certainty equivalence property. The presented algorithms are designed as a toolbox for a general model class. The computational challenges are considerable, and I concentrate on the numerics and statistics for a simple model of dynamic consumption and labor choice. I calculate the optimal policy and estimate the posterior density of structural parameters and the marginal likelihood within a nonlinear state space model. My approach is, even in an interpreted language, twenty times faster than the only alternative compiled approach. The model is estimated on simulated data in order to test the routines against known true parameters. The policy function is approximated by Smolyak Chebyshev polynomials and the rational expectation integral by Smolyak Gaussian quadrature. The Smolyak operator is used to extend univariate approximation and integration operators to many dimensions; it reduces the curse of dimensionality from exponential to polynomial growth. The likelihood integrals are evaluated by Gaussian quadrature and by a Gaussian quadrature particle filter. The bootstrap, or sequential importance resampling, particle filter is used as an accuracy benchmark. The posterior is estimated by the Gaussian filter and a Metropolis-Hastings algorithm. I propose a genetic extension of the standard Metropolis-Hastings algorithm by parallel random walk sequences. This improves robustness to starting values and the global maximization properties. Moreover, it simplifies a cluster implementation, and the choice of random-walk variances is reduced to only two parameters, so that almost no trial sequences are needed. Finally, the marginal likelihood is calculated as a criterion for nonnested and quasi-true models in order to select between the nonlinear estimates and a first-order perturbation solution combined with the Kalman filter.

    Keywords: stochastic dynamic general equilibrium model, Chebyshev polynomials, Smolyak operator, nonlinear state space filter, curse of dimensionality, posterior of structural parameters, marginal likelihood
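    As a point of reference for the sampler, here is a minimal sketch of plain parallel random-walk Metropolis-Hastings chains; it omits the genetic exchange step the paper proposes, and the names (log_post, step) are illustrative:

```python
import numpy as np

def parallel_rw_metropolis(log_post, x0, n_iter=5000, step=0.1, seed=0):
    """Run several independent random-walk Metropolis chains side by side.

    log_post : callable mapping a parameter vector to its log posterior
    x0       : (n_chains, dim) array of starting points
    step     : random-walk proposal standard deviation
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n_chains, dim = x.shape
    lp = np.array([log_post(xi) for xi in x])
    samples = np.empty((n_iter, n_chains, dim))
    for t in range(n_iter):
        # Gaussian random-walk proposal for every chain at once.
        prop = x + step * rng.standard_normal((n_chains, dim))
        lp_prop = np.array([log_post(p) for p in prop])
        # Accept with probability min(1, posterior ratio), chain by chain.
        accept = np.log(rng.random(n_chains)) < lp_prop - lp
        x[accept] = prop[accept]
        lp[accept] = lp_prop[accept]
        samples[t] = x
    return samples

# Example: sample a 2-D standard normal with 4 chains.
draws = parallel_rw_metropolis(lambda z: -0.5 * z @ z,
                               x0=np.zeros((4, 2)), n_iter=2000)
```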

    Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require $\mathcal{O}(p^{2d})$ storage and $\mathcal{O}(p^{3d})$ computational work, where $p$ is the degree of basis polynomials used, and $d$ is the spatial dimension. Our SVD-based tensor-product preconditioner requires $\mathcal{O}(p^{d+1})$ storage, $\mathcal{O}(p^{d+1})$ work in two spatial dimensions, and $\mathcal{O}(p^{d+2})$ work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in $p$ per degree of freedom in 2D, and reduce the computational complexity from $\mathcal{O}(p^9)$ to $\mathcal{O}(p^5)$ in 3D. Numerical results are shown in 2D and 3D for the advection and Euler equations, using polynomials of degree up to $p=15$. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees $p$.

    Comment: 40 pages, 15 figures
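    A minimal sketch of the algebraic core, the nearest Kronecker-product approximation of a matrix via an SVD of its Van Loan-Pitsianis rearrangement; this is a generic construction under our own naming, not the paper's full preconditioner:

```python
import numpy as np

def nearest_kronecker(A, m1, n1, m2, n2):
    """Best rank-1 Kronecker approximation A ~ B kron C in Frobenius norm.

    A is (m1*m2, n1*n2); B is (m1, n1); C is (m2, n2).
    Van Loan-Pitsianis: each (m2, n2) block of A is flattened into a row
    of R, and the leading SVD pair of R yields vec(B) and vec(C).
    """
    R = (A.reshape(m1, m2, n1, n2)
          .transpose(0, 2, 1, 3)
          .reshape(m1 * n1, m2 * n2))
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    C = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return B, C

# Example: recover the factors of an exact Kronecker product.
B0, C0 = np.random.rand(4, 4), np.random.rand(6, 6)
B, C = nearest_kronecker(np.kron(B0, C0), 4, 4, 6, 6)
assert np.allclose(np.kron(B, C), np.kron(B0, C0))
```

    For an exact Kronecker product the rearranged matrix R has rank one, so the approximation is exact; for a general block, truncating the SVD after a few terms gives a sum of Kronecker products that can be applied and inverted dimension by dimension.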

    Ensemble Transport Adaptive Importance Sampling

    Markov chain Monte Carlo methods are a powerful and commonly used family of numerical methods for sampling from complex probability distributions. As applications of these methods increase in size and complexity, the need for efficient methods increases. In this paper, we present a particle ensemble algorithm. At each iteration, an importance sampling proposal distribution is formed using an ensemble of particles. A stratified sample is taken from this distribution and weighted under the posterior; a state-of-the-art ensemble transport resampling method is then used to create an evenly weighted sample ready for the next iteration. We demonstrate that this ensemble transport adaptive importance sampling (ETAIS) method outperforms MCMC methods with equivalent proposal distributions for low dimensional problems, and in fact shows better-than-linear improvements in convergence rates with respect to the number of ensemble members. We also introduce a new resampling strategy, multinomial transformation (MT), which, while not as accurate as the ensemble transport resampler, is substantially less costly for large ensemble sizes, and can then be used in conjunction with ETAIS for complex problems. We also focus on how algorithmic parameters regarding the mixture proposal can be quickly tuned to optimise performance. In particular, we demonstrate this methodology's superior sampling for multimodal problems, such as those arising from inference for mixture models, and for problems with expensive likelihoods requiring the solution of a differential equation, for which speed-ups of orders of magnitude are demonstrated. Likelihood evaluations of the ensemble could be computed in a distributed manner, suggesting that this methodology is a good candidate for parallel Bayesian computations.
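    A minimal sketch of one such adaptive iteration, assuming an unnormalized log target log_target and substituting simple multinomial resampling for the ensemble transport resampler (all names illustrative):

```python
import numpy as np

def etais_step(log_target, ensemble, h=0.5, rng=None):
    """One adaptive-importance-sampling iteration with an ensemble proposal.

    Proposal: equal-weight Gaussian mixture centred on the current
    ensemble with bandwidth h. Resampling: multinomial, a simple
    stand-in for the ensemble transport resampler.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, dim = ensemble.shape
    # One proposal draw per mixture component (stratified over components).
    prop = ensemble + h * rng.standard_normal((n, dim))
    # Log-density of the equal-weight Gaussian mixture at each proposal.
    d2 = ((prop[:, None, :] - ensemble[None, :, :]) ** 2).sum(axis=-1)
    log_q = (np.logaddexp.reduce(-0.5 * d2 / h**2, axis=1)
             - np.log(n) - 0.5 * dim * np.log(2 * np.pi * h**2))
    # Importance weights under the target, then multinomial resampling.
    log_w = np.array([log_target(p) for p in prop]) - log_q
    w = np.exp(log_w - log_w.max())
    idx = rng.choice(n, size=n, p=w / w.sum())
    return prop[idx]

# Example: pull a dispersed ensemble toward a 2-D standard normal.
rng = np.random.default_rng(1)
x = 3.0 * rng.standard_normal((100, 2))
for _ in range(50):
    x = etais_step(lambda z: -0.5 * z @ z, x, rng=rng)
```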

    Multivariate Shortfall Risk Allocation and Systemic Risk

    The ongoing concern about systemic risk since the onset of the global financial crisis has highlighted the need for risk measures at the level of sets of interconnected financial components, such as portfolios, institutions or members of clearing houses. The two main issues in systemic risk measurement are the computation of an overall reserve level and its allocation to the different components according to their systemic relevance. We develop here a pragmatic approach to systemic risk measurement and allocation based on multivariate shortfall risk measures, where acceptable allocations are first computed and then aggregated so as to minimize costs. We analyze the sensitivity of the risk allocations to various factors and highlight their relevance as an indicator of systemic risk. In particular, we study the interplay between the loss function and the dependence structure of the components. Moreover, we address the computational aspects of risk allocation. Finally, we apply this methodology to the allocation of the default fund of a central counterparty (CCP) on real data.

    Comment: Code, results and figures can also be consulted at https://github.com/yarmenti/MSR
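    A minimal numerical sketch of a shortfall-type allocation: the cheapest cash vector m that makes sampled component losses acceptable under a convex loss function. The loss function, acceptance level, and all names here are illustrative assumptions, not the paper's specification:

```python
import numpy as np
from scipy.optimize import minimize

def shortfall_allocation(losses, level=0.0):
    """Cheapest allocation m with E[loss_fn(X - m)] <= level.

    losses : (n_scenarios, n_components) sampled component losses X
    loss_fn is an illustrative convex choice: squared positive parts
    plus a cross term coupling the components.
    """
    def loss_fn(y):
        yp = np.maximum(y, 0.0)
        return (yp ** 2).sum(axis=1) + yp.prod(axis=1)

    cons = {"type": "ineq",
            "fun": lambda m: level - loss_fn(losses - m).mean()}
    m0 = losses.mean(axis=0)  # start from the expected losses
    res = minimize(lambda m: m.sum(), m0, constraints=cons)
    return res.x  # per-component reserves; res.x.sum() is the overall level

# Example: two correlated loss components.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=10000)
m = shortfall_allocation(z)
```

    Under the cross term, raising the dependence between components raises the reserve assigned to each, which is the kind of sensitivity to the dependence structure the abstract describes.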