Moments of spectral functions: Monte Carlo evaluation and verification
The subject of the present study is the Monte Carlo path-integral evaluation
of the moments of spectral functions. Such moments can be computed by formal
differentiation of certain estimating functionals that are
infinitely-differentiable against time whenever the potential function is
arbitrarily smooth. Here, I demonstrate that the numerical differentiation of
the estimating functionals can be more successfully implemented by means of
pseudospectral methods (e.g., exact differentiation of a Chebyshev polynomial
interpolant), which utilize information from the entire interval. The algorithmic detail that leads to robust numerical
approximations is the fact that the path integral action, and not the actual
estimating functional, is interpolated. Although the resulting approximation to
the estimating functional is non-linear, the derivatives can be computed from
it in a fast and stable way by contour integration in the complex plane, with
the help of the Cauchy integral formula (e.g., by Lyness' method). An
interesting aspect of the present development is that Hamburger's conditions
for a finite sequence of numbers to be a moment sequence provide the necessary
and sufficient criteria for the computed data to be compatible with the
existence of an inversion algorithm. Finally, the issue of appearance of the
sign problem in the computation of moments, albeit in a milder form than for
other quantities, is addressed.
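The derivative-by-contour-integration step can be sketched in a few lines. The function name and parameters below are illustrative, not the paper's implementation: sampling an analytic function on a circle and taking an FFT recovers its Taylor coefficients, and hence its derivatives at the center, which is the essence of Lyness' method.

```python
import math
import numpy as np

def contour_derivatives(f, n_max, radius=0.5, m=64):
    """Derivatives f'(0), ..., f^(n_max)(0) via the Cauchy integral
    formula, discretized by the trapezoidal rule on |z| = radius
    (Lyness' method). f must be analytic on that disk."""
    z = radius * np.exp(2j * np.pi * np.arange(m) / m)
    # The FFT of the samples evaluates all contour sums at once:
    # coeffs[j] ~ c_j * radius**j, where f(z) = sum_j c_j z^j.
    coeffs = np.fft.fft(f(z)) / m
    return np.array([math.factorial(j) * coeffs[j].real / radius**j
                     for j in range(1, n_max + 1)])
```

Because the FFT yields all orders simultaneously, this is both fast and numerically stable compared with finite differences of noisy data.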
Efficient Explicit Time Stepping of High Order Discontinuous Galerkin Schemes for Waves
This work presents algorithms for the efficient implementation of
discontinuous Galerkin methods with explicit time stepping for acoustic wave
propagation on unstructured meshes of quadrilaterals or hexahedra. A crucial
step towards efficiency is to evaluate operators in a matrix-free way with
sum-factorization kernels. The method allows for general curved geometries and
variable coefficients. Temporal discretization is carried out by low-storage
explicit Runge-Kutta schemes and the arbitrary derivative (ADER) method. For
ADER, we propose a flexible basis change approach that combines cheap face
integrals with cell evaluation using collocated nodes and quadrature points.
Additionally, a degree reduction for the optimized cell evaluation is presented
to decrease the computational cost when evaluating higher order spatial
derivatives as required in ADER time stepping. We analyze and compare the
performance of state-of-the-art Runge-Kutta schemes and ADER time stepping with
the proposed optimizations. ADER involves fewer operations and additionally
reaches higher throughput by higher arithmetic intensities and hence decreases
the required computational time significantly. Comparison of Runge-Kutta and
ADER at their respective CFL stability limit renders ADER especially beneficial
for higher orders when the Butcher barrier implies an overproportional amount
of stages. Moreover, vector updates in explicit Runge-Kutta schemes are shown
to take a substantial amount of the computational time due to their memory
intensity.
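As a rough illustration of what a sum-factorization kernel does (a generic tensor-product evaluation, not the authors' optimized code), applying a 1-D operator along each coordinate direction of a hexahedral element replaces one large dense matrix-vector product with three small batched ones, reducing the work from O(p^6) to O(p^4) per element:

```python
import numpy as np

def apply_sum_factorized(A1d, u):
    """Apply the tensor-product operator A x A x A to coefficients u
    of shape (p, p, p) by sum factorization: three batched 1-D
    matrix products instead of one (p^3, p^3) dense product.
    A1d is the (p, p) 1-D operator (illustrative example)."""
    v = np.tensordot(A1d, u, axes=([1], [0]))                     # x-direction
    v = np.tensordot(A1d, v, axes=([1], [1])).transpose(1, 0, 2)  # y-direction
    v = np.tensordot(A1d, v, axes=([1], [2])).transpose(1, 2, 0)  # z-direction
    return v
```

The same pattern covers interpolation to quadrature points, differentiation, and their transposes, which is why it underpins matrix-free operator evaluation.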
The exponentially convergent trapezoidal rule
It is well known that the trapezoidal rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed, and it is shown that, far from being a curiosity, it is linked with computational methods all across scientific computing, including algorithms related to inverse Laplace transforms, special functions, complex analysis, rational approximation, integral equations, and the computation of functions and eigenvalues of matrices and operators.
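A minimal numerical illustration (not from the text itself): for the 2π-periodic analytic integrand 1/(2 + cos x), whose exact integral over [0, 2π] is 2π/√3, the error of the equispaced trapezoidal rule shrinks geometrically as the number of points grows.

```python
import numpy as np

def trapezoid_periodic(f, n):
    """Equispaced trapezoidal rule on [0, 2*pi] for a periodic f;
    the two endpoint half-weights merge, so it is a plain average."""
    x = 2 * np.pi * np.arange(n) / n
    return 2 * np.pi * np.mean(f(x))

exact = 2 * np.pi / np.sqrt(3)   # integral of 1/(2 + cos x) over [0, 2*pi]
errors = [abs(trapezoid_periodic(lambda x: 1 / (2 + np.cos(x)), n) - exact)
          for n in (4, 8, 16)]
# Each doubling of n roughly squares the error: geometric convergence.
```

With smooth non-periodic integrands the same rule is only second-order accurate; periodicity plus analyticity is what produces the exponential rate.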
Solving, Estimating and Selecting Nonlinear Dynamic Economic Models without the Curse of Dimensionality
A welfare analysis of a risky policy is impossible within a linear or linearized model and its certainty equivalence property. The presented algorithms are designed as a toolbox for a general model class. The computational challenges are considerable, and I concentrate on the numerics and statistics for a simple model of dynamic consumption and labor choice. I calculate the optimal policy and estimate the posterior density of structural parameters and the marginal likelihood within a nonlinear state space model. My approach is, even in an interpreted language, twenty times faster than the only alternative compiled approach. The model is estimated on simulated data in order to test the routines against known true parameters. The policy function is approximated by Smolyak Chebyshev polynomials and the rational expectation integral by Smolyak Gaussian quadrature. The Smolyak operator is used to extend univariate approximation and integration operators to many dimensions. It reduces the curse of dimensionality from exponential to polynomial growth. The likelihood integrals are evaluated by a Gaussian quadrature and Gaussian quadrature particle filter. The bootstrap or sequential importance resampling particle filter is used as an accuracy benchmark. The posterior is estimated by the Gaussian filter and a Metropolis-Hastings algorithm. I propose a genetic extension of the standard Metropolis-Hastings algorithm using parallel random walk sequences. This improves robustness to start values and the global maximization properties. Moreover, it simplifies a cluster implementation, and the choice of random walk variances is reduced to only two parameters so that almost no trial sequences are needed.
Finally, the marginal likelihood is calculated as a criterion for nonnested and quasi-true models in order to select between the nonlinear estimates and a first-order perturbation solution combined with the Kalman filter.
Keywords: stochastic dynamic general equilibrium model, Chebyshev polynomials, Smolyak operator, nonlinear state space filter, curse of dimensionality, posterior of structural parameters, marginal likelihood
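The univariate building block of the Smolyak construction, Chebyshev interpolation at Chebyshev nodes, can be sketched as follows. This is an illustrative one-dimensional helper, not the paper's code, and the mapping to a general interval [a, b] is a standard affine change of variables.

```python
import numpy as np

def cheb_fit(f, degree, a=-1.0, b=1.0):
    """Interpolate f at degree+1 Chebyshev nodes on [a, b] and return
    a callable polynomial approximant (univariate ingredient that the
    Smolyak operator extends to many dimensions)."""
    k = np.arange(degree + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))  # on [-1, 1]
    x = 0.5 * (b - a) * (nodes + 1) + a                       # map to [a, b]
    coeffs = np.polynomial.chebyshev.chebfit(nodes, f(x), degree)
    return lambda t: np.polynomial.chebyshev.chebval(
        2 * (t - a) / (b - a) - 1, coeffs)
```

For smooth functions such as a policy function, the interpolation error decays rapidly with the degree, which is what makes sparse (Smolyak) tensorizations of this operator affordable in many dimensions.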
Quantum Monte Carlo for large chemical systems: Implementing efficient strategies for petascale platforms and beyond
Various strategies to efficiently implement QMC simulations for large
chemical systems are presented. These include: i.) the introduction of an
efficient algorithm to calculate the computationally expensive Slater matrices.
This novel scheme is based on the use of the highly localized character of
atomic Gaussian basis functions (not the molecular orbitals as usually done),
ii.) the possibility of keeping the memory footprint minimal, iii.) the
important enhancement of single-core performance when efficient optimization
tools are employed, and iv.) the definition of a universal, dynamic,
fault-tolerant, and load-balanced computational framework adapted to all kinds
of computational platforms (massively parallel machines, clusters, or
distributed grids). These strategies have been implemented in the QMC=Chem code
developed at Toulouse and illustrated with numerical applications on small
peptides of increasing sizes (158, 434, 1056 and 1731 electrons). Using 10k-80k
computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been
shown to be capable of running at the petascale level, thus demonstrating that
for this machine a large part of the peak performance can be achieved.
Implementation of large-scale QMC simulations for future exascale platforms
with a comparable level of efficiency is expected to be feasible.
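The locality argument in point i.) can be illustrated with a toy single-electron sketch (names and cutoff are illustrative, not the QMC=Chem production algorithm): because an atomic Gaussian basis function decays as exp(-alpha |r - R|^2), contributions from distant centers fall below any fixed threshold and can be skipped when building a Slater-matrix column.

```python
import numpy as np

def sparse_ao_values(centers, alphas, r, cutoff=1e-8):
    """Evaluate s-type Gaussians exp(-alpha * |r - R|^2) at electron
    position r, screening out values below `cutoff`. Toy sketch of
    the locality that keeps Slater-matrix updates cheap and sparse."""
    d2 = np.sum((centers - r) ** 2, axis=1)   # squared distances to centers
    vals = np.exp(-alphas * d2)
    mask = vals >= cutoff                     # which functions survive
    return mask, np.where(mask, vals, 0.0)
```

In a large molecule the surviving fraction per electron is roughly constant, so the cost of the screened evaluation grows linearly rather than quadratically with system size.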
Comparison of Current Gravity Estimation and Determination Models
This paper will discuss the history of gravity estimation and determination models while analyzing methods that are in development. Some fundamental methods for calculating the gravity field include spherical harmonics solutions, local weighted interpolation, and global point mascon modeling (PMC). Recently, high accuracy measurements have become more accessible, and the requirements for high order geopotential modeling have become more stringent. Interest in irregular bodies, accurate models of the hydrological system, and on-board processing has demanded a comprehensive model that can quickly and accurately compute the geopotential with low memory costs. This trade study of current geopotential modeling techniques will reveal that each modeling technique has a unique use case. It is notable that the spherical harmonics model is relatively accurate but poses a cumbersome inversion problem. PMC and interpolation models, on the other hand, are computationally efficient but require more research to become robust models with high levels of accuracy. Considerations of the trade study will suggest further research for the point mascon model. The PMC model should be improved through mascon refinement, direct solutions that stem from geodetic measurements, and further validation of the gravity gradient. Finally, the potential for each model to be implemented with parallel computation will be shown to lead to large improvements in computing time while reducing the memory cost for each technique.
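At its core, a point mascon (PMC) geopotential evaluation is a direct Newtonian sum over point masses, which is why it parallelizes so naturally. A toy sketch, illustrative only and not any mission code:

```python
import numpy as np

def pmc_acceleration(r, masses, positions, G=6.674e-11):
    """Gravitational acceleration at position r (m) from point mascons:
    a = -G * sum_i m_i (r - r_i) / |r - r_i|^3. Each term is
    independent, so the sum maps directly onto parallel hardware."""
    d = r - positions                              # (N, 3) offsets
    dist3 = np.linalg.norm(d, axis=1) ** 3
    return -G * np.sum(masses[:, None] * d / dist3[:, None], axis=0)
```

A single mascon reproduces the familiar GM/R^2 point-mass field; real models trade the number and placement of mascons against accuracy and memory.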