
    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages; 1 of the USQCD whitepapers

    Monte Carlo domain decomposition for robust nuclear reactor analysis

    Monte Carlo (MC) neutral particle transport codes are considered the gold standard for nuclear simulations, but they cannot be robustly applied to high-fidelity nuclear reactor analysis without accommodating several terabytes of materials and tally data. While this is not a large amount of aggregate data for a typical high performance computer, MC methods are only embarrassingly parallel when the key data structures are replicated for each processing element, an approach which is likely infeasible on future machines. The present work explores the use of spatial domain decomposition to make full-scale nuclear reactor simulations tractable with Monte Carlo methods, presenting a simple implementation in a production-scale code. Good performance is achieved for mesh tallies of up to 2.39 TB distributed across 512 compute nodes while running a full-core reactor benchmark on the Mira Blue Gene/Q supercomputer at Argonne National Laboratory. In addition, the effects of load imbalances are explored with an updated performance model that is empirically validated against observed timing results. Several load balancing techniques are also implemented to demonstrate that imbalances can be largely mitigated, including a new and efficient way to distribute extra compute resources across finer domain meshes. (Sponsor: United States. Dept. of Energy. Center for Exascale Simulation of Advanced Reactor)
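    The memory argument above can be made concrete with a minimal sketch of spatially domain-decomposed tallies, assuming a block partition of a 1-D tally mesh across ranks (all names here are illustrative, not from the production code):

```python
# Toy sketch of spatially domain-decomposed tallies: each "rank" stores
# only its own slab of the tally mesh, so aggregate storage covers the
# mesh once instead of once per rank. Hypothetical names throughout.
import math

class DomainTally:
    """Tally storage for the contiguous slab of cells one rank owns."""
    def __init__(self, rank, n_ranks, n_cells):
        per = math.ceil(n_cells / n_ranks)
        self.lo = rank * per
        self.hi = min(n_cells, self.lo + per)
        self.scores = [0.0] * (self.hi - self.lo)  # local storage only

    def owns(self, cell):
        return self.lo <= cell < self.hi

    def score(self, cell, value):
        # Particles are tracked on the rank that owns the local mesh data.
        assert self.owns(cell)
        self.scores[cell - self.lo] += value

def owner(cell, n_ranks, n_cells):
    """Map a global cell index to the rank that tallies it."""
    per = math.ceil(n_cells / n_ranks)
    return cell // per

# Emulate 4 ranks tallying a 10-cell mesh in a single process.
n_cells, n_ranks = 10, 4
ranks = [DomainTally(r, n_ranks, n_cells) for r in range(n_ranks)]
for cell, val in [(0, 1.0), (3, 2.0), (7, 0.5)]:
    ranks[owner(cell, n_ranks, n_cells)].score(cell, val)
total_cells_stored = sum(len(r.scores) for r in ranks)
```

    With replication, storage would scale as (cells x ranks); with the decomposition it stays at (cells), which is what makes terabyte-scale tally data tractable.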

    Pricing options and computing implied volatilities using neural networks

    This paper proposes a data-driven approach, by means of an Artificial Neural Network (ANN), to value financial options and to calculate implied volatilities with the aim of accelerating the corresponding numerical methods. With ANNs being universal function approximators, this method trains an optimized ANN on a data set generated by a sophisticated financial model, and runs the trained ANN as an agent of the original solver in a fast and efficient way. We test this approach on three different types of solvers, including the analytic solution of the Black-Scholes equation, the COS method for the Heston stochastic volatility model, and Brent's iterative root-finding method for the calculation of implied volatilities. The numerical results show that the ANN solver can reduce the computing time significantly.
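    The first and third solvers named above are simple enough to sketch: the Black-Scholes call price that would generate the ANN's training set, and an implied-volatility inversion (the paper uses Brent's method; plain bisection is shown here for brevity, and all parameter values are illustrative):

```python
# Black-Scholes call pricing plus implied-volatility root finding --
# the kind of solver pair an ANN surrogate would be trained to replace.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert bs_call in sigma by bisection (Brent's method in the paper)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Price an at-the-money call, then recover its volatility.
price = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
sigma = implied_vol(price, 100.0, 100.0, 1.0, 0.05)
```

    The ANN is trained on many such (parameters, price) pairs so that a forward pass replaces the iterative inversion at inference time.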

    Physics and Computer Architecture Informed Improvements to the Implicit Monte Carlo Method

    The Implicit Monte Carlo (IMC) method has been a standard method for thermal radiative transfer for the past 40 years. In this time, the hydrodynamics methods that are coupled to IMC have evolved and improved, as have the supercomputers used to run large simulations with IMC. Several modern hydrodynamics methods use unstructured non-orthogonal meshes and high-order spatial discretizations. The IMC method has been used primarily with simple Cartesian meshes and has always used a first-order spatial discretization. Supercomputers are now made up of compute nodes that have a large number of cores. Current IMC parallel methods have significant problems with load imbalance. To utilize many-core systems, algorithms must move beyond simple spatial decomposition parallel algorithms. To make IMC better suited for large scale multiphysics simulations in high energy density physics, new spatial discretizations and parallel strategies are needed. Several modifications are made to the IMC method to facilitate running on node-centered, unstructured tetrahedral meshes. These modifications produce results that converge to the expected solution under mesh refinement. A new finite element IMC method is also explored on these meshes, which offers a simulation runtime benefit but does not perform correctly in the diffusion limit. A parallel algorithm that utilizes on-node parallelism and respects memory hierarchies is studied. This method scales almost linearly when using physical cores on a node and benefits from multiple threads per core. A multi-compute-node algorithm for domain decomposed IMC that passes mesh data instead of particles is explored as a means to solve load balance issues. This method scales better than the particle passing method on highly scattering problems with short time steps.
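    The mesh-passing versus particle-passing trade-off can be illustrated with a toy single-process model (everything here is a hypothetical sketch, not the IMC code itself): on a scattering-dominated history that recrosses a domain boundary, remote mesh cells need to be fetched only once, while the particle itself would be handed off on every crossing.

```python
# Contrast "pass the mesh data to the particle" with "pass the particle
# to the mesh data" for a history that recrosses a domain boundary.
# Global mesh: cell index -> opacity; cells 0-3 on domain 0, 4-7 on 1.
mesh = {c: 0.1 * (c + 1) for c in range(8)}

def domain_of(cell):
    return 0 if cell < 4 else 1

def track_with_mesh_passing(path, my_domain, cache):
    """Track locally, fetching each remote cell's data at most once."""
    fetches = 0
    for cell in path:
        if domain_of(cell) != my_domain and cell not in cache:
            cache[cell] = mesh[cell]   # one-time "communication"
            fetches += 1
        _ = cache.get(cell, mesh[cell])  # use the opacity for this step
    return fetches

def track_with_particle_passing(path, my_domain):
    """Count how often the particle itself would be handed off."""
    sends, here = 0, my_domain
    for cell in path:
        if domain_of(cell) != here:
            sends += 1
            here = domain_of(cell)
    return sends

# A scattering-dominated history bouncing across the boundary:
path = [3, 4, 3, 4, 3, 4, 5]
cache = {}
fetches = track_with_mesh_passing(path, 0, cache)
sends = track_with_particle_passing(path, 0)
```

    For such histories the mesh-passing count saturates at the number of distinct remote cells touched, which is why it scales better on highly scattering problems with short time steps.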

    Extending fragment-based free energy calculations with library Monte Carlo simulation: Annealing in interaction space

    Pre-calculated libraries of molecular fragment configurations have previously been used as a basis for both equilibrium sampling (via "library-based Monte Carlo") and for obtaining absolute free energies using a polymer-growth formalism. Here, we combine the two approaches to extend the size of systems for which free energies can be calculated. We study a series of all-atom poly-alanine systems in a simple dielectric "solvent" and find that precise free energies can be obtained rapidly. For instance, for 12 residues, less than an hour of single-processor time is required. The combined approach is formally equivalent to the "annealed importance sampling" algorithm; instead of annealing by decreasing temperature, however, interactions among fragments are gradually added as the molecule is "grown." We discuss implications for future binding affinity calculations in which a ligand is grown into a binding site.
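    The annealing-in-interaction-space idea can be sketched on a toy 1-D system where the answer is known analytically (this is generic annealed importance sampling, not the paper's fragment-library method; the quadratic "interaction" and schedule are illustrative):

```python
# Annealed importance sampling where an interaction of strength k is
# switched on gradually (beta: 0 -> 1) instead of lowering temperature.
# The free energy difference is read off the importance weights.
import math, random

random.seed(0)
k = 3.0                                # interaction grown from 0 to k
betas = [i / 20 for i in range(21)]    # "growth" schedule

def log_f(x, b):
    """Unnormalized log density with fraction b of the interaction on."""
    return -0.5 * (1.0 + b * k) * x * x

log_weights = []
for _ in range(4000):
    x = random.gauss(0.0, 1.0)         # exact sample from the b=0 state
    lw = 0.0
    for b0, b1 in zip(betas, betas[1:]):
        lw += log_f(x, b1) - log_f(x, b0)   # weight update on growth step
        # One Metropolis move to re-equilibrate at interaction level b1.
        xp = x + random.uniform(-1.0, 1.0)
        if random.random() < math.exp(min(0.0, log_f(xp, b1) - log_f(x, b1))):
            x = xp
    log_weights.append(lw)

# Estimate log(Z_k / Z_0); exact value is -0.5*log(1 + k) ~= -0.693.
m = max(log_weights)
est = m + math.log(sum(math.exp(w - m) for w in log_weights)
                   / len(log_weights))
```

    In the paper's setting the annealing variable couples fragments together as the molecule is grown, but the weight bookkeeping is the same.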

    Multiscale Methods for Stochastic Collocation of Mixed Finite Elements for Flow in Porous Media

    This thesis contains methods for uncertainty quantification of flow in porous media through stochastic modeling. New parallel algorithms are described for both deterministic and stochastic model problems, and are shown to be computationally more efficient than existing approaches in many cases.

    First, we present a method that combines a mixed finite element spatial discretization with collocation in stochastic dimensions on a tensor product grid. The governing equations are based on Darcy's Law with stochastic permeability. A known covariance function is used to approximate the log permeability as a truncated Karhunen-Loeve expansion. A priori error analysis is performed and numerically verified.

    Second, we present a new implementation of a multiscale mortar mixed finite element method. The original algorithm uses non-overlapping domain decomposition to reformulate a fine scale problem as a coarse scale mortar interface problem. This system is then solved in parallel with an iterative method, requiring the solution of local subdomain problems on every interface iteration. Our modified implementation instead forms a Multiscale Flux Basis consisting of mortar functions that represent individual flux responses for each mortar degree of freedom, on each subdomain independently. We show this approach yields the same solution as the original method, and compare the computational workload with a balancing preconditioner.

    Third, we extend and combine the previous works as follows. Multiple rock types are modeled as nonstationary media with a sum of Karhunen-Loeve expansions. Very heterogeneous noise is handled via collocation on a sparse grid in high dimensions. Uncertainty quantification is parallelized by coupling a multiscale mortar mixed finite element discretization with stochastic collocation. We give three new algorithms to solve the resulting system: they use the original implementation, a deterministic Multiscale Flux Basis, and a stochastic Multiscale Flux Basis. Multiscale a priori error analysis is performed and numerically verified for single-phase flow.

    Fourth, we present a concurrent approach that uses the Multiscale Flux Basis as an interface preconditioner. We show the preconditioner significantly reduces the number of interface iterations, and describe how it can be used for stochastic collocation as well as two-phase flow simulations in both fully-implicit and IMPES models.
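    The truncated Karhunen-Loeve step is easy to sketch on a 1-D grid (a generic illustration, not the thesis's discretization; the covariance model, correlation length, and truncation order are assumptions):

```python
# Truncated Karhunen-Loeve expansion of a log-permeability field with an
# exponential covariance, built from the eigenpairs of the covariance
# matrix on a 1-D grid. One random realization is drawn at the end.
import numpy as np

n, L, sigma2, corr_len = 50, 1.0, 1.0, 0.3
x = np.linspace(0.0, L, n)
# Exponential covariance C(x, y) = sigma^2 * exp(-|x - y| / corr_len).
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder: largest modes first

m = 10                                  # truncation order
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)             # stochastic KL coordinates
log_k = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)   # one field realization

# Fraction of total variance captured by the first m modes.
captured = vals[:m].sum() / vals.sum()
```

    In the stochastic collocation setting, the coordinates xi are not sampled randomly but placed at (sparse) tensor-grid collocation points, and the deterministic solver is run once per point.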

    Variational approach to probabilistic finite elements

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics, viz. second moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probability density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
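    The second-moment idea can be sketched with first-order perturbation on a toy two-spring chain (a generic illustration of propagating response moments through a linearized solve, not the paper's formulation; all numbers are made up):

```python
# First-order second-moment propagation: random spring stiffnesses with
# known mean and covariance, response moments from the linearized solve.
import numpy as np

F = np.array([0.0, 1.0])         # load at the free end of the chain
k_mean = np.array([10.0, 10.0])  # mean spring stiffnesses
k_cov = np.diag([0.5, 0.5])      # stiffness covariance (second moments)

def stiffness(k):
    """Global stiffness of a fixed-free 2-spring chain."""
    return np.array([[k[0] + k[1], -k[1]],
                     [-k[1],        k[1]]])

K0 = stiffness(k_mean)
u0 = np.linalg.solve(K0, F)      # zeroth-order (mean) response

# Sensitivities du/dk_i = -K0^{-1} (dK/dk_i) u0, assembled columnwise;
# then Cov[u] ~= S k_cov S^T to first order.
dK = [np.array([[1.0, 0.0], [0.0, 0.0]]),      # dK/dk_1
      np.array([[1.0, -1.0], [-1.0, 1.0]])]    # dK/dk_2
S = np.column_stack([-np.linalg.solve(K0, dKi @ u0) for dKi in dK])
u_cov = S @ k_cov @ S.T
```

    The appeal noted in the abstract is visible even here: two linear solves per random variable replace thousands of Monte Carlo samples, at the cost of assuming small randomness.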

    Monte Carlo and Depletion Reactor Analysis for High-Performance Computing Applications

    This dissertation discusses the research and development of a coupled neutron transport/isotopic depletion capability for use in high-performance computing applications. Accurate neutronics modeling and simulation for "real" reactor problems has been a long-sought-after goal in the computational community. A complementary "stretch" goal to this is the ability to perform full-core depletion analysis and spent fuel isotopic characterization. This dissertation thus presents the research and development of a coupled Monte Carlo transport/isotopic depletion implementation with the Exnihilo framework geared for high-performance computing architectures to enable neutronics analysis for full-core reactor problems. An in-depth case study of the current state of Monte Carlo neutron transport with respect to source sampling, source convergence, uncertainty underprediction, and biases associated with localized tallies in Monte Carlo eigenvalue calculations was performed using MCNP and KENO. This analysis is utilized in the design and development of the statistical algorithms for Exnihilo's Monte Carlo framework, Shift. To this end, a methodology has been developed to perform tally statistics in domain-decomposed environments. This methodology has been shown to produce accurate tally uncertainty estimates in domain-decomposed environments without a significant increase in memory requirements, processor-to-processor communication, or computational bias. With the addition of parallel, domain-decomposed tally uncertainty estimation, a depletion package was developed for the Exnihilo code suite to utilize the depletion capabilities of the Oak Ridge Isotope GENeration (ORIGEN) code. This interface was designed to be transport agnostic, meaning that it can be used by any of the reactor analysis packages within Exnihilo, such as Denovo or Shift.

    Extensive validation and testing of the ORIGEN interface and its coupling with the Shift Monte Carlo transport code are performed within this dissertation, and results are presented for the calculated eigenvalues, material powers, and nuclide concentrations for the depleted materials. These results are then compared to ORIGEN and TRITON depletion calculations, and the analysis shows that the Exnihilo transport-depletion capability is in good agreement with these codes.
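    The bookkeeping such a tally-statistics scheme needs can be sketched as follows (an illustrative toy, not Shift's implementation): each domain keeps only running moments of per-history scores, so uncertainties can be merged across domains without storing or communicating individual histories.

```python
# Per-domain tally moments (N, sum x, sum x^2) that merge in a single
# reduction, yielding the mean and its standard error for the tally.
import math

class TallyMoments:
    def __init__(self):
        self.n, self.s1, self.s2 = 0, 0.0, 0.0

    def add_history(self, score):
        self.n += 1
        self.s1 += score
        self.s2 += score * score

    def merge(self, other):
        """Combine another domain's partial moments (one reduction)."""
        self.n += other.n
        self.s1 += other.s1
        self.s2 += other.s2

    def mean_and_stderr(self):
        mean = self.s1 / self.n
        # Sample variance of the mean from the accumulated moments.
        var = (self.s2 / self.n - mean * mean) / (self.n - 1)
        return mean, math.sqrt(max(var, 0.0))

# Two domains score disjoint batches of histories, then reduce.
a, b = TallyMoments(), TallyMoments()
for s in [1.0, 2.0, 3.0]:
    a.add_history(s)
for s in [4.0, 5.0]:
    b.add_history(s)
a.merge(b)
mean, err = a.mean_and_stderr()
```

    The memory cost is three scalars per tally bin per domain regardless of history count, which is the property the abstract highlights: accurate uncertainty estimates without a significant increase in memory or communication.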

    XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations

    XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level, optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code. Comment: 9 pages, 5 figures