
    Tensor Numerical Methods in Quantum Chemistry: from Hartree-Fock Energy to Excited States

    We summarize the recent successes of grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on low-rank representations of multidimensional functions and integral operators, have led to an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and the two-electron integrals (TEI) in O(n log n) complexity, using rank-structured approximations of the basis functions, electron densities, and convolution integral operators, all represented on 3D n×n×n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme. The basis functions are not restricted to separable Gaussians, since the analytical integration is replaced by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations, for the MP2 energy correction and for the Bethe-Salpeter excited states, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is related to recent attempts to develop a tensor-based Hartree-Fock numerical scheme for finite lattice-structured systems, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating such a potential sum on an L×L×L lattice requires computational work linear in L, that is O(L), instead of the usual O(L³ log L) scaling of Ewald-type approaches.
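    As a minimal illustration of the rank-structured idea (this is not the authors' solver; the grid size and the Gaussian are arbitrary choices), the sketch below stores a separable function sampled on an n×n×n Cartesian grid as three 1-D factor vectors, i.e. a rank-1 canonical tensor, and evaluates its discrete L2 norm from the factors alone in O(n) work, checking the result against the full O(n^3) grid tensor.

        # Illustrative sketch only: a rank-1 canonical representation of a
        # separable function on a 3-D Cartesian grid, with an operation
        # (the discrete L2 norm) carried out on the 1-D factors instead of
        # the full n^3 tensor.
        import numpy as np

        n = 129                                    # grid points per direction
        x = np.linspace(-8.0, 8.0, n)              # 1-D Cartesian grid
        h = x[1] - x[0]                            # mesh size

        # exp(-|r|^2) = exp(-x^2) * exp(-y^2) * exp(-z^2): rank 1 in the
        # canonical tensor format, stored as three 1-D factor vectors.
        fx = np.exp(-x ** 2)
        fy = np.exp(-x ** 2)
        fz = np.exp(-x ** 2)

        # Discrete L2 norm from the factors only: the 3-D sum of squares
        # factorises into a product of three 1-D sums, so the cost is O(n).
        norm_lowrank = np.sqrt(h ** 3 * fx.dot(fx) * fy.dot(fy) * fz.dot(fz))

        # Reference computation on the full n^3 tensor (O(n^3) memory and work).
        F = fx[:, None, None] * fy[None, :, None] * fz[None, None, :]
        norm_full = np.sqrt(h ** 3 * np.sum(F ** 2))

        print(norm_lowrank, norm_full)             # the two values agree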

    When double rounding is odd

    Many general-purpose processors (including Intel's) may not always produce the correctly rounded result of a floating-point operation because of double rounding: instead of being rounded directly to the working precision, the value is first rounded to an intermediate extended precision and then rounded to the working precision, which often means a loss of accuracy. We suggest using rounding to odd as the first rounding in order to regain this accuracy: we prove that the double rounding then gives the correct rounding to the nearest value. To increase the trust in this result, as this rounding is unusual and this property is surprising, we formally proved it using the Coq proof assistant.
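    To illustrate the rounding rule on toy precisions (this sketch is not the paper's Coq development; the precisions, the helper functions, and the sample value are chosen only for demonstration), the following code implements round-to-odd and round-to-nearest-even with exact rational arithmetic and shows that rounding to odd in an extended precision with at least two extra bits, followed by rounding to nearest in the working precision, reproduces the directly rounded result, whereas a naive double rounding with nearest-even twice does not.

        # Illustrative sketch: round-to-odd versus naive double rounding,
        # using exact rationals and toy precisions (numbers of significand bits).
        from fractions import Fraction

        def _split(x, p):
            """Return (sign, m, e) with x = sign * m * 2^e and the real
            significand m lying in [2^(p-1), 2^p)."""
            sign = -1 if x < 0 else 1
            x = abs(x)
            e = 0
            while x >= 2 ** p:          # scale down until the significand fits
                x /= 2
                e += 1
            while x < 2 ** (p - 1):     # scale up until the significand is normalised
                x *= 2
                e -= 1
            return sign, x, e

        def round_to_odd(x, p):
            """Round x to p significand bits with round-to-odd: exact values are
            kept, otherwise the neighbour whose last bit is odd is chosen."""
            if x == 0:
                return Fraction(0)
            sign, m, e = _split(x, p)
            if m.denominator == 1:                  # already representable
                return sign * m * Fraction(2) ** e
            lo = m.numerator // m.denominator       # truncated significand
            return sign * Fraction(lo if lo % 2 else lo + 1) * Fraction(2) ** e

        def round_nearest_even(x, p):
            """Round x to p significand bits with round-to-nearest, ties to even."""
            if x == 0:
                return Fraction(0)
            sign, m, e = _split(x, p)
            lo = m.numerator // m.denominator
            frac = m - lo
            if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and lo % 2):
                lo += 1
            return sign * Fraction(lo) * Fraction(2) ** e

        # x lies just above 100, the tie between the 4-bit neighbours 96 and 104.
        x = Fraction(100 * 2 ** 20 + 1, 2 ** 20)
        direct = round_nearest_even(x, 4)                        # correct result: 104
        via_odd = round_nearest_even(round_to_odd(x, 6), 4)      # odd, then nearest: 104
        naive = round_nearest_even(round_nearest_even(x, 6), 4)  # nearest twice: 96
        print(direct, via_odd, naive)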

    Using Java for distributed computing in the Gaia satellite data processing

    In recent years Java has matured into a stable, easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. It has been running successfully since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.

    Next generation of guiding questions for basic turbulent combustion research

    A two-day workshop was held to identify and compile research questions and needs to advance basic turbulent combustion research towards capabilities that allow predictive simulations at the design level for practical devices. Recognizing the state-of-the-art simulation capabilities and the inherent limitations of computational resources, the focus is on Large Eddy Simulations as a pathway to this goal. This report not only documents scientific and technical questions related to shortcomings in our current understanding of turbulent combustion, but also addresses procedural challenges. Key bottlenecks and research needs are described, but the report also emphasizes that the conduct of research has to adapt to the complex nature of turbulent combustion by fostering collaborations and long-term funding horizons. This material is based upon work supported by the National Science Foundation under Grant Number 1438956. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

    Asynchronous and Multiprecision Linear Solvers - Scalable and Fault-Tolerant Numerics for Energy Efficient High Performance Computing

    Asynchronous methods minimize idle times by removing synchronization barriers and therefore allow the efficient use of computer systems. The implied high tolerance of communication latencies also improves fault tolerance. Since asynchronous methods additionally enable the use of the power- and energy-saving mechanisms provided by the hardware, they are suitable candidates for the highly parallel and heterogeneous hardware platforms that are expected in the near future.
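    As a rough illustration of the idea (not the solvers developed in this work), the sketch below runs a Jacobi-type relaxation in which the components of the iterate are updated in an arbitrary order using whatever values are currently available, with no synchronization barrier between sweeps; the small diagonally dominant test system and the random update schedule are assumptions made purely for the example.

        # Illustrative sketch of an asynchronous ("chaotic") Jacobi relaxation
        # for A x = b: single components are relaxed in a random order against
        # the current contents of x, which may mix values from different
        # logical iterations, so no sweep-level synchronization is needed.
        import numpy as np

        rng = np.random.default_rng(0)

        n = 8
        A = rng.standard_normal((n, n))
        A += 2 * n * np.eye(n)              # make A strictly diagonally dominant
        b = rng.standard_normal(n)

        x = np.zeros(n)                     # shared iterate, updated in place
        for step in range(2000):
            i = rng.integers(n)             # some component becomes ready to update
            # Jacobi-style relaxation of component i using the current x.
            x[i] = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / A[i, i]

        print("residual norm:", np.linalg.norm(A @ x - b))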

    Modules for Experiments in Stellar Astrophysics (MESA)

    Stellar physics and evolution calculations enable a broad range of research in astrophysics. Modules for Experiments in Stellar Astrophysics (MESA) is a suite of open source libraries for a wide range of applications in computational stellar astrophysics. A newly designed 1-D stellar evolution module, MESA star, combines many of the numerical and physics modules for simulations of a wide range of stellar evolution scenarios, from very low mass to massive stars, including advanced evolutionary phases. MESA star solves the fully coupled structure and composition equations simultaneously. It uses adaptive mesh refinement and sophisticated timestep controls, and supports shared-memory parallelism based on OpenMP. Independently usable modules provide the equation of state, opacities, nuclear reaction rates, and atmosphere boundary conditions. Each module is constructed as a separate Fortran 95 library with its own public interface. Examples include comparisons to other codes and evolutionary tracks of very low mass stars, brown dwarfs, and gas giant planets; the complete evolution of a 1 Msun star from the pre-main sequence to a cooling white dwarf; the solar sound speed profile; the evolution of intermediate-mass stars through the thermal pulses of the He-shell-burning AGB phase; the interior structure of slowly pulsating B stars and Beta Cepheids; evolutionary tracks of massive stars from the pre-main sequence to the onset of core collapse; stars undergoing Roche lobe overflow; and accretion onto a neutron star. Instructions for downloading and installing MESA can be found on the project web site (http://mesa.sourceforge.net/).
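    The abstract mentions adaptive timestep controls; as a purely generic illustration of that idea (this is in no way MESA's Fortran implementation; the toy ODE, the tolerance, and the step-doubling error estimate are assumptions made for the example), a simple error-based controller can be sketched as follows.

        # Generic error-based adaptive timestep control, illustrated with a
        # step-doubling estimate around an explicit Euler step for dy/dt = f(t, y).
        import numpy as np

        def euler(f, t, y, dt):
            """One explicit Euler step (stand-in for a real solver stage)."""
            return y + dt * f(t, y)

        def adaptive_march(f, y0, t0, t1, tol=1e-6, dt=1e-3):
            t, y = t0, np.asarray(y0, dtype=float)
            while t < t1:
                dt = min(dt, t1 - t)
                big = euler(f, t, y, dt)                       # one full step
                half = euler(f, t + dt / 2,
                             euler(f, t, y, dt / 2), dt / 2)   # two half steps
                err = np.max(np.abs(big - half))               # local error estimate
                if err <= tol:                                 # accept the step
                    t, y = t + dt, half
                # grow or shrink dt toward the target error, with a safety factor
                dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-300)) ** 0.5))
            return y

        # Toy usage: y' = -y on [0, 1]; the result approximates exp(-1).
        print(adaptive_march(lambda t, y: -y, [1.0], 0.0, 1.0), np.exp(-1))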