
    Optimizing the Distributed Hydrology Soil Vegetation Model for Uncertainty Assessment With Serial, Multicore and Distributed Accelerations

    Hydrology is the study of water: it tracks attributes such as water quality and movement, and it lets researchers investigate topics such as the impacts of wildfires, logging, and commercial development. With perfect and complete data collection, researchers could answer these questions with certainty; because of cost and potential sources of error, that is impractical, so researchers rely on simulations. The Distributed Hydrology Soil Vegetation Model (DHSVM) is a mathematical model that numerically represents watersheds. Hydrology, like all fields, continues to produce large amounts of data, and as the stores of data grow, the models that process them require occasional improvements to handle the added load. This paper investigates DHSVM as a serial C program, implementing and analyzing several high-performance computing improvements to the original code base: compiler optimization, parallel computing with OpenMP, and distributed computing with OpenMPI. DHSVM was also tuned to run many instances on California Polytechnic State University, San Luis Obispo's high-performance computing cluster. These additions speed up the results returned to researchers and improve DHSVM's suitability for uncertainty analysis methods. Serial and compiler optimizations improved DHSVM's performance by a factor of 2. OpenMP provided a further noticeable speed-up that scaled with the hardware, again doubling DHSVM's speed on commodity hardware, while OpenMPI proved most useful for running multiple independent instances of DHSVM. Combined, these changes improved DHSVM's performance by 4.4 times per instance and allow it to run many instances on computing clusters.
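
    A minimal sketch of the two acceleration patterns described in the abstract may help: OpenMP parallelizes work across grid cells inside a single run, while OpenMPI launches many independent runs, one per rank, as in an uncertainty analysis. This is not DHSVM source code; cell_state, update_cell(), run_dhsvm_instance(), and the config file names are hypothetical stand-ins, and in practice the build would also enable compiler optimization (e.g. mpicc -O3 -fopenmp).

        /* Sketch only: NOT DHSVM code. cell_state, update_cell(), and
         * run_dhsvm_instance() are hypothetical stand-ins.
         * Build with e.g. mpicc -O3 -fopenmp sketch.c */
        #include <stdio.h>
        #include <mpi.h>

        #define NUM_CELLS 10000            /* arbitrary watershed grid size */

        typedef struct { double moisture; } cell_state;

        /* Stand-in for DHSVM's per-cell physics update. */
        static void update_cell(cell_state *c) { c->moisture *= 0.99; }

        /* Stand-in for one complete DHSVM run driven by its own config file. */
        static void run_dhsvm_instance(const char *config) { printf("running %s\n", config); }

        int main(int argc, char **argv)
        {
            cell_state grid[NUM_CELLS] = {{0.0}};

            /* OpenMP: independent grid cells updated in parallel within one run. */
            #pragma omp parallel for
            for (int i = 0; i < NUM_CELLS; i++)
                update_cell(&grid[i]);

            /* OpenMPI: each rank runs a separate DHSVM instance (e.g. one
             * parameter set per rank for uncertainty analysis) rather than
             * splitting a single run across nodes. */
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            char config[64];
            snprintf(config, sizeof config, "config_%d.ini", rank);
            run_dhsvm_instance(config);
            MPI_Finalize();
            return 0;
        }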

    The Past, Present and Future of High Performance Computing

    In this overview paper we start by looking at the birth of what is called "High Performance Computing" today. It all began over 30 years ago when the Cray 1 and CDC Cyber 205 "supercomputers" were introduced. This had a huge impact on scientific computing. A very turbulent time at both the hardware and software level was to follow. Eventually the situation stabilized, but not for long. Today, two different trends in hardware architectures have created a bifurcation in the market. On one hand, the GPGPU quickly found a place in the marketplace but remains the domain of the expert. In contrast, multicore processors make hardware parallelism available to the masses. Each has its own set of issues to deal with. In the last section we make an attempt to look into the future, though this is of course a highly personal opinion.

    Compilation of Abstracts for SC12 Conference Proceedings

    1. A Breakthrough in Rotorcraft Prediction Accuracy Using Detached Eddy Simulation
    2. Adjoint-Based Design for Complex Aerospace Configurations
    3. Simulating Hypersonic Turbulent Combustion for Future Aircraft
    4. From a Roar to a Whisper: Making Modern Aircraft Quieter
    5. Modeling of Extended Formation Flight on High-Performance Computers
    6. Supersonic Retropropulsion for Mars Entry
    7. Validating Water Spray Simulation Models for the SLS Launch Environment
    8. Simulating Moving Valves for Space Launch System Liquid Engines
    9. Innovative Simulations for Modeling the SLS Solid Rocket Booster Ignition
    10. Solid Rocket Booster Ignition Overpressure Simulations for the Space Launch System
    11. CFD Simulations to Support the Next Generation of Launch Pads
    12. Modeling and Simulation Support for NASA's Next-Generation Space Launch System
    13. Simulating Planetary Entry Environments for Space Exploration Vehicles
    14. NASA Center for Climate Simulation Highlights
    15. Ultrascale Climate Data Visualization and Analysis
    16. NASA Climate Simulations and Observations for the IPCC and Beyond
    17. Next-Generation Climate Data Services: MERRA Analytics
    18. Recent Advances in High-Resolution Global Atmospheric Modeling
    19. Causes and Consequences of Turbulence in the Earth's Protective Shield
    20. NASA Earth Exchange (NEX): A Collaborative Supercomputing Platform
    21. Powering Deep Space Missions: Thermoelectric Properties of Complex Materials
    22. Meeting NASA's High-End Computing Goals Through Innovation
    23. Continuous Enhancements to the Pleiades Supercomputer for Maximum Uptime
    24. Live Demonstrations of 100-Gbps File Transfers Across LANs and WANs
    25. Untangling the Computing Landscape for Climate Simulations
    26. Simulating Galaxies and the Universe
    27. The Mysterious Origin of Stellar Masses
    28. Hot-Plasma Geysers on the Sun
    29. Turbulent Life of Kepler Stars
    30. Modeling Weather on the Sun
    31. Weather on Mars: The Meteorology of Gale Crater
    32. Enhancing Performance of NASA's High-End Computing Applications
    33. Designing Curiosity's Perfect Landing on Mars
    34. The Search Continues: Kepler's Quest for Habitable Earth-Sized Planets

    The correlation between halo mass and stellar mass for the most massive galaxies in the universe

    I.Z. is supported by NSF grant AST-1612085. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. We present measurements of the clustering of galaxies as a function of their stellar mass in the Baryon Oscillation Spectroscopic Survey. We compare the clustering of samples using 12 different methods for estimating stellar mass, isolating the method that has the smallest scatter at fixed halo mass. In this test, the stellar mass estimate with the smallest errors yields the highest amplitude of clustering at fixed number density. We find that the PCA stellar masses of Chen et al. clearly have the tightest correlation with halo mass. The PCA masses use the full galaxy spectrum, differentiating them from other estimates that use only optical photometric information. Using the PCA masses, we measure the large-scale bias as a function of M∗ for galaxies with log M∗ ≥ 11.4, correcting for incompleteness at the low-mass end of our measurements. Using the abundance matching ansatz to connect dark matter halo mass to stellar mass, we construct theoretical models of b(M∗) that match the same stellar mass function but have different amounts of scatter in stellar mass at fixed halo mass, σ_log M∗. Using this approach, we find σ_log M∗ = 0.18 (+0.01, −0.02). This value includes both intrinsic scatter and random errors in the stellar masses. To partially remove the latter, we use repeated spectra to estimate statistical errors on the stellar masses, yielding an upper limit on the intrinsic scatter of 0.16 dex.
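
    The abundance-matching step can be pictured with a small sketch: rank halos by mass, rank galaxies by stellar mass, pair them rank by rank, then perturb the assigned stellar masses with log-normal scatter σ_log M∗. The code below only illustrates that idea with invented masses; it is not the paper's pipeline, which also keeps the observed stellar mass function fixed when scatter is varied.

        /* Toy rank-order abundance matching with log-normal scatter.
         * All masses are invented; a real analysis would also deconvolve the
         * scatter so the observed stellar mass function is preserved. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>

        #define N 5
        #define SIGMA_LOGM 0.18   /* scatter in log10 M* at fixed halo mass (dex) */

        static int cmp_desc(const void *a, const void *b)
        {
            double d = *(const double *)b - *(const double *)a;
            return (d > 0) - (d < 0);
        }

        /* Box-Muller draw from a unit normal distribution. */
        static double gauss(void)
        {
            double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            return sqrt(-2.0 * log(u1)) * cos(2.0 * acos(-1.0) * u2);
        }

        int main(void)
        {
            /* Hypothetical halo and galaxy samples (log10 masses). */
            double log_mhalo[N] = {13.5, 14.8, 13.1, 14.2, 13.9};
            double log_mstar[N] = {11.6, 11.4, 11.9, 11.5, 11.7};

            /* Rank both samples: most massive halo gets most massive galaxy. */
            qsort(log_mhalo, N, sizeof(double), cmp_desc);
            qsort(log_mstar, N, sizeof(double), cmp_desc);

            for (int i = 0; i < N; i++) {
                /* Add scatter at fixed halo mass, the quantity the paper constrains. */
                double scattered = log_mstar[i] + SIGMA_LOGM * gauss();
                printf("log Mhalo = %.2f -> log M* = %.2f\n", log_mhalo[i], scattered);
            }
            return 0;
        }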

    A Linux PC cluster for lattice QCD with exact chiral symmetry

    A computational system for lattice QCD with exact chiral symmetry is described. The platform is a home-made Linux PC cluster, built with off-the-shelf components. At present this system consists of 64 nodes, each with one Pentium 4 processor (1.6/2.0/2.5 GHz), one Gbyte of PC800/PC1066 RDRAM, one 40/80/120 Gbyte hard disk, and a network card. The computationally intensive parts of our program are written using SSE2 code. The speed of this system is estimated to be 70 Gflops, and its price/performance is better than $1.0/Mflops for 64-bit (double precision) computations in quenched QCD. We discuss how to optimize its hardware and software for computing quark propagators via the overlap Dirac operator. Comment: 24 pages, LaTeX, 2 eps figures; v2: a note and references added; the version published in Int. J. Mod. Phys.
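
    Since 70 Gflops is 70,000 Mflops, "better than $1.0/Mflops" implies the 64-node cluster cost under roughly $70,000. The fragment below sketches what an SSE2 double-precision kernel looks like in C (a packed y <- y + a*x update, two doubles per 128-bit register). It is illustrative only and is not taken from the cluster's actual overlap-Dirac-operator routines, which are far more involved.

        /* Illustrative SSE2 double-precision kernel, not code from the cluster. */
        #include <stdio.h>
        #include <emmintrin.h>   /* SSE2 intrinsics */

        static void axpy_sse2(double a, const double *x, double *y, int n)
        {
            __m128d va = _mm_set1_pd(a);
            int i;
            for (i = 0; i + 1 < n; i += 2) {
                __m128d vx = _mm_loadu_pd(x + i);           /* load two doubles */
                __m128d vy = _mm_loadu_pd(y + i);
                vy = _mm_add_pd(vy, _mm_mul_pd(va, vx));    /* y += a*x, two lanes */
                _mm_storeu_pd(y + i, vy);
            }
            for (; i < n; i++)                              /* scalar tail */
                y[i] += a * x[i];
        }

        int main(void)
        {
            double x[4] = {1.0, 2.0, 3.0, 4.0};
            double y[4] = {0.5, 0.5, 0.5, 0.5};
            axpy_sse2(2.0, x, y, 4);
            for (int i = 0; i < 4; i++)
                printf("%g ", y[i]);
            printf("\n");
            return 0;
        }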

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages; 1 of the USQCD whitepapers.

    Lattice QCD Thermodynamics on the Grid

    We describe how we have used O(10^3) nodes of the EGEE Grid simultaneously, accumulating ca. 300 CPU-years in 2-3 months, to determine an important property of Quantum Chromodynamics. We explain how Grid resources were exploited efficiently and with ease, using a user-level overlay based on the Ganga and DIANE tools on top of the standard Grid software stack. Application-specific scheduling and resource selection based on simple but powerful heuristics allowed us to improve the efficiency of the processing and obtain the desired scientific results by a specified deadline. This is also a demonstration of the combined use of supercomputers, to calculate the initial state of the QCD system, and Grids, to perform the subsequent massively distributed simulations. The QCD simulation was performed on a 16^3 × 4 lattice. Keeping the strange quark mass at its physical value, we reduced the masses of the up and down quarks until, under an increase of temperature, the system underwent a second-order phase transition to a quark-gluon plasma. Then we measured the response of this system to an increase in the quark density. We find that the transition is smoothed rather than sharpened. If confirmed on a finer lattice, this finding makes it unlikely that ongoing experimental searches will find a QCD critical point at small chemical potential.
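
    One plausible example of the kind of "simple but powerful heuristic" the abstract mentions is to track each worker node's observed throughput and hand the next lattice task to the fastest idle worker. The sketch below shows only that selection step; it is purely schematic and does not reflect how Ganga and DIANE (Python-based tools) are actually implemented.

        /* Toy worker-selection heuristic; purely schematic, not Ganga/DIANE code. */
        #include <stdio.h>

        typedef struct {
            int id;
            int idle;              /* 1 if the worker currently has no task */
            double tasks_per_hour; /* throughput measured on earlier tasks */
        } worker;

        /* Return the index of the fastest idle worker, or -1 if none is free. */
        static int pick_worker(const worker *w, int n)
        {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (w[i].idle && (best < 0 || w[i].tasks_per_hour > w[best].tasks_per_hour))
                    best = i;
            return best;
        }

        int main(void)
        {
            worker pool[3] = { {0, 1, 4.0}, {1, 0, 9.0}, {2, 1, 6.5} };
            int chosen = pick_worker(pool, 3);
            printf("dispatch next lattice job to worker %d\n", chosen);
            return 0;
        }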