683 research outputs found

    Energy-aware data prefetching for multi-speed disks

    Large prebiotic molecules in space: photo-physics of acetic acid and its isomers

    An increasing number of large molecules have been positively identified in space. Many of these molecules are of biological interest and thus provide insight into prebiotic organic chemistry in the protoplanetary nebula. Among these molecules, acetic acid is of particular importance due to its structural proximity to glycine, the simplest amino acid. We compute electronic and vibrational properties of acetic acid and its isomers, methyl formate and glycolaldehyde, using density functional theory. From the computed photo-absorption cross-sections, we obtain the corresponding photo-absorption rates for solar radiation at 1 AU and find them in good agreement with previous estimates. We also discuss the diffuse emission of glycolaldehyde in Sgr B2(N), as opposed to the emission from methyl formate and acetic acid, which appears to be concentrated in the compact region Sgr B2(N-LMH). Comment: 8 pages, 5 figures
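
    As a rough illustration of the rate calculation above, the photo-absorption rate follows from integrating the cross-section against the solar photon flux at 1 AU; the sketch below uses hypothetical placeholder arrays for both quantities rather than the paper's DFT results.

```python
import numpy as np

# Minimal sketch: photo-absorption rate k = integral of sigma(lambda) * F(lambda) d(lambda),
# with sigma the cross-section (cm^2) and F the solar photon flux at 1 AU
# (photons cm^-2 s^-1 nm^-1). Both arrays below are hypothetical placeholders,
# not the DFT cross-sections or a real solar spectrum.

wavelength_nm = np.linspace(100.0, 300.0, 500)                      # UV range (nm), illustrative
sigma_cm2 = 1e-17 * np.exp(-((wavelength_nm - 200.0) / 30.0) ** 2)  # toy cross-section
flux_photons = 1e13 * (wavelength_nm / 200.0) ** 2                  # toy photon flux at 1 AU

rate_per_s = np.trapz(sigma_cm2 * flux_photons, wavelength_nm)      # rate in s^-1
print(f"photo-absorption rate: {rate_per_s:.3e} s^-1")
```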

    Memory and compiler optimizations for low-power and -energy.

    ICOOOLPS'2006 was co-located with the 20th European Conference on Object-Oriented Programming (ECOOP'2006). Embedded systems, especially autonomous ones, are becoming more and more widespread and clearly tend toward ubiquity. In such systems, low-power and low-energy operation is ever more crucial. These issues also become paramount in (massively) multi-processor systems, whether within one machine or more widely across a grid. The problems faced pertain to autonomy, power-supply constraints, thermal dissipation, and even sheer energy cost. Although energy optimization has long been studied in hardware, it is more recent in software. In this paper, we therefore aim to raise awareness of low-power and low-energy issues in the language and compilation community. We broadly but briefly survey techniques and solutions to this energy issue, focusing on a few specific aspects in the context of compiler optimizations and memory management

    FIRE-2 Simulations: Physics versus Numerics in Galaxy Formation

    The Feedback In Realistic Environments (FIRE) project explores feedback in cosmological galaxy formation simulations. Previous FIRE simulations used an identical source code (“FIRE-1”) for consistency. Motivated by the development of more accurate numerics (including hydrodynamic solvers, gravitational softening, and supernova coupling algorithms) and the exploration of new physics (e.g. magnetic fields), we introduce “FIRE-2”, an updated numerical implementation of FIRE physics for the GIZMO code. We run a suite of simulations and compare against FIRE-1: overall, FIRE-2 improvements do not qualitatively change galaxy-scale properties. We pursue an extensive study of numerics versus physics. Details of the star-formation algorithm, cooling physics, and chemistry have weak effects, provided that we include metal-line cooling and star formation occurs at higher-than-mean densities. We present new resolution criteria for high-resolution galaxy simulations. Most galaxy-scale properties are robust to the numerics we test, provided: (1) Toomre masses are resolved; (2) feedback coupling ensures conservation; and (3) individual supernovae are time-resolved. Stellar masses and profiles are most robust to resolution, followed by metal abundances and morphologies, followed by properties of winds and circum-galactic media (CGM). Central (∼kpc) mass concentrations in massive (>L*) galaxies are sensitive to numerics (via trapping/recycling of winds in hot halos). Multiple feedback mechanisms play key roles: supernovae regulate stellar masses/winds; stellar mass-loss fuels late star formation; radiative feedback suppresses accretion onto dwarfs and instantaneous star formation in disks. We provide all initial conditions and numerical algorithms used.
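
    For readers unfamiliar with the criterion above, the following sketch illustrates what "resolving the Toomre mass" amounts to; the order-of-magnitude estimate M_Toomre ~ sigma^4 / (G^2 Sigma) and the particle mass used here are illustrative assumptions, not the paper's exact definitions or values.

```python
# Hypothetical check of the "Toomre masses are resolved" criterion: in a marginally
# stable disk the characteristic fragment mass is M_Toomre ~ sigma^4 / (G^2 * Sigma),
# up to order-unity prefactors; resolving it means the baryonic particle mass should
# be much smaller than M_Toomre. All numbers below are illustrative, not from the paper.

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def toomre_mass(sigma_kms: float, surface_density_msun_pc2: float) -> float:
    """Order-of-magnitude Toomre mass in Msun."""
    return sigma_kms**4 / (G**2 * surface_density_msun_pc2)

m_toomre = toomre_mass(sigma_kms=10.0, surface_density_msun_pc2=10.0)  # MW-like disk values
m_particle = 7.1e3  # example baryonic particle mass in Msun

print(f"M_Toomre ~ {m_toomre:.2e} Msun, particle mass = {m_particle:.1e} Msun")
print("Toomre mass resolved:", m_particle < 1e-2 * m_toomre)  # ~100+ particles per fragment
```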

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages; 1 of the USQCD whitepapers

    CoolCloud: improving energy efficiency in virtualized data centers

    In recent years, cloud computing services have continued to grow and have become more pervasive and indispensable in people's lives, and energy consumption continues to rise as more and more data centers are built. How to provide a more energy-efficient data center infrastructure that can support today's cloud computing services has become one of the most important issues in cloud computing research. In this thesis, we tackle three research problems: (1) how to achieve energy savings in a virtualized data center environment; (2) how to maintain service level agreements (SLAs); and (3) how to make our design practical for actual implementation in enterprise data centers. Combining these studies, we propose an optimization framework named CoolCloud that minimizes energy consumption in virtualized data centers while taking service level agreements into consideration. The proposed framework minimizes energy at two layers: (1) it minimizes local server energy using dynamic voltage and frequency scaling (DVFS) that exploits runtime program phases, and (2) it minimizes global cluster energy using dynamic mapping between virtual machines (VMs) and servers based on each VM's resource requirements. Such optimization leads to the most economical way to operate an enterprise data center. On each local server, we develop a voltage and frequency scheduler that provides CPU energy savings under applications' or virtual machines' specified SLA requirements by exploiting applications' run-time program phases. At the cluster level, we propose a practical solution for managing the mappings of VMs to physical servers. This framework solves the problem of finding the most energy-efficient placement of VMs (least resource wastage and least power consumption) given their resource requirements
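
    As an illustration of the cluster-level VM-to-server mapping idea (this is a generic consolidation heuristic, not CoolCloud's actual placement algorithm), the sketch below packs VMs onto as few servers as possible using first-fit decreasing over made-up CPU demands, so that the remaining servers could be powered down.

```python
from typing import List

# Illustrative first-fit-decreasing VM consolidation (hypothetical demands/capacities,
# expressed as normalized CPU shares): open a new server only when no open one fits.

def place_vms(vm_demands: List[float], server_capacity: float) -> List[List[float]]:
    """Assign VM demands to servers, returning the per-server placement."""
    servers: List[List[float]] = []
    for demand in sorted(vm_demands, reverse=True):      # largest VMs first
        for placed in servers:
            if sum(placed) + demand <= server_capacity:  # fits on an already-open server
                placed.append(demand)
                break
        else:
            servers.append([demand])                     # otherwise open a new server
    return servers

placement = place_vms([0.6, 0.3, 0.5, 0.2, 0.4, 0.1], server_capacity=1.0)
print(f"{len(placement)} active servers:", placement)
```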

    Local time stepping on high performance computing architectures: mitigating CFL bottlenecks for large-scale wave propagation

    Modeling problems that require the simulation of hyperbolic PDEs (wave equations) on large heterogeneous domains have potentially many bottlenecks. We attack this problem through two techniques: the massively parallel capabilities of graphics processors (GPUs) and local time stepping (LTS) to mitigate any CFL bottlenecks on a multiscale mesh. Many modern supercomputing centers are installing GPUs due to their high performance, and extending existing seismic wave-propagation software to use GPUs is vitally important to give application scientists the highest possible performance. In addition to this architectural optimization, LTS schemes avoid performance losses in meshes with localized areas of refinement. Coupled with the GPU performance optimizations, the derivation and implementation of a Newmark LTS scheme enables next-generation performance for real-world applications. This implementation includes work addressing the load-balancing problem inherent to multi-level LTS schemes, enabling scalability to hundreds and thousands of CPUs and GPUs. These GPU, LTS, and scaling optimizations accelerate the performance of existing applications by a factor of 30 or more, and enable future modeling scenarios previously made infeasible by the cost of standard explicit time-stepping schemes
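
    To illustrate how local time stepping sidesteps a single global CFL constraint (this is a generic multi-level binning sketch, not the Newmark LTS scheme derived in the thesis), elements can be grouped into levels that advance with power-of-two multiples of the smallest stable step; the element sizes and wave speeds below are made up.

```python
import math

# Generic multi-level LTS binning sketch: each element gets a CFL-type stable step
# dt_i = C * h_i / c_i, and is assigned to level k if it can advance with roughly
# 2**k times the globally smallest step. All mesh numbers below are hypothetical.

def cfl_dt(h: float, wave_speed: float, courant: float = 0.5) -> float:
    """Largest stable explicit step for one element (CFL-type estimate)."""
    return courant * h / wave_speed

def lts_levels(elements, max_level: int = 4):
    """Bin elements into power-of-two LTS levels relative to the smallest step."""
    steps = [cfl_dt(h, c) for h, c in elements]
    dt_min = min(steps)
    levels = [min(int(math.log2(dt / dt_min)), max_level) for dt in steps]
    return dt_min, levels

# (element size h [m], wave speed c [m/s]); one refined element forces a tiny global dt
mesh = [(10.0, 3000.0), (10.0, 3000.0), (2.0, 3000.0), (40.0, 3000.0), (80.0, 1500.0)]
dt_min, levels = lts_levels(mesh)
print(f"dt_min = {dt_min:.2e} s, levels = {levels}")  # level k advances with 2**k * dt_min
```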

    Parity violation in polarized cold neutron capture

    The longitudinal asymmetry in photons emitted during radiative neutron-proton capture depends cleanly on the neutral-current contribution to the weak nucleon-nucleon interaction. The NPDGamma experiment is an effort to measure this asymmetry with a precision of ten parts per billion, which is 10% of its range of predicted values. In 2006 the NPDGamma collaboration acquired its first production dataset at the Los Alamos Neutron Science Center. A pulsed beam of polarized slow neutrons is incident on a 16 L parahydrogen target; capture photons are observed in current mode in a cylindrical array of CsI scintillators. In this initial experiment, roughly 730 hours of running with 50-55% neutron polarization, we set a new upper limit of 210 parts per billion on the size of the NPDGamma asymmetry, a modest improvement over the existing limit. In the next stage of the experiment this limit will be greatly reduced with the increased neutron flux at the Spallation Neutron Source
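
    For context, the asymmetry bounded above is conventionally defined through the angular distribution of the capture gammas with respect to the neutron spin; the parameterization below states that standard convention as an assumption, not a quotation from the paper.

```latex
% Assumed standard convention: \theta is the angle between the neutron spin and the
% photon momentum, and A_\gamma is the parity-violating asymmetry being constrained.
\frac{d\sigma}{d\Omega} \;\propto\; \frac{1}{4\pi}\left(1 + A_\gamma \cos\theta\right)
```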