ASCR/HEP Exascale Requirements Review Report
This draft report summarizes and details the findings, results, and
recommendations derived from the ASCR/HEP Exascale Requirements Review meeting
held in June 2015. The main conclusions are as follows. 1) Larger, more
capable computing and data facilities are needed to support HEP science goals
in all three frontiers: Energy, Intensity, and Cosmic. The expected demand on
the 2025 timescale is at least two orders of magnitude greater than what is
currently available, and in some cases more. 2) The growth rate of data
produced by simulations is overwhelming the ability of both facilities and
researchers to store and analyze it. Additional resources and new
techniques for data analysis are urgently needed. 3) Data rates and volumes
from HEP experimental facilities are also straining the capacity to store and
analyze such large, complex datasets. Appropriately configured
leadership-class facilities can play a transformational role in enabling
scientific discovery from these datasets. 4) A close integration of HPC
simulation and data analysis will aid greatly in interpreting results from HEP
experiments. Such an integration will minimize data movement and facilitate
interdependent workflows. 5) Long-range planning between HEP and ASCR will be
required to meet HEP's research needs. To make best use of ASCR HPC resources,
the experimental HEP program needs a) an established long-term plan for access
to ASCR computational and data resources, b) the ability to map workflows onto
HPC resources, c) ASCR facilities able to accommodate workflows run by
collaborations with thousands of individual members, d) a path to transition
codes to the next-generation HPC platforms that will be available at ASCR
facilities, and e) a workforce trained to develop and use simulations and
analysis supporting HEP scientific research on next-generation systems.
Comment: 77 pages, 13 figures; draft report, subject to further revision
Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline
A fast-turnaround pipeline for realtime data reduction plays an essential
role in discovering young supernovae and fast-evolving transients in modern
time-domain surveys and in enabling their follow-up observation. In this paper, we
present the realtime image subtraction pipeline in the intermediate Palomar
Transient Factory. By combining high-performance computing, efficient
databases, and machine-learning algorithms, this pipeline reliably delivers
transient candidates within ten minutes of images being taken. Our experience
in using high-performance computing resources to process big data in astronomy
serves as a trailblazer for handling data from large-scale time-domain
facilities in the near future.
Comment: 18 pages, 6 figures, accepted for publication in PAS
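The core of any realtime transient search is difference imaging: subtract an archival reference from each new science frame and flag pixels that deviate significantly. The sketch below is a deliberately minimal illustration of that idea in Python with NumPy, not the actual iPTF pipeline (which additionally matches PSFs and photometric scales, and vets candidates with machine learning); the function name and threshold are illustrative assumptions.

```python
import numpy as np

def difference_image(science, reference, sigma_thresh=5.0):
    """Subtract an aligned reference image from a science image and
    flag pixels deviating by more than sigma_thresh standard deviations.
    Toy sketch: real pipelines also match PSFs and flux scales first."""
    diff = science - reference
    noise = np.std(diff)  # crude global noise estimate
    candidates = np.argwhere(diff > sigma_thresh * noise)
    return diff, candidates

# Toy example: a flat reference plus one injected bright new source.
rng = np.random.default_rng(0)
ref = rng.normal(100.0, 1.0, size=(64, 64))
sci = ref + rng.normal(0.0, 1.0, size=(64, 64))
sci[32, 32] += 50.0  # injected "transient"
diff, cand = difference_image(sci, ref)
print(cand)  # the injected source at (32, 32) should be flagged
```

In practice the reference must first be astrometrically registered and convolved to the science image's PSF; the global noise estimate here would be replaced by a local, per-pixel noise model.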
Petascale computations for Large-scale Atomic and Molecular collisions
Petaflop architectures are currently being used efficiently to perform
large-scale computations in Atomic, Molecular, and Optical collisions. We solve
the Schroedinger or Dirac equation for the appropriate collision problem using
the R-matrix or R-matrix with pseudo-states approach. We briefly outline the
parallel methodology used and implemented for the current suite of Breit-Pauli
and DARC codes. Various examples are shown of our theoretical results compared
with those obtained from Synchrotron Radiation facilities and from Satellite
observations. We also indicate future directions and implementation of the
R-matrix codes on emerging GPU architectures.
Comment: 14 pages, 5 figures, 3 tables, Chapter in: Workshop on Sustained
Simulated Performance 2013, published by Springer, 2014, edited by Michael
Resch, Yevgeniya Kovalenko, Eric Focht, Wolfgang Bez and Hiroaki Kobayashi
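For context, the R-matrix method partitions configuration space at a boundary radius $r = a$: inside, the full many-electron problem is diagonalized once; outside, only long-range potentials act. The standard textbook form of the R-matrix (not specific to the Breit-Pauli or DARC codes above) connects the two regions through surface amplitudes $w_{ik}$ and pole energies $E_k$:

```latex
% R-matrix linking internal eigenstates to external-region solutions
% at the boundary r = a, for scattering energy E:
R_{ij}(E) = \frac{1}{2a} \sum_{k} \frac{w_{ik}(a)\, w_{jk}(a)}{E_k - E}
```

Because the sum over poles is independent per scattering energy $E$, energies can be distributed trivially across nodes, which is one reason the approach maps well onto petascale machines.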