A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. Finally, we hope that the proposed taxonomy and
mapping also helps to provide an easy way for new practitioners to understand
this complex area of research. Comment: 46 pages, 16 figures, Technical Report
ASCR/HEP Exascale Requirements Review Report
This draft report summarizes and details the findings, results, and
recommendations derived from the ASCR/HEP Exascale Requirements Review meeting
held in June 2015. The main conclusions are as follows. 1) Larger, more
capable computing and data facilities are needed to support HEP science goals
in all three frontiers: Energy, Intensity, and Cosmic. The expected demand on
the 2025 timescale is at least two orders of magnitude greater -- and in
some cases more -- than what is currently available. 2) The growth rate of data
produced by simulations is overwhelming the current ability of both facilities
and researchers to store and analyze it. Additional resources and new
techniques for data analysis are urgently needed. 3) Data rates and volumes
from HEP experimental facilities are likewise straining the ability to store
and analyze such large and complex datasets. Appropriately configured
leadership-class facilities can play a transformational role in enabling
scientific discovery from these datasets. 4) A close integration of HPC
simulation and data analysis will aid greatly in interpreting results from HEP
experiments. Such an integration will minimize data movement and facilitate
interdependent workflows. 5) Long-range planning between HEP and ASCR will be
required to meet HEP's research needs. To make the best use of ASCR HPC
resources, the experimental HEP program needs: a) an established long-term plan
for access to ASCR computational and data resources; b) the ability to map
workflows onto HPC resources; c) the ability of ASCR facilities to accommodate
workflows run by collaborations that can have thousands of individual members;
d) a path for transitioning codes to the next-generation HPC platforms that
will be available at ASCR facilities; and e) a workforce built up and trained
to develop and use simulations and analysis in support of HEP scientific
research on next-generation systems. Comment: 77 pages, 13 figures; draft
report, subject to further revision
Digital Preservation Services : State of the Art Analysis
Research report funded by the DC-NET project. An overview of the state of the art in service provision for digital preservation and curation. Its focus is on the areas where gaps need to be bridged between e-Infrastructures and efficient, forward-looking digital preservation services. Based on a desktop study and a rapid analysis of some 190 currently available tools and services for digital preservation, the deliverable provides a high-level view of the range of instruments currently on offer to support various functions within a preservation system. European Commission, FP7. Peer-reviewed
Smart grid architecture for rural distribution networks: application to a Spanish pilot network
This paper presents a novel architecture for rural distribution grids, designed to modernize traditional rural networks into Smart Grids. The architecture tackles innovation actions on both the power plane and the management plane of the system. In the power plane, it focuses on exploiting the synergies between telecommunications and innovative technologies based on power electronics that manage small-scale electrical storage. In the management plane, a decentralized management system is proposed, based on the addition of two new agents assisting the typical Supervisory Control And Data Acquisition (SCADA) system of distribution system operators. Altogether, the proposed architecture enables operators to use weak rural distribution systems more effectively, in an automated and decentralized way, increasing their capability to integrate new distributed energy resources. The architecture is being implemented in a real pilot network located in Spain, in the frame of the European Smart Rural Grid project. The paper also includes a case study showing the potential of one of the principal technologies developed in the project, which underpins the realization of the new architecture: the so-called Intelligent Distribution Power Router. Postprint (published version)
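The abstract stays at the architecture level, but the management-plane idea, local agents making automated decisions alongside SCADA, can be sketched. The minimal Python sketch below is hypothetical: the agent name, voltage thresholds, and setpoints are invented for illustration and are not taken from the Smart Rural Grid design.

```python
class LocalGridAgent:
    """Hypothetical management-plane agent dispatching local storage."""

    def __init__(self, v_low=0.95, v_high=1.05):
        self.v_low, self.v_high = v_low, v_high   # per-unit voltage band

    def dispatch(self, v_pu, soc):
        """Return a storage setpoint in kW (+ = charge) from local telemetry."""
        if v_pu > self.v_high and soc < 0.9:
            return 10.0        # absorb excess distributed generation
        if v_pu < self.v_low and soc > 0.1:
            return -10.0       # discharge to support the weak rural feeder
        return 0.0             # within band: defer to SCADA / upstream control

agent = LocalGridAgent()
print(agent.dispatch(v_pu=1.07, soc=0.5))         # -> 10.0 (charge storage)
```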
Plasma Edge Kinetic-MHD Modeling in Tokamaks Using Kepler Workflow for Code Coupling, Data Management and Visualization
A new predictive computer simulation tool targeting the development of the H-mode pedestal at the plasma edge in tokamaks and the triggering and dynamics of edge localized modes (ELMs) is presented in this report. This tool brings together, in a coordinated and effective manner, several first-principles physics simulation codes, stability analysis packages, and data processing and visualization tools. A Kepler workflow is used to carry out an edge plasma simulation that loosely couples the kinetic code, XGC0, with an ideal MHD linear stability analysis code, ELITE, and an extended MHD initial value code such as M3D or NIMROD. XGC0 includes the neoclassical ion-electron-neutral dynamics needed to simulate pedestal growth near the separatrix. The Kepler workflow processes the XGC0 simulation results into simple images that can be selected and displayed via the Dashboard, a monitoring tool implemented in AJAX allowing the scientist to track computational resources, examine running and archived jobs, and view key physics data, all within a standard Web browser. The XGC0 simulation is monitored for the conditions needed to trigger an ELM crash by periodically assessing the edge plasma pressure and current density profiles using the ELITE code. If an ELM crash is triggered, the Kepler workflow launches the M3D code on a moderate-size Opteron cluster to simulate the nonlinear ELM crash and to compute the relaxation of plasma profiles after the crash. This process is monitored through periodic outputs of plasma fluid quantities that are automatically visualized with AVS/Express and may be displayed on the Dashboard. Finally, the Kepler workflow archives all data outputs and processed images using HPSS, as well as provenance information about the software and hardware used to create the simulation. The complete process of preparing, executing and monitoring a coupled-code simulation of the edge pressure pedestal buildup and the ELM cycle using the Kepler scientific workflow system is described in this paper.
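To make the coupling pattern concrete: the workflow advances the kinetic code, periodically assesses edge stability, and hands off to the MHD code when an ELM is triggered. The Python sketch below is a hypothetical stand-in for that control loop; the stub functions only mimic the roles of XGC0, ELITE, and M3D, which the real Kepler workflow launches as external jobs.

```python
import random

def advance_xgc0(state):
    """Stand-in for one XGC0 kinetic step: the edge gradient steepens."""
    state["pressure_gradient"] += random.uniform(0.0, 0.1)
    return state

def elite_unstable(state):
    """Stand-in for an ELITE linear stability check of the edge profiles."""
    return state["pressure_gradient"] > 2.0

def run_m3d_crash(state):
    """Stand-in for the nonlinear M3D ELM crash: profiles relax."""
    state["pressure_gradient"] *= 0.3
    return state

state = {"pressure_gradient": 0.0}
for step in range(1, 1001):
    state = advance_xgc0(state)
    if step % 10 == 0 and elite_unstable(state):  # periodic stability check
        state = run_m3d_crash(state)              # trigger ELM crash handling
```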
Strong scaling of general-purpose molecular dynamics simulations on GPUs
We describe a highly optimized implementation of MPI domain decomposition in
a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson
and Glotzer, arXiv:1308.5587). Our approach is inspired by a traditional
CPU-based code, LAMMPS (Plimpton, J. Comp. Phys. 117, 1995), but is implemented
within a code that was designed for execution on GPUs from the start (Anderson
et al., J. Comp. Phys. 227, 2008). The software supports short-ranged pair
force and bond force fields and achieves optimal GPU performance using an
autotuning algorithm. We are able to demonstrate equivalent or superior scaling
on up to 3,375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD)
simulations of up to 108 million particles. GPUDirect RDMA capabilities in
recent GPU generations provide better performance in full double precision
calculations. For a representative polymer physics application, HOOMD-blue 1.0
provides an effective GPU vs. CPU node speed-up of 12.5x. Comment: 30 pages, 14 figures
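The core technique, spatial domain decomposition, can be illustrated independently of GPUs or MPI. The toy NumPy sketch below (not HOOMD-blue code) maps particle positions in a periodic cubic box onto ranks of a 3x3x3 processor grid; in the real code, each rank owns only the particles in its sub-domain and exchanges ghost particles with its neighbors.

```python
import numpy as np

def rank_of(positions, box, grid):
    """Map particle positions in a periodic cubic box of side `box` to rank
    indices on a (px, py, pz) processor grid."""
    frac = (positions / box) % 1.0                       # fractional coords in [0, 1)
    cell = np.floor(frac * np.array(grid)).astype(int)   # sub-domain index per axis
    px, py, pz = grid
    return (cell[:, 0] * py + cell[:, 1]) * pz + cell[:, 2]

box, grid = 30.0, (3, 3, 3)                              # e.g. 27 ranks
pos = np.random.default_rng(1).uniform(0.0, box, size=(10_000, 3))
owners = rank_of(pos, box, grid)
print(np.bincount(owners))                               # particle count per sub-domain
```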
Beam Dynamics in High Intensity Cyclotrons Including Neighboring Bunch Effects: Model, Implementation and Application
Space charge, one of the most significant collective effects, plays an
important role in high-intensity cyclotrons. However, for cyclotrons
with small turn separation, other effects are of equal importance.
Interactions of radially neighboring bunches are also present, but their
combined effects have not yet been investigated in any great detail. In this
paper, a new particle-in-cell-based, self-consistent numerical simulation model
is presented for the first time. The model covers neighboring-bunch effects and
is implemented in the three-dimensional object-oriented parallel code
OPAL-cycl, a flavor of the OPAL framework. We discuss this model together with
its implementation and validation. Simulation results are presented for the
PSI 590 MeV Ring Cyclotron in the context of the ongoing high-intensity upgrade
program, which aims to provide a beam power of 1.8 MW (CW) at the target
destination.
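The abstract does not spell out the numerics, but the heart of any particle-in-cell space-charge model is depositing particle charge onto a grid before solving for the self-fields. As background only (an illustration of the general technique, not OPAL-cycl source), here is a toy 1-D cloud-in-cell deposition in NumPy:

```python
import numpy as np

def deposit_cic(x, q, n_cells, dx):
    """Spread each particle's charge q onto its two nearest grid points
    (periodic boundaries), returning the charge density on the grid."""
    rho = np.zeros(n_cells)
    s = x / dx                          # particle position in cell units
    i = np.floor(s).astype(int)         # index of the left grid point
    w = s - i                           # linear weight toward the right point
    np.add.at(rho, i % n_cells, q * (1.0 - w))
    np.add.at(rho, (i + 1) % n_cells, q * w)
    return rho / dx                     # convert deposited charge to density

x = np.random.default_rng(0).uniform(0.0, 1.0, 100_000)  # particle positions
rho = deposit_cic(x, q=1e-12, n_cells=64, dx=1.0 / 64)
```

A Poisson solve on rho and a gather of the resulting fields back to the particles would complete one self-consistent step.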
High-Performance Integrated Window and Façade Solutions for California
The researchers developed a new generation of high-performance façade systems and supporting design and management tools to support industry in meeting California’s greenhouse gas reduction targets, reduce energy consumption, and enable an adaptable response to minimize real-time demands on the electricity grid. The project resulted in five outcomes: (1) The research team developed an R-5, 1-inch-thick, triple-pane insulating glass unit with a novel low-conductance aluminum frame. This technology can help significantly reduce residential cooling and heating loads, particularly during the evening. (2) The team developed a prototype of a window-integrated local ventilation and energy recovery device that provides clean, dry fresh air through the façade with minimal energy requirements. (3) A daylight-redirecting louver system was prototyped to redirect sunlight 15–40 feet from the window. Simulations estimated that lighting energy use could be reduced by 35–54 percent without glare. (4) A control system incorporating physics-based equations and a mathematical solver was prototyped and field-tested to demonstrate feasibility. Simulations estimated that total electricity costs could be reduced by 9–28 percent on sunny summer days through adaptive control of operable shading and daylighting components and the thermostat, compared to state-of-the-art automatic façade controls in commercial building perimeter zones. (5) Supporting models and tools needed by industry for technology R&D and market transformation activities were validated. Attaining California’s clean energy goals requires making a fundamental shift from today’s ad-hoc assemblages of static components to turnkey, intelligent, responsive, integrated building façade systems. These systems offer significant reductions in energy use, peak demand, and operating cost in California.
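As a loose illustration of outcome (4), adaptive façade control can be cast as minimizing an operating-cost model over candidate shade positions. The sketch below is hypothetical: the cost coefficients are invented placeholders standing in for the project’s physics-based equations and solver.

```python
def total_cost(shade, solar_kw, elec_price):
    """Hourly operating cost ($) for a shade fraction (0 = open, 1 = closed).
    Coefficients are invented placeholders, not the project's physics model."""
    solar_gain = solar_kw * (1.0 - shade)      # heat admitted through glazing
    cooling = 0.3 * solar_gain * elec_price    # electricity to remove that heat
    lighting = 0.05 * shade * elec_price       # electric light replacing daylight
    return cooling + lighting

def best_shade(solar_kw, elec_price, levels=11):
    """Pick the discrete shade position with the lowest modeled cost."""
    candidates = [i / (levels - 1) for i in range(levels)]
    return min(candidates, key=lambda s: total_cost(s, solar_kw, elec_price))

print(best_shade(solar_kw=2.5, elec_price=0.30))  # sunny, expensive hour -> 1.0
```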