Preparing sparse solvers for exascale computing.
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing Project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices, where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current successes and upcoming challenges. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
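The kernels at the heart of such solvers can be caricatured with a minimal sketch. The CSR (compressed sparse row) matrix-vector product below is a hypothetical, simplified illustration of the kind of kernel whose row-level concurrency must be exposed on exascale nodes; it is not code from the Exascale Computing Project, and all names are assumptions.

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x, with A in CSR format.

    Each output row is independent, so the outer loop is the natural
    unit of parallelism (e.g. one row block per thread or GPU block).
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):  # parallelisable across rows
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example: A = [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
values = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.ones(3)
print(csr_spmv(values, col_idx, row_ptr, x))  # [5. 3. 7.]
```

In production solvers this loop is where latency hiding and alternative orderings come into play, since irregular column indices make memory access the bottleneck.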
Progress in the Next Linear Collider Design
An electron/positron linear collider with a center-of-mass energy between 0.5
and 1 TeV would be an important complement to the physics program of the LHC in
the next decade. The Next Linear Collider (NLC) is being designed by a US
collaboration (FNAL, LBNL, LLNL, and SLAC) which is working closely with the
Japanese collaboration that is designing the Japanese Linear Collider (JLC).
The NLC main linacs are based on normal-conducting 11 GHz rf. This paper will
discuss the technical difficulties encountered as well as the many changes that
have been made to the NLC design over the last year. These changes include
improvements to the X-band rf system as well as modifications to the injector
and the beam delivery system. They are based on new conceptual solutions as
well as results from the R&D programs which have exceeded initial
specifications. The net effect has been to reduce the length of the collider
from about 32 km to 25 km and to reduce the number of klystrons and modulators
by a factor of two. Together, these lead to significant cost savings.
On Designing Multicore-aware Simulators for Biological Systems
The stochastic simulation of biological systems is an increasingly popular
technique in bioinformatics. It is often an enlightening technique, but it can
be computationally expensive. We discuss the main opportunities to speed it up
on multi-core platforms, which pose new challenges for parallelisation
techniques. These opportunities are developed in two general families of
solutions, involving both a single simulation and a bulk of independent
simulations (either replicas or instances derived from a parameter sweep). The
proposed solutions are tested on the parallelisation of the CWC (Calculus of
Wrapped Compartments) simulator, carried out by way of the FastFlow programming
framework, which makes possible fast development and efficient execution on
multi-cores.
Comment: 19 pages + cover pag
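The "bulk of independent simulations" family can be sketched in a few lines. The actual CWC simulator is parallelised with FastFlow in C++; the toy replica runner below is only an illustration of why independent stochastic replicas are embarrassingly parallel, and every name in it is hypothetical.

```python
import random
from multiprocessing import Pool

def simulate_replica(seed, n_steps=1000):
    """Toy stochastic simulation: a birth-death random walk.

    Stands in for one CWC-style run; each replica needs only its own
    RNG seed, so replicas can be farmed out with no communication.
    """
    rng = random.Random(seed)
    population = 100
    for _ in range(n_steps):
        # birth or death with equal probability at each step
        population += 1 if rng.random() < 0.5 else -1
        if population <= 0:
            break
    return population

def run_bulk(n_replicas, workers=4):
    """Farm independent replicas across worker processes."""
    with Pool(workers) as pool:
        return pool.map(simulate_replica, range(n_replicas))

if __name__ == "__main__":
    results = run_bulk(8)
    print(len(results), sum(results) / len(results))
```

Seeding each replica independently keeps runs reproducible while letting the pool schedule them on any core, which is the same design choice a parameter sweep exploits.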
Prototype of Fault Adaptive Embedded Software for Large-Scale Real-Time Systems
This paper describes a comprehensive prototype of large-scale fault adaptive
embedded software developed for the proposed Fermilab BTeV high energy physics
experiment. Lightweight self-optimizing agents embedded within Level 1 of the
prototype are responsible for proactive and reactive monitoring and mitigation
based on specified layers of competence. The agents are self-protecting,
detecting cascading failures using a distributed approach. Adaptive,
reconfigurable, and mobile objects for reliability are designed to be
self-configuring to adapt automatically to dynamically changing environments.
These objects provide a self-healing layer with the ability to discover,
diagnose, and react to discontinuities in real-time processing. A generic
modeling environment was developed to facilitate design and implementation of
hardware resource specifications, application data flow, and failure mitigation
strategies. Level 1 of the planned BTeV trigger system alone will consist of
2500 DSPs, so the number of components and intractable fault scenarios involved
make it impossible to design an `expert system' that applies traditional
centralized mitigative strategies based on rules capturing every possible
system state. Instead, a distributed reactive approach is implemented using the
tools and methodologies developed by the Real-Time Embedded Systems group.
Comment: 2nd Workshop on Engineering of Autonomic Systems (EASe), in the 12th
Annual IEEE International Conference and Workshop on the Engineering of
Computer Based Systems (ECBS), Washington, DC, April, 200
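The distributed reactive approach described above can be caricatured with a heartbeat-watchdog sketch. The code below is a hypothetical illustration of local monitoring and mitigation without a central expert system; it is not the BTeV prototype's agent code, and all class and variable names are assumptions.

```python
import time

class NodeAgent:
    """Lightweight agent guarding one DSP-like node.

    Each agent mitigates locally (modelled as a restart) instead of
    consulting a central rule base, mirroring the idea that no expert
    system can enumerate every fault state of thousands of components.
    """
    def __init__(self, name, timeout=1.0):
        self.name = name
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()
        self.restarts = 0

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        """Return True if healthy; otherwise mitigate by restarting."""
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > self.timeout:
            self.restarts += 1          # reactive, local mitigation
            self.last_heartbeat = now   # model the restart as recovery
            return False
        return True

# A small farm of agents, checked peer-to-peer rather than centrally.
agents = [NodeAgent(f"dsp-{i}") for i in range(3)]
agents[1].last_heartbeat -= 5.0  # simulate a node gone silent
healthy = [a.check() for a in agents]
print(healthy)  # [True, False, True]
```

Because each `check` uses only local state, failures in one agent cannot cascade through a shared rule engine, which is the point of the distributed design.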
Beam Cleaning and Collimation Systems
Collimation systems in particle accelerators are designed to dispose of
unavoidable losses safely and efficiently during beam operation. Different
roles are required for different types of accelerator. The present state of the
art in beam collimation is exemplified in high-intensity, high-energy
superconducting hadron colliders, like the CERN Large Hadron Collider (LHC),
where stored beam energies reach levels up to several orders of magnitude
higher than the tiny energies required to quench cold magnets. Collimation
systems are essential systems for the daily operation of these modern machines.
In this document, the design of a multistage collimation system is reviewed,
taking the LHC as an example case study. In this case, unprecedented cleaning
performance has been achieved, together with a system complexity unmatched by
any other accelerator. Aspects related to collimator design and operational
challenges of large collimation systems are also addressed.
Comment: 35 pages, contribution to the 2014 Joint International Accelerator
School: Beam Loss and Accelerator Protection, Newport Beach, CA, USA, 5-14
Nov 201
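The energy-scale gap mentioned above can be made concrete with a back-of-envelope calculation. The figures below are nominal LHC design values assumed here for illustration (2808 bunches of 1.15e11 protons per beam at 7 TeV), not numbers taken from this document.

```python
# Back-of-envelope stored-beam-energy estimate, assuming nominal
# LHC design parameters (assumed values, for illustration only).
EV_TO_J = 1.602e-19               # electron-volt to joule

bunches = 2808
protons_per_bunch = 1.15e11
energy_per_proton_ev = 7e12       # 7 TeV per proton

stored_energy_j = bunches * protons_per_bunch * energy_per_proton_ev * EV_TO_J
print(f"stored energy per beam: {stored_energy_j / 1e6:.0f} MJ")

# A local deposit of only ~10 mJ can quench a cold magnet, so the
# stored energy exceeds the damage threshold by ~10 orders of magnitude.
print(f"ratio to a 10 mJ quench-scale deposit: {stored_energy_j / 1e-2:.1e}")
```

The result, on the order of a few hundred megajoules per beam, is why multistage collimation with extremely high cleaning efficiency is mandatory rather than optional.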