Cray X1 Evaluation Status Report
On August 15, 2002, the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials, and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. The Cray X1 is an attempt to incorporate the best aspects of previous Cray vector systems and massively-parallel-processing (MPP) systems into one design. Like the Cray T90, the X1 has high memory bandwidth, which is key to realizing a high percentage of theoretical peak performance. Like the Cray T3E, the X1 has a high-bandwidth, low-latency, scalable interconnect and scalable system software. And, like the Cray SV1, the X1 leverages commodity CMOS technology and incorporates non-traditional vector concepts, such as vector caches and multi-streaming processors. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, the memory subsystem, the scalability of the architecture, and the software environment, and to predict the expected sustained performance on key DOE application codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are obtained on large problems, where the entire problem cannot fit into processor cache. These large problems are exactly the types of problems that are important to the DOE and to ultra-scale simulation.
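Memory-bandwidth claims of this kind are typically probed with STREAM-style micro-benchmarks. As a purely illustrative sketch (not the evaluation code from the report), a triad kernel and the corresponding bandwidth estimate might look like:

```python
import time

def stream_triad(n, scalar=3.0):
    """STREAM-style triad a[i] = b[i] + scalar * c[i]; returns (a, GB/s estimate)."""
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [b[i] + scalar * c[i] for i in range(n)]
    elapsed = time.perf_counter() - start
    # The triad touches three arrays of 8-byte floats: read b, read c, write a.
    bytes_moved = 3 * 8 * n
    return a, bytes_moved / elapsed / 1e9

a, gbps = stream_triad(1_000_000)
print(f"triad bandwidth estimate: {gbps:.2f} GB/s")
```

A real STREAM run would use a compiled, vectorizable language and arrays much larger than the last-level cache, precisely so that the measurement reflects memory bandwidth rather than cache bandwidth, which is the regime the abstract highlights.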
Two-loop corrections to the decay rate of parapositronium
Order alpha^2 corrections to the decay rate of parapositronium are calculated. A QED scattering calculation of the amplitude for electron-positron annihilation into two photons at threshold is combined with the technique of effective field theory to determine an NRQED Hamiltonian, which is then used in a bound-state calculation to determine the decay rate. Our result for the two-loop correction, expressed as a multiple of the lowest-order rate, is consistent with but more precise than the result of a previous calculation. Comment: 26 pages, 7 figures
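For context, the quantity computed here sits inside the standard perturbative expansion of the parapositronium rate about its lowest-order two-photon value. In natural units ($\hbar = c = 1$), the well-known lowest-order rate and one-loop coefficient are (schematically; known $\alpha^{2}\ln\alpha$ terms at the two-loop level are omitted for brevity, and $B$ is the coefficient this kind of calculation determines):

```latex
\Gamma(p\text{-}\mathrm{Ps})
  = \Gamma_{\mathrm{LO}}
    \left[\, 1 + A\,\frac{\alpha}{\pi}
             + B\left(\frac{\alpha}{\pi}\right)^{2}
             + \cdots \right],
\qquad
\Gamma_{\mathrm{LO}} = \frac{m_e\,\alpha^{5}}{2},
\qquad
A = \frac{\pi^{2}}{4} - 5 .
```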
The DIANA underground accelerator facility project at DUSEL laboratory
The DIANA project (Dakota Ion Accelerators for Nuclear Astrophysics) is a collaboration between the University of Notre Dame, Colorado School of Mines, Regis University, University of North Carolina, Western Michigan University, and Lawrence Berkeley National Laboratory to build a next-generation nuclear astrophysics accelerator facility deep underground. The DIANA accelerator facility is being designed to achieve large laboratory reaction rates by delivering high ion beam currents (up to 100 mA) to a high-density (up to 10^18 atoms/cm^2) supersonic jet gas target. The accelerator developments of the DIANA facility are presented here.
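The quoted beam and target figures translate into laboratory reaction rates through the standard thin-target relation R = (I / qe) x n_t x sigma. A back-of-the-envelope sketch, using a purely illustrative cross-section (the 1 microbarn value below is an assumption, not a number from the project):

```python
# Thin-target reaction rate: R = (beam particles/s) * areal density * cross-section.
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def reaction_rate(current_a, charge_state, areal_density_cm2, sigma_barn):
    particles_per_s = current_a / (charge_state * E_CHARGE)
    sigma_cm2 = sigma_barn * 1e-24  # 1 barn = 1e-24 cm^2
    return particles_per_s * areal_density_cm2 * sigma_cm2

# DIANA-scale numbers: 100 mA of singly charged ions on 1e18 atoms/cm^2,
# with a hypothetical 1 microbarn cross-section for illustration only.
rate = reaction_rate(0.1, 1, 1e18, 1e-6)
print(f"{rate:.3e} reactions/s")
```

Even with a sub-microbarn cross-section, rates of this order are what make deep-underground (low-background) counting of rare astrophysical reactions feasible.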
New filovirus disease classification and nomenclature
Filoviruses, the members of the family Filoviridae, are currently classified into one proposed and five established genera (Supplementary Table 1). Of the twelve described filoviruses, six have been identified as aetiological agents of naturally occurring human disease outbreaks.
Managing Performance Analysis with Dynamic Statistical Projection Pursuit
Computer systems and applications are growing more complex. Consequently, performance analysis has become more difficult due to the complex, transient interrelationships among runtime components. To diagnose these types of performance issues, developers must use detailed instrumentation to capture a large number of performance metrics. Unfortunately, this instrumentation may itself perturb the system under study, leading the developer to ambiguous conclusions. In this paper, we introduce a technique for focusing a performance analysis on interesting performance metrics. This technique, called dynamic statistical projection pursuit, identifies the interesting performance metrics that the monitoring system should capture across some number of processors. By reducing the number of captured performance metrics, projection pursuit can limit the impact of instrumentation on the performance of the target system and can reduce the volume of performance data.
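The core idea of projection pursuit is to search for low-dimensional projections of the metric data that look "interesting", typically meaning least Gaussian, and then rank metrics by how strongly they load on that projection. A toy sketch under assumed conditions (random-search pursuit, absolute excess kurtosis as the index; not the paper's actual dynamic algorithm):

```python
import math
import random

def kurtosis_index(xs):
    """Projection-pursuit index: absolute excess kurtosis.
    A Gaussian projection scores near 0; structured ones score higher."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return abs(m4 / (var ** 2) - 3.0)

def pursue(samples, n_metrics, trials=500, seed=0):
    """Random-search pursuit: find the unit direction whose 1-D
    projection of the metric vectors deviates most from Gaussian."""
    rng = random.Random(seed)
    best_w, best_score = None, -1.0
    for _ in range(trials):
        w = [rng.gauss(0, 1) for _ in range(n_metrics)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        w = [wi / norm for wi in w]
        proj = [sum(x * wi for x, wi in zip(s, w)) for s in samples]
        score = kurtosis_index(proj)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

# Toy data: metric 0 is bimodal (an "interesting" transient effect);
# metrics 1-3 are plain Gaussian noise.
rng = random.Random(42)
samples = [[rng.choice([-2.0, 2.0]) + rng.gauss(0, 0.2)] +
           [rng.gauss(0, 1) for _ in range(3)] for _ in range(400)]
w, score = pursue(samples, 4)
ranked = sorted(range(4), key=lambda i: -abs(w[i]))
print("most interesting metric:", ranked[0])
```

A monitoring system could then instrument only the top-ranked metrics, which is exactly the instrumentation-reduction argument the abstract makes.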
Local Discovery of System Architecture - Application Parameter Sensitivity: An Empirical Technique for Adaptive Grid Applications
This study presents a technique that can significantly improve the performance of a distributed application by allowing the application to adapt locally to the architectural characteristics of distinct resources in a distributed system. Application performance is sensitive to system architecture-application parameter pairings. In a distributed or Grid-enabled application, a single parameter configuration for the whole application will not always be optimal for every participating resource; in particular, some configurations can significantly degrade performance. Furthermore, the behavior of a system may change during the course of a run. The technique described here provides an automated mechanism for run-time adaptation of application parameters to the local system architecture. Using a scaled-down simulation of a Monte Carlo physics code, we demonstrate that this technique can conservatively achieve speedups of up to 65% on individual resources and may even provide an order-of-magnitude speedup in extreme cases.
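Run-time adaptation of this kind can be pictured as a measure-and-select loop: each resource probes a few candidate parameter values, times them locally, and keeps the fastest. A minimal sketch, assuming a hypothetical cost model rather than the paper's Monte Carlo code:

```python
# Minimal sketch (not the paper's implementation) of local run-time
# parameter adaptation: probe candidate settings, time them on this
# resource, and keep whichever setting runs fastest.

def adapt(run_step, candidates, probes_per_candidate=3):
    """Pick the candidate parameter value with the lowest measured cost.
    run_step(value) -> elapsed cost for one unit of work at that setting."""
    best_value, best_cost = None, float("inf")
    for value in candidates:
        cost = sum(run_step(value) for _ in range(probes_per_candidate))
        if cost < best_cost:
            best_value, best_cost = value, cost
    return best_value

# Hypothetical local cost model: on this resource a chunk size of 64
# minimises per-item time (e.g., it happens to fit the cache hierarchy).
def simulated_step(chunk_size):
    return abs(chunk_size - 64) / 64 + 1.0

best = adapt(simulated_step, [16, 32, 64, 128, 256])
print("locally optimal chunk size:", best)  # -> 64
```

Because each resource runs the loop against its own measurements, two resources in the same Grid run can legitimately settle on different configurations, which is the paper's central point.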
Performance of RDMA-capable storage protocols on wide-area network
Because of its high throughput, low CPU utilization, and direct data placement, RDMA (Remote Direct Memory Access) has been adopted as the transport in a number of storage protocols, such as NFS and iSCSI. In this presentation, we provide a performance evaluation of RDMA-based NFS and iSCSI on a Wide-Area Network (WAN). We show that these protocols, though they benefit from RDMA on a Local Area Network (LAN) and on short-distance WANs, face a number of challenges in achieving good performance on a long-distance WAN. This is because of (a) the low performance of RDMA reads on the WAN, (b) the small 4 KB chunks used in NFS over RDMA, and (c) the lack of RDMA capability for handling discontinuous data. Our experimental results document the performance behavior of these RDMA-based storage protocols on the WAN.
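The chunk-size problem can be illustrated with a simple request-response throughput model (an assumed model for illustration, not measurements from the presentation): if each chunk pays a full round trip plus its serialization time, small chunks leave a long, fat pipe mostly idle.

```python
def throughput_mbps(chunk_bytes, rtt_s, link_gbps):
    """Naive one-chunk-at-a-time model: every transfer pays one full RTT
    plus the chunk's serialization time on the link."""
    serialize_s = chunk_bytes * 8 / (link_gbps * 1e9)
    return chunk_bytes * 8 / (rtt_s + serialize_s) / 1e6

# 10 Gb/s link with an assumed 40 ms cross-country RTT.
small = throughput_mbps(4 * 1024, 0.040, 10)     # 4 KB chunks, as in NFS/RDMA
large = throughput_mbps(1024 * 1024, 0.040, 10)  # 1 MB chunks
print(f"4 KB: {small:.2f} Mb/s, 1 MB: {large:.1f} Mb/s")
```

Under this model the 4 KB chunks achieve well under 1% of the large-chunk throughput at WAN latencies, which is consistent with the abstract's claim that per-chunk round trips, not link bandwidth, dominate on a long-distance WAN (real protocols pipeline multiple requests, so this is a worst-case bound).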