Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS
GROMACS is a widely used package for biomolecular simulation, and over the
last two decades it has evolved from small-scale efficiency to advanced
heterogeneous acceleration and multi-level parallelism targeting some of the
largest supercomputers in the world. Here, we describe some of the ways we have
been able to realize this through the use of parallelization on all levels,
combined with a constant focus on absolute performance. Release 4.6 of GROMACS
uses SIMD acceleration on a wide range of architectures, GPU offloading
acceleration, and both OpenMP and MPI parallelism within and between nodes,
respectively. The recent work on acceleration made it necessary to revisit the
fundamental algorithms of molecular simulation, including the concept of
neighbor searching, and we discuss the present and future challenges we see for
exascale simulation - in particular the need for very fine-grained task parallelism. We
also discuss the software management, code peer review and continuous
integration testing required for a project of this complexity. Comment: EASC 2014 conference proceedings
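The neighbor-search rethinking mentioned above replaced the classical particle-pair lists with cluster-based lists that map well onto SIMD and GPU hardware. As a rough illustration of the underlying cell-list idea only (not GROMACS's actual implementation), a minimal periodic cell-list neighbor search in Python might look like this; the box size, particle count, and cutoff below are illustrative assumptions.

    # Minimal sketch of cell-list neighbor searching in a periodic cubic box.
    # Illustrative only; GROMACS 4.6+ groups particles into SIMD-sized clusters.
    import itertools
    import numpy as np

    def build_neighbor_list(positions, box, cutoff):
        """Return the set of pairs (i, j), i < j, closer than cutoff."""
        ncell = max(1, int(box // cutoff))   # cells per dimension; edge >= cutoff
        cell_size = box / ncell
        cells = {}                           # (cx, cy, cz) -> particle indices
        for i, r in enumerate(positions):
            key = tuple((r // cell_size).astype(int) % ncell)
            cells.setdefault(key, []).append(i)

        pairs = set()                        # set dedupes wrap-around revisits
        for (cx, cy, cz), members in cells.items():
            # Scan this cell and its 26 periodic neighbors.
            for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
                nbr = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                for i in members:
                    for j in cells.get(nbr, []):
                        if i < j:
                            d = positions[i] - positions[j]
                            d -= box * np.round(d / box)  # minimum image
                            if d @ d < cutoff * cutoff:
                                pairs.add((i, j))
        return pairs

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 5.0, size=(200, 3))   # 200 particles, 5x5x5 box
    print(len(build_neighbor_list(pos, box=5.0, cutoff=1.2)), "pairs in cutoff")

The cell grid reduces the search from O(N^2) to roughly O(N) by only testing particles in adjacent cells, which is the property the production cluster-pair schemes also exploit.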
High Lundquist Number Simulations of Parker's Model of Coronal Heating: Scaling and Current Sheet Statistics Using Heterogeneous Computing Architectures
Parker's model [Parker, Astrophys. J., 174, 499 (1972)] is one of the most discussed mechanisms for coronal heating and has generated much debate. We have recently obtained new scaling results for a 2D version of this problem suggesting that the heating rate becomes independent of resistivity in a statistical steady state [Ng and Bhattacharjee, Astrophys. J., 675, 899 (2008)]. Our numerical work has now been extended to 3D using high-resolution MHD simulations. Random photospheric footpoint motion is applied for a time much longer than the correlation time of the motion to obtain converged average coronal heating rates. Simulations are done for different values of the Lundquist number to determine scaling. In the high-Lundquist-number limit (S > 1000), the coronal heating rate obtained is consistent with a trend that is independent of the Lundquist number, as predicted by previous analysis and 2D simulations. We present a scaling analysis showing that when the dissipation time is comparable to or larger than the correlation time of the random footpoint motion, the heating rate tends to become independent of the Lundquist number, and that the magnetic energy production is also reduced significantly. We also present a comprehensive reprogramming of our simulation code to run on NVIDIA graphics processing units using the Compute Unified Device Architecture (CUDA) and report code performance on several large-scale heterogeneous machines.
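For reference, the Lundquist number appearing in these scalings is the standard dimensionless ratio of the resistive diffusion time to the Alfvén crossing time, in LaTeX form:

    S \;=\; \frac{\tau_\eta}{\tau_A} \;=\; \frac{L\, v_A}{\eta}

where L is a characteristic length scale, v_A the Alfvén speed, and \eta the magnetic diffusivity. The S > 1000 regime reported above is the one in which the dissipation time is comparable to or longer than the footpoint-motion correlation time, where the abstract states the heating rate saturates to an S-independent value.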
SIVIC: Open-Source, Standards-Based Software for DICOM MR Spectroscopy Workflows
Quantitative analysis of magnetic resonance spectroscopic imaging (MRSI) data provides maps of metabolic parameters that show promise for improving medical diagnosis and therapeutic monitoring. While anatomical images are routinely reconstructed on the scanner, formatted using the DICOM standard, and interpreted using PACS workstations, this is not the case for MRSI data. The evaluation of MRSI data is made more complex because files are typically encoded with vendor-specific file formats and there is a lack of standardized tools for reconstruction, processing, and visualization. SIVIC is a flexible open-source software framework and application suite that enables a complete scanner-to-PACS workflow for evaluation and interpretation of MRSI data. It supports conversion of vendor-specific formats into the DICOM MR spectroscopy (MRS) standard, provides modular and extensible reconstruction and analysis pipelines, and offers tools to support the unique visualization requirements associated with such data. Workflows are presented which demonstrate the routine use of SIVIC to support the acquisition, analysis, and delivery to PACS of clinical 1H MRSI datasets at UCSF.
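The scanner-to-PACS workflow described above can be pictured as a chain of stages. The Python skeleton below is purely illustrative of that architecture; every function name in it is a hypothetical placeholder, not SIVIC's real API (SIVIC itself is a C++ framework with command-line tools).

    # Hypothetical sketch of a scanner-to-PACS MRSI workflow; all stage names
    # are invented for illustration and do not correspond to SIVIC's API.
    from dataclasses import dataclass

    @dataclass
    class SpectroData:
        vendor: str       # e.g. "GE", "Siemens"
        spectra: list     # time-domain FIDs, one per voxel (placeholder)
        metadata: dict    # acquisition parameters

    def read_vendor_raw(path: str) -> SpectroData:
        """Parse a vendor-specific raw file into a common in-memory form."""
        ...

    def reconstruct(data: SpectroData) -> SpectroData:
        """Apply spatial/spectral reconstruction to the raw spectra."""
        ...

    def to_dicom_mrs(data: SpectroData, out_path: str) -> None:
        """Serialize the result as a DICOM MR Spectroscopy object."""
        ...

    def send_to_pacs(dicom_path: str, host: str, port: int) -> None:
        """Push the DICOM MRS series to PACS."""
        ...

    def scanner_to_pacs(raw_path: str, pacs_host: str, pacs_port: int) -> None:
        data = reconstruct(read_vendor_raw(raw_path))
        to_dicom_mrs(data, "series.dcm")
        send_to_pacs("series.dcm", pacs_host, pacs_port)

The key design point the abstract emphasizes is the middle stage: once data are in the standard DICOM MRS form, downstream processing and PACS delivery no longer depend on the vendor format.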
Agent-based resource management for grid computing
A computational grid is a hardware and software infrastructure that provides
dependable, consistent, pervasive, and inexpensive access to high-end
computational capability. An ideal grid environment should provide access to the
available resources in a seamless manner. Resource management is an important
infrastructural component of a grid computing environment. The overall aim of
resource management is to efficiently schedule applications that need to utilise the
available resources in the grid environment. Achieving such goals within the
high-performance community relies on accurate performance prediction capabilities.
An existing toolkit, known as PACE (Performance Analysis and Characterisation
Environment), is used to provide quantitative data concerning the performance of
sophisticated applications running on high performance resources. In this thesis an
ASCI (Accelerated Strategic Computing Initiative) kernel application, Sweep3D,
is used to illustrate the PACE performance prediction capabilities. The validation
results show that reasonable accuracy can be obtained, cross-platform
comparisons can be easily undertaken, and the process benefits from a rapid
evaluation time. While extremely well-suited for managing a locally distributed
multi-computer, the PACE functions do not map well onto a wide-area
environment, where heterogeneity, multiple administrative domains, and communication irregularities dramatically complicate the job of resource
management. Scalability and adaptability are two key challenges that must be
addressed.
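PACE's evaluation engine combines an application model with resource models to return predicted execution times quickly. As a toy illustration of the flavour of such analytic prediction (not PACE's actual model, and with made-up cost constants), consider a Sweep3D-like estimate that splits runtime into compute and boundary-exchange terms:

    # Toy analytic performance model in the spirit of PACE-style prediction;
    # the cost constants and the formula are illustrative assumptions only.
    def predict_runtime(n_cells: int, n_procs: int,
                        t_cell: float = 2e-9,       # seconds per cell update
                        t_byte: float = 1e-8,       # network cost per byte
                        bytes_per_face_cell: int = 8) -> float:
        """Predict sweep runtime as compute + subdomain-face exchange."""
        cells_per_proc = n_cells / n_procs
        compute = cells_per_proc * t_cell
        # A cubic subdomain exposes ~ (cells_per_proc)^(2/3) face cells.
        face = cells_per_proc ** (2.0 / 3.0)
        comms = face * bytes_per_face_cell * t_byte
        return compute + comms

    for p in (1, 4, 16, 64):
        print(f"{p:3d} procs -> {predict_runtime(10**7, p) * 1e3:8.2f} ms")

The model exposes the trade-off the thesis exploits: compute time falls linearly with processor count while communication falls more slowly, so predicted speedup flattens, and a scheduler armed with such predictions can stop adding processors past the point of diminishing returns.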
In this thesis, an A4 (Agile Architecture and Autonomous Agents) methodology is
introduced for the development of large-scale distributed software systems with
highly dynamic behaviours. An agent is considered to be both a service provider
and a service requestor. Agents are organised into a hierarchy with service
advertisement and discovery capabilities. There are four main performance
metrics for an A4 system: service discovery speed, agent system efficiency,
workload balancing, and discovery success rate.
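A minimal sketch of hierarchy-based service advertisement and discovery follows, assuming a tree of agents in which each agent keeps an Agent Capability Table (ACT) of services known to itself and its subtree, pushes adverts up to its parent, and escalates unresolved requests upward. The class and method names are illustrative, not the thesis's implementation.

    # Illustrative sketch of A4-style hierarchical service discovery; the
    # class and method names are invented for the example.
    class Agent:
        def __init__(self, name, parent=None):
            self.name, self.parent = name, parent
            self.act = {}            # ACT: service name -> providing agent

        def advertise(self, service, provider=None):
            """Record a service locally and push the advert up the tree."""
            provider = provider or self.name
            self.act[service] = provider
            if self.parent:
                self.parent.advertise(service, provider)

        def discover(self, service):
            """Resolve locally if possible, else escalate to the parent."""
            if service in self.act:
                return self.act[service]
            if self.parent:
                return self.parent.discover(service)
            return None              # not known anywhere in the hierarchy

    root = Agent("root")
    site_a, site_b = Agent("site_a", root), Agent("site_b", root)
    node = Agent("node_a1", site_a)
    node.advertise("sweep3d/64cpu")          # advert propagates to the root
    print(site_b.discover("sweep3d/64cpu"))  # resolved via root -> "node_a1"

Because adverts propagate upward, any request can be resolved by climbing at most to the root, which bounds discovery speed, one of the four metrics listed above, by the depth of the hierarchy rather than the number of agents.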
Coupling the A4 methodology with the PACE functions results in an Agent-based
Resource Management System (ARMS), which is implemented for grid
computing. The PACE functions supply accurate performance information (e.g.
execution time) as input to a local resource scheduler on the fly. At a meta-level,
agents advertise their service information and cooperate with each other to
discover available resources for grid-enabled applications. A Performance
Monitor and Advisor (PMA) is also developed in ARMS to optimise the
performance of the agent behaviours.
The PMA is capable of performance modelling and simulation of the agents in
ARMS and can be used to improve overall system performance. The PMA can
monitor agent behaviours in ARMS and reconfigure them with optimised
strategies, which include the use of ACTs (Agent Capability Tables), limited
service lifetime, limited scope for service advertisement and discovery, agent
mobility and service distribution, etc.
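One of those strategies, limited service lifetime, is easy to picture: each ACT entry carries a timestamp and is evicted once its lifetime expires, so stale adverts stop wasting discovery effort. The following sketch is a hypothetical illustration, not the ARMS code.

    # Hypothetical sketch of a limited-lifetime Agent Capability Table:
    # adverts expire after `lifetime` seconds so stale services are dropped.
    import time

    class ExpiringACT:
        def __init__(self, lifetime: float):
            self.lifetime = lifetime
            self.entries = {}                  # service -> (provider, t_added)

        def advertise(self, service: str, provider: str) -> None:
            self.entries[service] = (provider, time.monotonic())

        def lookup(self, service: str):
            entry = self.entries.get(service)
            if entry is None:
                return None
            provider, t_added = entry
            if time.monotonic() - t_added > self.lifetime:
                del self.entries[service]      # advert has gone stale
                return None
            return provider

    act = ExpiringACT(lifetime=30.0)
    act.advertise("sweep3d/64cpu", "node_a1")
    print(act.lookup("sweep3d/64cpu"))         # -> "node_a1" while fresh

The trade-off the PMA tunes is visible here: a short lifetime keeps tables small and current at the cost of more frequent re-advertisement traffic.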
The main contribution of this work is that it provides a methodology and
prototype implementation of a grid Resource Management System (RMS). The
system includes a number of original features that cannot be found in existing
research solutions.
The cosmological simulation code GADGET-2
We discuss the cosmological simulation code GADGET-2, a new massively
parallel TreeSPH code, capable of following a collisionless fluid with the
N-body method, and an ideal gas by means of smoothed particle hydrodynamics
(SPH). Our implementation of SPH manifestly conserves energy and entropy in
regions free of dissipation, while allowing for fully adaptive smoothing
lengths. Gravitational forces are computed with a hierarchical multipole
expansion, which can optionally be applied in the form of a TreePM algorithm,
where only short-range forces are computed with the 'tree' method while
long-range forces are determined with Fourier techniques. Time integration is
based on a quasi-symplectic scheme where long-range and short-range forces can
be integrated with different timesteps. Individual and adaptive short-range
timesteps may also be employed. The domain decomposition used in the
parallelisation algorithm is based on a space-filling curve, resulting in high
flexibility and tree force errors that do not depend on the way the domains are
cut. The code is efficient in terms of memory consumption and required
communication bandwidth. It has been used to compute the first cosmological
N-body simulation with more than 10^10 dark matter particles, reaching a
homogeneous spatial dynamic range of 10^5 per dimension in a 3D box. It has
also been used to carry out very large cosmological SPH simulations that
account for radiative cooling and star formation, reaching total particle
numbers of more than 250 million. We present the algorithms used by the code
and discuss their accuracy and performance using a number of test problems.
GADGET-2 is publicly released to the research community. Comment: submitted to MNRAS, 31 pages, 20 figures (reduced resolution), code
available at http://www.mpa-garching.mpg.de/gadge
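The domain decomposition described above cuts a space-filling curve into contiguous segments, one per process. The sketch below uses the simpler Morton (Z-order) curve to show the mechanism of mapping 3D positions to 1D keys and splitting the sorted key range; GADGET-2 itself uses a Peano-Hilbert curve, which has better locality, and cuts segments to balance work rather than particle counts.

    # Sketch of space-filling-curve domain decomposition with Morton keys;
    # an illustration of the idea, not GADGET-2's code.
    import numpy as np

    def morton_key(ix: int, iy: int, iz: int, bits: int = 10) -> int:
        """Interleave the bits of a 3D grid coordinate into one 1D key."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (3 * b)
            key |= ((iy >> b) & 1) << (3 * b + 1)
            key |= ((iz >> b) & 1) << (3 * b + 2)
        return key

    def decompose(positions: np.ndarray, box: float, n_domains: int):
        """Split particles into contiguous segments along the curve."""
        grid = (positions / box * (1 << 10)).astype(int)   # 1024^3 key grid
        keys = np.array([morton_key(*g) for g in grid])
        order = np.argsort(keys)                           # walk the curve
        # Equal-count cuts here; the real code balances computational work.
        return np.array_split(order, n_domains)

    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 1.0, size=(1000, 3))
    print([len(seg) for seg in decompose(pos, box=1.0, n_domains=4)])

Because particles that are adjacent along the curve are also close in space, each segment is a spatially compact domain, and, as the abstract notes, the tree force errors do not depend on where the cuts land.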
On Energy Efficient Computing Platforms
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an indispensable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms.
As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proven to be an efficient communication architecture that can further improve system performance and scalability while reducing design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with a special focus on the following aspects.
As the architectural trend for future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, thereby providing higher system performance and energy efficiency.
Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, ultimately, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling (DVFS) and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption.
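A minimal sketch of the kind of per-region control loop such an agent might run follows, assuming the agent watches its region's utilisation and picks a voltage/frequency pair from a fixed table; the thresholds and the V/f operating points are illustrative assumptions, not values from the thesis.

    # Illustrative DVFS agent policy; the V/f table and the 30%/80%
    # utilisation thresholds are made-up values for the sketch.
    VF_TABLE = [(0.8, 400e6), (0.9, 800e6), (1.1, 1600e6)]  # (volts, hertz)

    class DvfsAgent:
        def __init__(self):
            self.level = len(VF_TABLE) - 1    # start at the fastest point

        def step(self, utilisation: float) -> tuple:
            """Raise V/f under load, lower it when the region idles."""
            if utilisation > 0.8 and self.level < len(VF_TABLE) - 1:
                self.level += 1
            elif utilisation < 0.3 and self.level > 0:
                self.level -= 1
            # Dynamic power scales roughly as C * V^2 * f, so each step
            # down the table saves energy superlinearly in voltage.
            return VF_TABLE[self.level]

    agent = DvfsAgent()
    for u in (0.9, 0.2, 0.1, 0.95):
        v, f = agent.step(u)
        print(f"util={u:.2f} -> {v:.1f} V, {f/1e6:.0f} MHz")

Power gating fits the same loop as an extra, coarser step: below some idle threshold the agent would cut the supply entirely, trading wake-up latency for the elimination of leakage energy.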
Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance, and energy efficiency.
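The turn model makes routing deadlock-free by forbidding a subset of turns so that no cyclic channel dependency can form. On the Mesh baseline, dimension-order (XY) routing is the classic instance: every turn from the Y dimension back into X is forbidden. The sketch below shows that baseline; the thesis's honeycomb algorithms restrict turns analogously on the hexagonal topology.

    # Dimension-order (XY) routing on a 2D mesh: a classic turn-model
    # algorithm. Forbidding all Y-to-X turns removes every cyclic channel
    # dependency, so the network cannot deadlock.
    def xy_route(src, dst):
        """Return the list of hops from src to dst: X offset first, then Y."""
        (x, y), path = src, []
        while x != dst[0]:                 # resolve the X offset first
            x += 1 if dst[0] > x else -1
            path.append((x, y))
        while y != dst[1]:                 # then Y; never turn back into X
            y += 1 if dst[1] > y else -1
            path.append((x, y))
        return path

    print(xy_route((0, 0), (2, 2)))        # [(1,0), (2,0), (2,1), (2,2)]
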