Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review.
The future of computing beyond Moore's Law.
Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements, and how it affects options available to continue scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
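As a back-of-the-envelope illustration (our arithmetic, not the article's), doubling every 2 years compounds into enormous multipliers over the 50-year span the abstract cites:

```python
# Minimal sketch (not from the article): compounding under Moore's Law.
# Doubling every 2 years implies an annual growth factor of 2**(1/2) ~ 1.41.

def moore_factor(years, doubling_period=2.0):
    """Performance multiplier after `years` at a fixed doubling period."""
    return 2.0 ** (years / doubling_period)

print(f"Annual growth: {moore_factor(1):.2f}x")     # ~1.41x per year
print(f"After 50 years: {moore_factor(50):,.0f}x")  # 2**25 ~ 33.6 million-fold
```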
The Universe at Extreme Scale: Multi-Petaflop Sky Simulation on the BG/Q
Remarkable observational advances have established a compelling
cross-validated model of the Universe. Yet, two key pillars of this model --
dark matter and dark energy -- remain mysterious. Sky surveys that map billions
of galaxies to explore the 'Dark Universe' demand a corresponding
extreme-scale simulation capability; the HACC (Hybrid/Hardware Accelerated
Cosmology Code) framework has been designed to deliver this level of
performance now, and into the future. With its novel algorithmic structure,
HACC allows flexible tuning across diverse architectures, including accelerated
and multi-core systems.
On the IBM BG/Q, HACC attains unprecedented scalable performance -- currently
13.94 PFlops at 69.2% of peak and 90% parallel efficiency on 1,572,864 cores
with an equal number of MPI ranks, and a concurrency of 6.3 million. This level
of performance was achieved at extreme problem sizes, including a benchmark run
with more than 3.6 trillion particles, significantly larger than any
cosmological simulation yet performed.
Comment: 11 pages, 11 figures, final version of paper for talk presented at SC12.
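As a quick consistency check of the reported figures (our arithmetic, not the paper's), the sustained rate and fraction of peak imply the machine's peak rate, and four hardware threads per BG/Q core would account for the stated concurrency:

```python
# Consistency check of the reported HACC numbers (our arithmetic, not the paper's).
sustained_pflops = 13.94   # reported sustained performance
fraction_of_peak = 0.692   # reported 69.2% of peak
cores = 1_572_864          # BG/Q cores, one MPI rank each
threads_per_core = 4       # BG/Q hardware threads per core (assumption)

peak_pflops = sustained_pflops / fraction_of_peak
concurrency = cores * threads_per_core

print(f"Implied machine peak: {peak_pflops:.2f} PFlops")  # ~20.14 PFlops
print(f"Implied concurrency: {concurrency:,}")            # 6,291,456 ~ 6.3 million
```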
Preparing sparse solvers for exascale computing.
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing Project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current success and upcoming challenges. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
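To make the latency-hiding challenge concrete, here is a minimal sketch (ours, not from the paper) of the compressed sparse row (CSR) matrix-vector product, the memory-bound kernel underlying most sparse iterative solvers; the indirect indexing into x is exactly the irregular access pattern that exascale nodes struggle to keep fed:

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR-stored sparse matrix A.
    The gather x[col_idx[...]] is the latency-bound access pattern
    that highly parallel sparse solvers must hide or restructure."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example: A = [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(csr_spmv(values, col_idx, row_ptr, np.ones(3)))  # [5. 3. 7.]
```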
Tree Parity Machine Rekeying Architectures
Securing the communication between hardware components in embedded systems is
increasingly important for the secrecy of data, particularly in commercial
use. We suggest a low-cost (i.e. small
logic-area) solution for flexible security levels and short key lifetimes. The
basis is an approach for symmetric key exchange using the synchronisation of
Tree Parity Machines. Fast successive key generation enables a key exchange
within a few milliseconds, given realistic communication channels with a
limited bandwidth. For demonstration, we evaluate the characteristics of a
standard-cell ASIC design realised as an IP core in 0.18-micrometer CMOS
technology.
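For readers unfamiliar with the underlying protocol, the following is a minimal software sketch of Tree Parity Machine synchronisation; the parameters K, N, L and the tie-breaking convention are illustrative assumptions, not the paper's hardware design:

```python
import numpy as np

# Sketch of TPM key exchange: two parties with secret weights synchronise by
# exchanging only public inputs and one-bit outputs. Illustrative parameters.
K, N, L = 3, 11, 3     # hidden units, inputs per unit, weight bound (assumed)
rng = np.random.default_rng()

def output(w, x):
    """TPM output: product of the signs of the K hidden-unit fields."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1            # break ties deterministically (assumption)
    return sigma, int(np.prod(sigma))

def update(w, x, sigma, tau):
    """Hebbian update, applied only to units that agree with the output."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))   # party A's secret weights
wB = rng.integers(-L, L + 1, size=(K, N))   # party B's secret weights

steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))    # public random input
    sA, tauA = output(wA, x)
    sB, tauB = output(wB, x)
    if tauA == tauB:                        # update only when outputs agree
        update(wA, x, sA, tauA)
        update(wB, x, sB, tauB)
    steps += 1

print(f"Synchronised after {steps} steps; the shared weights form the key.")
```

In the paper's setting this synchronisation loop is what the ASIC implements in hardware, with repeated runs enabling the short key lifetimes the abstract describes.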