A domain-specific language and matrix-free stencil code for investigating electronic properties of Dirac and topological materials
We introduce PVSC-DTM (Parallel Vectorized Stencil Code for Dirac and
Topological Materials), a library and code generator based on a domain-specific
language tailored to implement the specific stencil-like algorithms that can
describe Dirac and topological materials such as graphene and topological
insulators in a matrix-free way. The generated hybrid-parallel (MPI+OpenMP)
code is fully vectorized using Single Instruction Multiple Data (SIMD)
extensions. It is significantly faster than matrix-based approaches on the node
level and performs in accordance with the roofline model. We demonstrate the
chip-level performance and distributed-memory scalability of basic building
blocks such as sparse matrix-(multiple-) vector multiplication on modern
multicore CPUs. As an application example, we use the PVSC-DTM scheme to (i)
explore the scattering of a Dirac wave on an array of gate-defined quantum
dots, to (ii) compute a set of interior eigenvalues for strong topological
insulators, and to (iii) analyze the photoemission spectra of a disordered Weyl
semimetal.
Comment: 16 pages, 2 tables, 11 figures
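The matrix-free idea the abstract describes can be illustrated on a toy 1D tight-binding chain (a generic sketch, not the PVSC-DTM DSL or its generated code): the Hamiltonian is never stored as a sparse matrix; its action on a wavefunction is evaluated as a stencil sweep over nearest neighbors, which is what makes the kernel bandwidth-friendly and vectorizable.

```python
def apply_h_stencil(psi, t=1.0):
    """Apply a 1D nearest-neighbor tight-binding Hamiltonian with
    hopping amplitude -t matrix-free: each output element is a
    stencil over its neighbors; no matrix entries are stored."""
    n = len(psi)
    out = [0.0] * n
    for i in range(n):
        if i > 0:
            out[i] += -t * psi[i - 1]   # hop from left neighbor
        if i < n - 1:
            out[i] += -t * psi[i + 1]   # hop from right neighbor
    return out

def apply_h_matrix(psi, t=1.0):
    """Matrix-based reference: build H explicitly, then multiply."""
    n = len(psi)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        H[i][i + 1] = H[i + 1][i] = -t
    return [sum(H[i][j] * psi[j] for j in range(n)) for i in range(n)]

psi = [1.0, 2.0, 3.0, 4.0]
assert apply_h_stencil(psi) == apply_h_matrix(psi)
```

The stencil version touches only the vector (O(n) memory) instead of streaming explicit matrix entries, which is the node-level advantage over sparse-matrix formats that the abstract reports.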
A Quantitative Approach for Adopting Disaggregated Memory in HPC Systems
Memory disaggregation has recently been adopted in data centers to improve
resource utilization, motivated by cost and sustainability. Recent studies on
large-scale HPC facilities have also highlighted memory underutilization. A
promising and non-disruptive option for memory disaggregation is rack-scale
memory pooling, where shared memory pools supplement node-local memory. This
work outlines the prospects and requirements for adoption and clarifies several
misconceptions. We propose a quantitative method for dissecting application
requirements on the memory system from the top down in three levels, moving
from general, to multi-tier memory systems, and then to memory pooling. We
provide a multi-level profiling tool and LBench to facilitate the quantitative
approach. We evaluate a set of representative HPC workloads on an emulated
platform. Our results show that prefetching activities can significantly
influence memory traffic profiles. Interference in memory pooling has varied
impacts on applications, depending on their access ratios to memory tiers and
arithmetic intensities. Finally, in two case studies, we show the benefits of
our findings at the application and system levels, achieving 50% reduction in
remote access and 13% speedup in BFS, and reducing performance variation of
co-located workloads in interference-aware job scheduling.
Comment: Accepted to SC23 (The International Conference for High Performance Computing, Networking, Storage, and Analysis, 2023)
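The sensitivity to access ratios across memory tiers that the abstract highlights can be sketched with a simple average-latency model (illustrative only; the latency numbers are assumptions, not measurements from the paper): a fraction of accesses hit node-local DRAM and the rest go to a slower rack-scale pool.

```python
def effective_latency(local_ns, remote_ns, remote_frac):
    """Average memory access latency when remote_frac of accesses
    go to a shared memory pool (remote tier) and the remainder hit
    node-local DRAM. A linear mix; ignores queueing effects."""
    return (1 - remote_frac) * local_ns + remote_frac * remote_ns

# Assumed numbers for illustration: 100 ns local DRAM, 600 ns pooled
# memory. Halving the remote-access ratio from 40% to 20% cuts the
# average latency by a third, showing why reducing remote accesses
# (as in the BFS case study) translates directly into speedup.
base = effective_latency(100, 600, 0.40)
tuned = effective_latency(100, 600, 0.20)
```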