147 research outputs found
A high-performance open-source framework for multiphysics simulation and adjoint-based shape and topology optimization
The first part of this thesis presents the advances made in the Open-Source software SU2,
towards transforming it into a high-performance framework for design and optimization of
multiphysics problems. Through this work, and in collaboration with other authors, a tenfold
performance improvement was achieved for some problems. More importantly, problems that
were previously impossible to solve in SU2 can now be used in numerical optimization
with shape or topology variables. Furthermore, it is now substantially simpler to study new
multiphysics applications, and to develop new numerical schemes that take advantage of
modern high-performance-computing systems.
In the second part of this thesis, these capabilities allowed the application of topology
optimization to medium-scale fluid-structure interaction problems, using high-fidelity models
(nonlinear elasticity and the Reynolds-averaged Navier-Stokes equations), which had not been
done before in the literature. This showed that topology optimization can be used to target
aerodynamic objectives by tailoring the interaction between fluid and structure. However, it
also made evident the limitations of density-based methods for this type of problem, in
particular in reliably converging to discrete solutions. This was overcome with new strategies
to both guarantee and accelerate (i.e. reduce the overall computational cost of) the
convergence to discrete solutions in fluid-structure interaction problems.
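The convergence of density-based methods to discrete (0/1) solutions mentioned above is commonly handled with a smoothed Heaviside projection whose sharpness parameter is gradually increased. The sketch below illustrates that standard technique only; it is not the specific strategy developed in the thesis, and the parameter names (`beta`, `eta`) are the usual ones from the topology optimization literature, not taken from the abstract.

```python
import numpy as np

def heaviside_projection(rho, beta, eta=0.5):
    """Smoothed Heaviside projection of intermediate densities `rho`
    toward 0/1. `beta` controls sharpness; `eta` is the threshold."""
    num = np.tanh(beta * eta) + np.tanh(beta * (rho - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

# Continuation: the same densities become progressively more discrete
# as beta is increased between optimization cycles.
rho = np.array([0.1, 0.4, 0.5, 0.6, 0.9])
for beta in (1.0, 8.0, 64.0):
    print(beta, np.round(heaviside_projection(rho, beta), 3))
```

With small `beta` the projection is nearly the identity (keeping the problem smooth for the optimizer); with large `beta` it approaches a step function, driving the design toward a discrete layout.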
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review.
Multilayered Heterogeneous Parallelism Applied to Atmospheric Constituent Transport Simulation
Heterogeneous multicore chipsets with many levels of parallelism are becoming increasingly common in high-performance computing systems. Effective use of parallelism in these new chipsets constitutes the challenge facing a new generation of large-scale scientific computing applications. This study examines methods for improving the performance of two-dimensional and three-dimensional atmospheric constituent transport simulation on the Cell Broadband Engine Architecture (CBEA). A function offloading approach is used in a 2D transport module, and a vector stream processing approach is used in a 3D transport module. Two methods for transferring noncontiguous data between main memory and accelerator local storage are compared. By leveraging the heterogeneous parallelism of the CBEA, the 3D transport module achieves performance comparable to two nodes of an IBM BlueGene/P, or eight Intel Xeon cores, on a single PowerXCell 8i chip. Module performance on two CBEA systems, an IBM BlueGene/P, and an eight-core shared-memory Intel Xeon workstation is reported.
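The noncontiguous-transfer problem the abstract mentions arises whenever a slice of a multidimensional array is strided in memory: it can be moved either as many small transfers (one per contiguous run) or by first packing it into a contiguous staging buffer and issuing one bulk transfer. The snippet below only illustrates that layout issue with NumPy; it is not the paper's code, and the bulk-vs-small transfer choice is stated as a general trade-off, not as the paper's conclusion.

```python
import numpy as np

# A 3D field stored with x fastest; fixing the last index yields a
# strided (noncontiguous) view, like a pencil of an atmospheric grid.
field = np.arange(4 * 4 * 4, dtype=np.float64).reshape(4, 4, 4)
pencil = field[:, :, 1]                # strides skip over memory
print(pencil.flags["C_CONTIGUOUS"])    # False: many small runs

# Packing into a contiguous staging buffer enables one bulk
# DMA-style transfer instead of one transfer per run.
staged = np.ascontiguousarray(pencil)
print(staged.flags["C_CONTIGUOUS"])    # True: a single contiguous block
```

On accelerators with small local stores (as on the CBEA's SPEs), the packing cost is typically repaid because one large transfer amortizes per-transfer latency that many small transfers each pay in full.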
2022 Review of Data-Driven Plasma Science
Data-driven science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS), i.e., plasma science whose progress is driven strongly by data and data analyses. Plasma is considered to be the most ubiquitous form of observable matter in the universe. Data associated with plasmas can, therefore, cover extremely large spatial and temporal scales, and often provide essential information for other scientific disciplines. Thanks to the latest technological developments, plasma experiments, observations, and computation now produce a large amount of data that can no longer be analyzed or interpreted manually. This trend now necessitates a highly sophisticated use of high-performance computers for data analyses, making artificial intelligence and machine learning vital components of DDPS. This article contains seven primary sections, in addition to the introduction and summary. Following an overview of fundamental data-driven science, five other sections cover widely studied topics of plasma science and technologies, i.e., basic plasma physics and laboratory experiments, magnetic confinement fusion, inertial confinement fusion and high-energy-density physics, space and astronomical plasmas, and plasma technologies for industrial and other applications. The final section before the summary discusses plasma-related databases that could significantly contribute to DDPS. Each primary section starts with a brief introduction to the topic, discusses the state-of-the-art developments in the use of data and/or data-scientific approaches, and presents a summary and outlook. Despite the recent impressive signs of progress, DDPS is still in its infancy. This article attempts to offer a broad perspective on the development of this field and to identify where further innovations are required.
Evaluating the Potential of Disaggregated Memory Systems for HPC applications
Disaggregated memory is a promising approach that addresses the limitations
of traditional memory architectures by enabling memory to be decoupled from
compute nodes and shared across a data center. Cloud platforms have deployed
such systems to improve overall system memory utilization, but performance can
vary across workloads. High-performance computing (HPC) is crucial in
scientific and engineering applications, where HPC machines also face the issue
of underutilized memory. As a result, improving system memory utilization while
understanding workload performance is essential for HPC operators. Therefore,
learning the potential of a disaggregated memory system before deployment is a
critical step. This paper proposes a methodology for exploring the design space
of a disaggregated memory system. It incorporates key metrics that affect
performance on disaggregated memory systems: memory capacity, local and remote
memory access ratio, injection bandwidth, and bisection bandwidth, providing an
intuitive approach to guide machine configurations based on technology trends
and workload characteristics. We apply our methodology to analyze thirteen
diverse workloads, including AI training, data analysis, genomics, protein,
fusion, atomic nuclei, and traditional HPC bookends. Our methodology
demonstrates the ability to comprehend the potential and pitfalls of a
disaggregated memory system and provides motivation for machine configurations.
Our results show that eleven of our thirteen applications can leverage disaggregated
memory at injection bandwidth without affecting performance, while one pays a rack-level
bisection bandwidth penalty and two pay a system-wide bisection bandwidth penalty. In
addition, we show that intra-rack memory disaggregation would meet the applications'
memory requirements and provide enough remote memory bandwidth.
Comment: This submission builds on the following conference paper: N. Ding, S. Williams, H.A. Nam, et al., "Methodology for Evaluating the Potential of Disaggregated Memory Systems," 2nd International Workshop on RESource DISaggregation in High-Performance Computing (RESDIS), November 18, 2022. It is now submitted to the CCPE journal for review.
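The abstract's classification of workloads (leveraging injection bandwidth, paying a rack bisection penalty, or paying a system-wide bisection penalty) can be pictured as a tiered bandwidth check. The function below is a hypothetical first-cut model written for illustration; the function name, arguments, and thresholds are assumptions, not the paper's actual methodology or numbers.

```python
def classify_bottleneck(remote_bw_demand, injection_bw,
                        rack_bisection_bw, system_bisection_bw):
    """Return the coarsest network resource (per node, same units for all
    arguments, e.g. GB/s) that still satisfies a workload's demand for
    remote-memory bandwidth in a disaggregated memory system."""
    if remote_bw_demand <= injection_bw:
        return "injection"          # node injection bandwidth suffices
    if remote_bw_demand <= rack_bisection_bw:
        return "rack bisection"     # pays a rack-level penalty
    if remote_bw_demand <= system_bisection_bw:
        return "system bisection"   # pays a system-wide penalty
    return "infeasible"             # demand exceeds all provisioning

# Illustrative (made-up) numbers: a workload needing 10 GB/s of remote
# bandwidth on a node with 25 GB/s injection bandwidth is untroubled.
print(classify_bottleneck(10, 25, 50, 100))
```

Under this toy model, the paper's headline result corresponds to eleven workloads landing in the "injection" tier, one in "rack bisection", and two in "system bisection".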