55 research outputs found
Towards Intelligent Runtime Framework for Distributed Heterogeneous Systems
Scientific applications strive for increased memory and computing performance, requiring massive amounts of data and time to produce results. Applications utilize large-scale, parallel computing platforms with advanced architectures to accommodate their needs. However, developing performance-portable applications for modern, heterogeneous platforms requires significant effort and expertise in both the application and systems domains. This is especially relevant for unstructured applications whose workflow is not statically predictable due to their heavily data-dependent nature. One possible solution to this problem is the introduction of an intelligent Domain-Specific Language (iDSL) that transparently helps to maintain correctness, hides the idiosyncrasies of low-level hardware, and scales applications. An iDSL includes domain-specific language constructs, a compilation toolchain, and a runtime providing task scheduling, data placement, and workload balancing across and within heterogeneous nodes. In this work, we focus on the runtime framework. We introduce a novel design and extension of a runtime framework, the Parallel Runtime Environment for Multicore Applications. In response to ever-increasing intra- and inter-node concurrency, the runtime system supports efficient task scheduling and workload balancing at both levels while allowing the development of custom policies. Moreover, the new framework provides abstractions supporting the utilization of heterogeneous distributed nodes consisting of CPUs and GPUs and is extensible to other devices. We demonstrate that by utilizing this work, an application (or the iDSL) can scale its performance on heterogeneous exascale-era supercomputers with minimal effort. A future goal for this framework (out of the scope of this thesis) is to integrate it with machine learning to further improve its decision-making and performance. As a bridge to this goal, and since the framework is under development, we experiment with data from nuclear physics particle accelerators and demonstrate the significant improvements achieved by utilizing machine learning in the hit-based track reconstruction process.
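As a purely illustrative aside (not PREMA's actual API; the `Task`, `Worker`, and `MiniRuntime` names and the stealing policy below are invented for this sketch), the device-affine task queueing with work stealing that such a runtime provides can be caricatured in a few lines of Python:

```python
import collections

class Task:
    """A unit of work with a device-affinity hint for placement."""
    def __init__(self, fn, *args, device="cpu"):
        self.fn, self.args, self.device = fn, args, device

class Worker:
    """One execution context (e.g., a CPU core or a GPU stream)."""
    def __init__(self, wid, device):
        self.wid, self.device = wid, device
        self.queue = collections.deque()

class MiniRuntime:
    """Toy scheduler: tasks go to matching devices; idle workers steal."""
    def __init__(self, workers):
        self.workers = workers

    def submit(self, task):
        # Prefer the least-loaded worker whose device matches the hint.
        candidates = [w for w in self.workers if w.device == task.device]
        target = min(candidates or self.workers, key=lambda w: len(w.queue))
        target.queue.append(task)

    def run(self):
        pending = True
        while pending:
            pending = False
            for w in self.workers:
                if not w.queue:
                    # Work stealing: take one task from the most loaded peer.
                    victim = max(self.workers, key=lambda v: len(v.queue))
                    if victim.queue:
                        w.queue.append(victim.queue.popleft())
                if w.queue:
                    task = w.queue.popleft()
                    task.fn(*task.args)
                    pending = True

# Usage: two CPU workers and one GPU worker sharing a bag of tasks.
rt = MiniRuntime([Worker(0, "cpu"), Worker(1, "cpu"), Worker(2, "gpu")])
for i in range(6):
    rt.submit(Task(print, f"task {i}", device="gpu" if i % 3 == 0 else "cpu"))
rt.run()
```

A real runtime of this kind would add dependency tracking, data placement, and asynchronous device execution; the sketch only shows the scheduling skeleton.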
ATHENA Research Book, Volume 2
ATHENA European University is an association of nine higher education institutions with the mission of promoting excellence in research and innovation by enabling international cooperation. The acronym ATHENA stands for Association of Advanced Technologies in Higher Education. Partner institutions are from France, Germany, Greece, Italy, Lithuania, Portugal, and Slovenia: University of Orléans, University of Siegen, Hellenic Mediterranean University, Niccolò Cusano University, Vilnius Gediminas Technical University, Polytechnic Institute of Porto, and University of Maribor. In 2022, two institutions joined the alliance: Maria Curie-Skłodowska University from Poland and the University of Vigo from Spain. Also in 2022, an institution from Austria joined the alliance as an associate member: Carinthia University of Applied Sciences. This research book presents a selection of the research activities of the ATHENA partners. It contains an overview of the research activities of individual members, a selection of the most important bibliographic works of members, peer-reviewed student theses, a descriptive list of ATHENA lectures, and reports from the individual working sections of the ATHENA project. The ATHENA Research Book provides a platform that encourages collaborative and interdisciplinary research projects by advanced and early-career researchers.
Task-based Runtime Optimizations Towards High Performance Computing Applications
The last decades have witnessed a rapid improvement of computational capabilities in high-performance computing (HPC) platforms thanks to hardware technology scaling. HPC architectures have followed mainstream hardware advances: many-core systems, deep hierarchical memory subsystems, non-uniform memory access, and an ever-increasing gap between computational power and memory bandwidth. This has necessitated continuous adaptations across the software stack to maintain high hardware utilization. In this HPC landscape of potentially million-way parallelism, task-based programming models associated with dynamic runtime systems are becoming more popular; they foster developer productivity at extreme scale by abstracting the underlying hardware complexity.
In this context, this dissertation highlights how a software bundle powered by a task-based programming model can address the heterogeneous workloads engendered by HPC applications, here data redistribution, geostatistical modeling, and 3D unstructured mesh deformation. Data redistribution aims to reshuffle data to optimize some objective for an algorithm; the objective can be multi-dimensional, such as improving computational load balance or decreasing communication volume or cost, with the ultimate goal of increasing efficiency and therefore reducing the time-to-solution of the algorithm. Geostatistical modeling, one of the prime motivating applications for exascale computing, is a technique for predicting desired quantities from geographically distributed data, based on statistical models and optimization of parameters. Meshing the deformable contour of moving 3D bodies is an expensive operation that can cause huge computational challenges in fluid-structure interaction (FSI) applications. Therefore, this dissertation proposes Redistribute-PaRSEC, ExaGeoStat-PaRSEC, and HiCMA-PaRSEC to efficiently tackle these HPC applications, respectively, at extreme scale; they are evaluated on multiple HPC clusters, including AMD-based, Intel-based, and Arm-based CPU systems and an IBM-based multi-GPU system. This multidisciplinary work emphasizes the need for runtime systems to go beyond their primary responsibility of task scheduling on massively parallel hardware to service the next generation of scientific applications.
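To make the data-redistribution step concrete, here is a minimal sketch, independent of Redistribute-PaRSEC (the function names are hypothetical), of how the point-to-point messages for re-blocking a 1-D index range between two block distributions can be derived from interval overlaps:

```python
def block_intervals(n, parts):
    """Split index range [0, n) into `parts` contiguous blocks."""
    base, rem = divmod(n, parts)
    start, out = 0, []
    for p in range(parts):
        size = base + (1 if p < rem else 0)
        out.append((p, start, start + size))
        start += size
    return out

def redistribution_plan(n, src_parts, dst_parts):
    """List (src_rank, dst_rank, lo, hi) messages for re-blocking [0, n)."""
    plan = []
    for s, s_lo, s_hi in block_intervals(n, src_parts):
        for d, d_lo, d_hi in block_intervals(n, dst_parts):
            lo, hi = max(s_lo, d_lo), min(s_hi, d_hi)
            if lo < hi:                      # overlapping index range
                plan.append((s, d, lo, hi))
    return plan

# Re-block 10 elements from 2 owners onto 3 owners.
for msg in redistribution_plan(10, 2, 3):
    print(msg)   # e.g., (0, 1, 4, 5): rank 0 sends indices [4, 5) to rank 1
```

A production redistribution engine would additionally aggregate messages, overlap them with computation, and handle multi-dimensional tilings; only the overlap computation is shown here.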
Digital Image Processing Applications
Digital image processing can refer to a wide variety of techniques, concepts, and applications of different types of processing for different purposes. This book provides examples of digital image processing applications and presents recent research on processing concepts and techniques. Chapters cover topics such as image processing in medical physics, binarization, video processing, and more.
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
Automating Data-Layout Decisions in Domain-Specific Languages
A long-standing challenge in High-Performance Computing (HPC) is the simultaneous achievement of programmer productivity and hardware computational efficiency. The challenge has been exacerbated by the onset of multi- and many-core CPUs and accelerators. Only a few expert programmers have been able to hand-code the domain-specific data transformations and vectorization schemes needed to extract the best possible performance on such architectures. In this research, we examined the possibility of automating these methods by developing a Domain-Specific Language (DSL) framework. Our DSL approach extends C++14 by embedding into it a high-level data-parallel array language, and by using a domain-specific compiler to compile to hybrid-parallel code. We also implemented an array index-space transformation algebra within this high-level array language to manipulate array data-layouts and data-distributions. The compiler introduces a novel method for SIMD auto-vectorization based on array data-layouts. Our new auto-vectorization technique is shown to outperform the default auto-vectorization strategy by up to 40% for stencil computations. The compiler also automates distributed data movement, overlapping local compute with remote data movement using polyhedral integer set analysis. Along with these main innovations, we developed a new technique using C++ template metaprogramming for building embedded DSLs in C++. We also proposed a domain-specific compiler intermediate representation that simplifies data flow analysis of abstract DSL constructs. We evaluated our framework by constructing a DSL for the HPC grand-challenge domain of lattice quantum chromodynamics. Our DSL yielded performance gains of up to twice the flop rate over existing production C code for selected kernels. This gain in performance was obtained while using less than one-tenth the lines of code. The performance of this DSL was also competitive with the best hand-optimized and hand-vectorized code, and an order of magnitude better than existing production DSLs.
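As a rough illustration of what an index-space transformation does to a data layout (a NumPy sketch, not the DSL or its compiler), the array-of-structs to struct-of-arrays permutation below is exactly the kind of layout change that makes a field contiguous in memory and therefore SIMD-friendly:

```python
import numpy as np

# A toy "index-space transformation": the same logical array of N particles
# with 3 fields, stored either as array-of-structs (N, 3) or, after a
# permutation of the index space, as struct-of-arrays (3, N).  SoA keeps each
# field contiguous, which is the layout vectorizers prefer.
N = 8
aos = np.arange(N * 3, dtype=np.float64).reshape(N, 3)   # particle-major
soa = np.ascontiguousarray(aos.T)                        # field-major copy

# Logical element (i, f) is unchanged; only the layout map differs:
#   AoS: offset = i * 3 + f        SoA: offset = f * N + i
i, f = 5, 2
assert aos[i, f] == soa[f, i]

# A kernel touching one field now streams contiguous memory.
soa[0] += 0.5 * soa[2]   # e.g., x += 0.5 * z for all particles at once
```

The DSL described above automates the choice and application of such layout maps (and the corresponding vector code generation) instead of leaving them to the programmer.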
The Fluid Dynamics of Heart Development: The effect of morphology on flow at several stages
Proper cardiogenesis requires a delicate balance between genetic and environmental (epigenetic) signals and mechanical forces. While many cellular biologists and geneticists have extensively studied heart morphogenesis using various experimental techniques, only a few scientists have begun using mathematical modeling as a tool for studying cardiogenic events. Hemodynamic processes, such as vortex formation, are important in the generation of shear at the endothelial surface layer and strains at the epithelial layer, which aid in proper morphology and functionality. The purpose of this thesis is to study the underlying fluid dynamics in various stages of heart development, in particular the morphogenic stages when the heart is a linear heart tube as well as during the onset of ventricular trabeculation. Previous mathematical models of the linear heart tube stage have focused on mechanisms of valveless pumping, whether dynamic suction pumping (impedance pumping) or peristalsis; however, they have all neglected hematocrit. The impact of blood cells was examined by fluid-structure interaction simulations via the immersed boundary method. Moreover, electrophysiology models were incorporated into an immersed boundary framework, and bifurcations within the morphospace were studied that give rise to a spectrum of pumping regimes, with peristaltic-like waves of contraction and impedance pumping at the extremes. Lastly, the effects of resonant pumping, damping, and boundary inertial effects (added mass) were studied for dynamic suction pumping. The other stage of heart development considered here is the onset of ventricular trabeculation. This occurs after the heart has undergone the cardiac looping stage and is now a multi-chambered pumping system with primitive endocardial cushions, which act as precursors to valve leaflets. Trabeculation introduces complex morphology onto the inner lining of the endocardium in the ventricle. This transition from a smooth endocardium to one with complex geometry may have a significant effect on the intracardial fluid dynamics and stress distribution within embryonic hearts. Previous studies have not included these geometric perturbations along the ventricular endocardium. The role of trabeculae in intracardial (and intertrabecular) flows was studied using two different mathematical models implemented within an immersed boundary framework. It is shown that the trabecular geometry and number density have a significant effect on such flows. Furthermore, this thesis also focused on the creation of software for scientists and engineers to perform fluid-structure interaction simulations at an accelerated rate, in user-friendly environments for beginner programmers, e.g., MATLAB or Python 3.5. The software, IB2d, solves fully coupled fluid-structure interaction problems using Charles Peskin's immersed boundary method. IB2d is capable of running a vast range of biomechanics models and contains multiple options for constructing material properties of the fiber structure, advection-diffusion of a chemical gradient, muscle mechanics models, Boussinesq approximations, and artificial forcing to drive boundaries with a preferred motion. The software currently contains over 50 examples, ranging from oscillating rubber bands to flow past a cylinder, a simple aneurysm model, falling spheres in a chemical gradient, jellyfish locomotion, and a heart tube pumping coupled with electrophysiology, muscle, and calcium dynamics models.
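As an illustrative sketch of the core operation behind the immersed boundary method that IB2d implements (generic Peskin-style force spreading written here in NumPy; this is not IB2d's own code), a Lagrangian point force is transferred to the Eulerian fluid grid through a regularized delta function:

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta (one 1-D factor, in grid units)."""
    r = abs(r)
    if r < 1:
        return (3 - 2*r + np.sqrt(1 + 4*r - 4*r*r)) / 8
    if r < 2:
        return (5 - 2*r - np.sqrt(-7 + 12*r - 4*r*r)) / 8
    return 0.0

def spread_force(grid_shape, h, X, F):
    """Spread Lagrangian point forces at X with magnitudes F onto a
    periodic 2-D Eulerian grid with spacing h."""
    f = np.zeros(grid_shape)
    for (x, y), val in zip(X, F):
        i0, j0 = int(x // h), int(y // h)
        for i in range(i0 - 2, i0 + 3):          # 4-point support in x
            for j in range(j0 - 2, j0 + 3):      # 4-point support in y
                w = peskin_delta((x - i*h) / h) * peskin_delta((y - j*h) / h)
                f[i % grid_shape[0], j % grid_shape[1]] += val * w / h**2
    return f

# One unit point force in the middle of a 32x32 periodic grid.
f = spread_force((32, 32), h=1/32, X=[(0.5, 0.5)], F=[1.0])
print(f.sum() * (1/32)**2)   # ~1.0: spreading conserves total force
```

The same delta function is used in reverse to interpolate the fluid velocity back onto the immersed boundary, which is what couples the structure to the flow.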
GSI Scientific Report 2016
Licence: https://creativecommons.org/licenses/by/4.0
- …