
    Language Constructs for Data Partitioning and Distribution


    Task-based Runtime Optimizations Towards High Performance Computing Applications

    The last decades have witnessed a rapid improvement of computational capabilities in high-performance computing (HPC) platforms thanks to hardware technology scaling. HPC architectures benefit from mainstream hardware advances such as many-core systems, deep hierarchical memory subsystems, non-uniform memory access, and an ever-increasing gap between computational power and memory bandwidth. This has necessitated continuous adaptations across the software stack to maintain high hardware utilization. In this HPC landscape of potentially million-way parallelism, task-based programming models associated with dynamic runtime systems are becoming more popular, fostering developer productivity at extreme scale by abstracting the underlying hardware complexity. In this context, this dissertation highlights how a software bundle powered by a task-based programming model can address the heterogeneous workloads engendered by HPC applications, namely data redistribution, geostatistical modeling, and 3D unstructured mesh deformation. Data redistribution reshuffles data to optimize one or more objectives for an algorithm, such as improving computational load balance or decreasing communication volume or cost, with the ultimate goal of increasing efficiency and therefore reducing the time-to-solution. Geostatistical modeling, one of the prime motivating applications for exascale computing, is a technique for predicting desired quantities from geographically distributed data, based on statistical models and optimization of parameters. Meshing the deformable contour of moving 3D bodies is an expensive operation that poses significant computational challenges in fluid-structure interaction (FSI) applications. Therefore, this dissertation proposes Redistribute-PaRSEC, ExaGeoStat-PaRSEC, and HiCMA-PaRSEC to tackle these HPC applications efficiently at extreme scale, and evaluates them on multiple HPC clusters, including AMD-based, Intel-based, and Arm-based CPU systems and an IBM-based multi-GPU system. This multidisciplinary work emphasizes the need for runtime systems to go beyond their primary responsibility of task scheduling on massively parallel hardware in order to serve next-generation scientific applications.
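
    As a rough illustration of the task-based style this work builds on (a minimal sketch using standard C++ futures, not PaRSEC's actual API), two tasks linked by a data dependency might look like:

```cpp
// Minimal illustration of task-based execution with a data dependency,
// using only standard C++; real runtimes such as PaRSEC build a full task
// graph and schedule it across distributed, heterogeneous nodes.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> block(1000);
    std::iota(block.begin(), block.end(), 0.0);

    // Task A: a "computation" over the data block.
    auto a = std::async(std::launch::async, [&block] {
        for (double& x : block) x *= 2.0;
        return std::accumulate(block.begin(), block.end(), 0.0);
    });

    // Task B consumes A's result; the future expresses the dependency,
    // leaving the runtime free to overlap any independent work.
    auto b = std::async(std::launch::async, [&a] { return a.get() / 2.0; });

    std::cout << "result: " << b.get() << '\n';   // prints 499500
    return 0;
}
```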

    OpenFPM: A scalable environment for particle and particle-mesh codes on parallel computers

    Scalable and efficient numerical simulations continue to gain importance, as computation is now a firmly established tool of discovery, together with theory and experiment. Meanwhile, the performance of computing hardware keeps growing, with increasingly heterogeneous hardware enabling simulations of ever more complex models. However, efficiently implementing scalable codes on heterogeneous, distributed hardware systems has become the bottleneck. This bottleneck can be alleviated by intermediate software layers that provide higher-level abstractions closer to the problem domain, allowing the computational scientist to focus on the simulation. Here, we present OpenFPM, an open and scalable framework that provides an abstraction layer for numerical simulations using particles and/or meshes. OpenFPM provides transparent and scalable infrastructure for shared-memory and distributed-memory implementations of particles-only and hybrid particle-mesh simulations of both discrete and continuous models, as well as non-simulation codes. This infrastructure is complemented with frequently used numerical routines, as well as interfaces to third-party libraries. This thesis presents the architecture and design of OpenFPM, details the underlying abstractions, and benchmarks the framework in applications ranging from Smoothed-Particle Hydrodynamics (SPH) to Molecular Dynamics (MD), Discrete Element Methods (DEM), Vortex Methods, stencil codes, high-dimensional Monte Carlo sampling (CMA-ES), and Reaction-Diffusion solvers, comparing it to the current state of the art and to existing software frameworks.
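
    As a self-contained sketch of the particle-mesh pattern that OpenFPM abstracts (illustrative only, not OpenFPM's API; its distributed containers hide the domain decomposition and communication this loop would otherwise require), a one-dimensional cloud-in-cell scatter might look like:

```cpp
// Scatter particle quantities onto a 1-D periodic mesh with linear
// (cloud-in-cell) interpolation: each particle deposits its "mass"
// onto the two nearest mesh nodes, weighted by distance.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int    n_cells = 16;
    const double h       = 1.0 / n_cells;            // mesh spacing on [0,1)
    std::vector<double> mesh(n_cells, 0.0);

    struct Particle { double x, m; };                 // position and carried mass
    std::vector<Particle> particles = {{0.12, 1.0}, {0.5, 2.0}, {0.87, 0.5}};

    for (const auto& p : particles) {
        int    i = static_cast<int>(std::floor(p.x / h));  // left node index
        double w = p.x / h - i;                            // fractional offset
        mesh[i]                 += (1.0 - w) * p.m;        // weight to left node
        mesh[(i + 1) % n_cells] += w * p.m;                // right node, periodic wrap
    }

    for (int i = 0; i < n_cells; ++i)
        std::printf("cell %2d: %.3f\n", i, mesh[i]);
    return 0;
}
```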

    The STAPL Parallel Container Framework

    The Standard Template Adaptive Parallel Library (STAPL) is a parallel programming infrastructure that extends C++ with support for parallelism. STAPL provides a run-time system, a collection of distributed data structures (pContainers) and parallel algorithms (pAlgorithms), and a generic methodology for extending them to provide customized functionality. Parallel containers are data structures that address issues related to data partitioning, distribution, communication, synchronization, load balancing, and thread safety. This dissertation presents the STAPL Parallel Container Framework (PCF), which is designed to facilitate the development of generic parallel containers. We introduce a set of concepts and a methodology for assembling a pContainer from existing sequential or parallel containers without requiring the programmer to deal with concurrency or data distribution issues. The STAPL PCF provides a large number of basic parallel data structures (e.g., pArray, pList, pVector, pMatrix, pGraph, pMap, pSet). The STAPL PCF is distinguished from existing work by offering a class hierarchy and a composition mechanism that allow users to extend and customize the current container base for improved application expressivity and performance. We evaluate the performance of the STAPL pContainers on various parallel machines, including a massively parallel CRAY XT4 system and an IBM P5-575 cluster. We show that the pContainer methods, generic pAlgorithms, and different applications all provide good scalability on more than 10^4 processors.
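
    The core idea of a parallel container, a global index space resolved transparently to partitioned storage, can be sketched as follows (a hypothetical illustration, not STAPL's actual class hierarchy or API):

```cpp
// Hypothetical sketch of a partitioned container: the container resolves
// global indices to (partition, local offset) under a block distribution,
// so user code never deals with the distribution directly.
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

template <typename T>
class PartitionedArray {
    std::size_t n_, parts_;
    std::vector<std::vector<T>> storage_;   // one block per "location"
public:
    PartitionedArray(std::size_t n, std::size_t parts)
        : n_(n), parts_(parts), storage_(parts) {
        // Block distribution: each partition owns a contiguous range.
        std::size_t base = n / parts, extra = n % parts;
        for (std::size_t p = 0; p < parts; ++p)
            storage_[p].resize(base + (p < extra ? 1 : 0));
    }
    // Resolve a global index to its owning partition and local offset.
    std::pair<std::size_t, std::size_t> locate(std::size_t g) const {
        std::size_t base = n_ / parts_, extra = n_ % parts_;
        std::size_t big = (base + 1) * extra;   // elements held by larger blocks
        if (g < big) return {g / (base + 1), g % (base + 1)};
        return {extra + (g - big) / base, (g - big) % base};
    }
    T& operator[](std::size_t g) { auto [p, l] = locate(g); return storage_[p][l]; }
};

int main() {
    PartitionedArray<int> a(10, 3);           // 10 elements over 3 partitions
    for (std::size_t i = 0; i < 10; ++i) a[i] = static_cast<int>(i * i);
    auto [p, l] = a.locate(7);
    std::cout << "global 7 -> partition " << p << ", local " << l
              << ", value " << a[7] << '\n';  // partition 2, local 0, value 49
    return 0;
}
```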

    Interaction of a Turbulent Wind with Ocean Surface Waves - Numerical Modeling

    Wave modeling can be categorized in terms of different scales and theoretical frameworks. This dissertation focuses on the numerical modeling of wind-wave generation and its effects on wave growth and propagation. Categorized by scale and methodology, the wind-wave modeling in this dissertation covers two main topics. 1) Large-scale modeling: wind-wave development in real seas. As a phase-averaged model, SWAN is employed to study the wind-wave environment in the Persian Gulf and around Qatar. Wind-wave generation is parameterized as source terms in a spectral model. A special wind condition, the shamal, is investigated in particular. An experimental tower was installed near Doha Port, and, using video imagery, in situ wave features were extracted and compared. 2) Small-scale modeling: detailed wave development using CFD (Computational Fluid Dynamics). A curvilinear, surface-fitted moving-grid model for the three-dimensional Navier-Stokes equations is developed and used to simulate linear and nonlinear waves with fully nonlinear surface conditions. Also, by simplifying it to a fixed rectilinear grid based on Cartesian formulations, a DNS (Direct Numerical Simulation) model is developed with a fully coupled air-water domain and improved coupled interface conditions. Using this DNS model, the details of wind-wave generation from still water under an applied top shear wind are investigated. For the second topic, the CFD problems are solved by an in-house numerical tool, SPX. SPX is a general PDE (Partial Differential Equations) framework, developed in modern C++ (C++11/14/17) and currently aimed at structured domains. It is designed with modern software methodologies such as generic programming, meta-programming, and object-oriented programming. In addition, concept-based generic programming, an emerging software technology, is introduced for the first time into the design of a PDE numerical tool. Using these design methodologies, all significant components needed for solving PDEs, particularly for fluid and wave problems, are implemented in SPX. These components include high-performance numerical arrays, implicit solvers, grids, differential bases and operators, time integrators, and system infrastructure such as serialization and timers. On structured domains, a general PDE can be expressed as an arbitrary combination of general differential operators and arithmetic operators, which is the most challenging part of the SPX design. This research proposes a general stencil-operator design that integrates with concept-based expression templates. It is demonstrated that the proposed design can automatically deduce the stencils representing the resulting field operator from an arbitrary PDE expression at any given grid point. With the deduced stencils, a user-defined PDE expression is therefore numerically solvable by any solver. Consequently, SPX can be easily applied to any user-defined PDE problem on structured grids with arbitrary user-specified numerical components. Its design shows high flexibility and reusability without sacrificing efficiency. The development of SPX therefore demonstrates the viability of C++ Concepts in large-scale numerical framework design.
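
    The concept-based expression-template idea behind SPX's stencil design can be sketched in a few lines (names and structure are illustrative, not SPX's API; C++20 concepts stand in here for the concept techniques the thesis describes):

```cpp
// Minimal concept-based expression template for stencil operators:
// composing fields with differential and arithmetic operators builds a
// lazy expression type that is evaluated stencil-wise at each grid point.
#include <concepts>
#include <cstddef>
#include <iostream>
#include <vector>

// A "field expression" is anything evaluable at a grid index.
template <typename E>
concept Expr = requires(const E& e, std::size_t i) {
    { e(i) } -> std::convertible_to<double>;
};

struct Field {
    std::vector<double> v;
    double operator()(std::size_t i) const { return v[i]; }
};

// Second-order central-difference Laplacian, applied lazily.
template <Expr E>
struct Laplacian {
    E e; double h;
    double operator()(std::size_t i) const {
        return (e(i - 1) - 2.0 * e(i) + e(i + 1)) / (h * h);
    }
};

// Lazy sum of two expressions; other arithmetic operators follow the same pattern.
template <Expr A, Expr B>
struct Add {
    A a; B b;
    double operator()(std::size_t i) const { return a(i) + b(i); }
};

template <Expr E> Laplacian<E> lap(const E& e, double h) { return {e, h}; }
template <Expr A, Expr B> Add<A, B> operator+(const A& a, const B& b) { return {a, b}; }

int main() {
    const double h = 0.1;
    Field u{{0.0, 0.01, 0.04, 0.09, 0.16, 0.25}};  // u(x) = x^2 on a tiny grid
    Field f{{1.0, 1.0, 1.0, 1.0, 1.0, 1.0}};

    auto rhs = lap(u, h) + f;                      // builds the expression type
    for (std::size_t i = 1; i + 1 < u.v.size(); ++i)
        std::cout << rhs(i) << ' ';                // stencil evaluated on the fly, ~3 each
    std::cout << '\n';
    return 0;
}
```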

    Efficient Multidimensional Data Redistribution for Resizable Parallel Computations

    Traditional parallel schedulers running on cluster supercomputers support only static scheduling, where the number of processors allocated to an application remains fixed throughout the execution of the job. This leaves idle system resources underutilized, thereby decreasing overall system throughput. In our research, we have developed a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executing on distributed-memory platforms. The resizing library in ReSHAPE includes support for releasing and acquiring processors and for efficiently redistributing application state to a new set of processors. In this paper, we derive an algorithm for redistributing two-dimensional block-cyclic arrays from P to Q processors, each set organized as a 2-D processor grid (Pr × Pc and Qr × Qc, respectively). The algorithm ensures a contention-free communication schedule for data redistribution if Pr ≤ Qr and Pc ≤ Qc. In other cases, the algorithm applies circular row and column shifts to the communication schedule to minimize node contention.
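
    The index arithmetic underlying such a redistribution, determining which processor owns each block before and after the resize, can be sketched as follows (an illustration of the mapping only, not the paper's contention-free scheduling algorithm; it assumes the first P of the Q processors are retained across the resize):

```cpp
// Map each block of a block-cyclically distributed 2-D array to its owner
// under the old and new processor grids; blocks whose owner changes are
// exactly the ones a redistribution schedule must move.
#include <cstdio>

// Owner of block (bi, bj) under a block-cyclic layout on an r x c grid,
// using the row-major rank of the grid position.
static int owner(int bi, int bj, int r, int c) {
    return (bi % r) * c + (bj % c);
}

int main() {
    const int Pr = 2, Pc = 2;   // old 2 x 2 grid (P = 4 processors)
    const int Qr = 2, Qc = 3;   // new 2 x 3 grid (Q = 6 processors)
    const int nb = 4;           // 4 x 4 grid of blocks

    int moved = 0;
    for (int bi = 0; bi < nb; ++bi)
        for (int bj = 0; bj < nb; ++bj) {
            int src = owner(bi, bj, Pr, Pc);
            int dst = owner(bi, bj, Qr, Qc);
            if (src != dst) ++moved;
            std::printf("block (%d,%d): %d -> %d%s\n",
                        bi, bj, src, dst, src != dst ? "  moves" : "");
        }
    std::printf("%d of %d blocks change owner\n", moved, nb * nb);
    return 0;
}
```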

    Value Creation through Co-Opetition in Service Networks

    Well-defined interfaces and standardization allow for the composition of single Web services into value-added complex services. Such complex Web services are increasingly traded via agile marketplaces, facilitating the flexible recombination of service modules to meet heterogeneous customer demands. In order to coordinate the participants, this work introduces a mechanism design approach, the co-opetition mechanism, that is tailored to the requirements imposed by a networked and co-opetitive environment.

    Corporate Social Responsibility in Romania

    The purpose of this paper is to identify the main opportunities and limitations of corporate social responsibility (CSR). The survey was designed to cover as many relevant CSR topics as possible and to give the issue a more holistic perspective. It provides a basis for further comprehension and deeper analyses of specific CSR areas. The conditions determining the success of CSR in Romania are defined in the paper on the basis of previously accumulated knowledge as well as the results of various research studies. This paper provides knowledge which may be useful in programs promoting CSR.
    Keywords: corporate social responsibility, supportive policies, Romania

    Coastal management and adaptation: an integrated data-driven approach

    Coastal regions are among the most exposed to environmental hazards, yet the coast is the preferred settlement site for a high percentage of the global population, and most major global cities are located on or near the coast. This research adopts a predominantly anthropocentric approach to the analysis of coastal risk and resilience, centred on the pervasive hazards of coastal flooding and erosion. Coastal management decision-making practices are shown to be reliant on access to current and accurate information. However, constraints have been imposed on information flows between scientists, policy makers, and practitioners, owing to a lack of awareness and utilisation of available data sources. This research seeks to tackle this issue by evaluating how innovations in the use of data and analytics can further the application of science within decision-making processes related to coastal risk adaptation. In achieving this aim, a range of research methodologies has been employed, and the progression of topics covered marks a shift from themes of risk to resilience. The work focuses on a case study region of East Anglia, UK, benefiting from the input of a partner organisation responsible for the region's coasts: Coastal Partnership East. An initial review revealed how data can be utilised effectively within coastal decision-making practices, highlighting scope for the application of advanced Big Data techniques to the analysis of coastal datasets. The process of risk evaluation has been examined in detail, and the range of possibilities afforded by open source coastal datasets has been revealed. Subsequently, open source coastal terrain and bathymetric point cloud datasets were identified for 14 sites within the case study area. These were then utilised within a practical application of a geomorphological change detection (GCD) method. This revealed how analysis of high spatial and temporal resolution point cloud data can accurately reveal and quantify physical coastal impacts. Additionally, the research reveals how data innovations can facilitate adaptation through insurance; more specifically, how the use of empirical evidence in the pricing of coastal flood insurance can result in both the communication and the distribution of risk. The various strands of knowledge generated throughout this study reveal how an extensive range of data types, sources, and advanced forms of analysis can together allow coastal resilience assessments to be founded on empirical evidence. This research demonstrates how the application of advanced data-driven analytical processes can reduce the levels of uncertainty and subjectivity inherent in current coastal environmental management practices. Adoption of the methods presented within this research could further the possibilities for sustainable and resilient management of the incredibly valuable environmental resource that is the coast.
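
    The core of a GCD workflow, differencing two gridded elevation surfaces and keeping only changes above a level-of-detection threshold, can be sketched as follows (illustrative values only, not the thesis datasets or its exact method):

```cpp
// Difference two gridded elevation surfaces (e.g. rasterized from point
// clouds), discard changes below a level-of-detection threshold that
// reflects survey uncertainty, and tally erosion and deposition volumes.
#include <cmath>
#include <cstdio>

int main() {
    const int    n    = 4;        // tiny n x n DEM for illustration
    const double cell = 1.0;      // cell area, m^2
    const double lod  = 0.15;     // level of detection, m

    const double before[4][4] = {{2.0, 2.1, 2.2, 2.3},
                                 {2.0, 2.0, 2.1, 2.2},
                                 {1.9, 2.0, 2.0, 2.1},
                                 {1.8, 1.9, 2.0, 2.0}};
    const double after[4][4]  = {{2.0, 2.1, 2.0, 2.3},
                                 {1.7, 2.0, 2.1, 2.2},
                                 {1.9, 2.3, 2.0, 2.1},
                                 {1.8, 1.9, 2.0, 2.4}};

    double erosion = 0.0, deposition = 0.0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double dz = after[i][j] - before[i][j];
            if (std::fabs(dz) < lod) continue;           // below detection limit
            (dz < 0 ? erosion : deposition) += std::fabs(dz) * cell;
        }
    std::printf("erosion: %.2f m^3, deposition: %.2f m^3\n", erosion, deposition);
    return 0;
}
```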