10 research outputs found

    Polyhedral+Dataflow Graphs

    Get PDF
    This research presents an intermediate compiler representation designed for optimization, emphasizing the temporary storage requirements and execution schedule of a given computation to guide optimization decisions. The representation is expressed as a dataflow graph that describes computational statements and data mappings within the polyhedral compilation model. The targeted applications include both regular and irregular scientific domains. The intermediate representation can be integrated into existing compiler infrastructures. A specification language, implemented as a domain-specific language in C++, describes the graph components and the transformations that can be applied. The visual representation allows users to reason about optimizations. Graph variants can be translated into source code or other representations. The language, intermediate representation, and associated transformations have been applied to improve the performance of differential equation solvers, sparse matrix operations, tensor decomposition, and structured multigrid methods.
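    The abstract stops short of showing the specification language itself; the following is a minimal hypothetical C++ sketch of what an embedded graph specification with a storage-reducing fusion transformation might look like. Every type and name here (Statement, Graph, fuse, codegen) is invented for illustration and is not the actual API.

```cpp
// Hypothetical sketch of a C++-embedded specification language for
// polyhedral+dataflow graphs; all names are invented for illustration.
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// A computational statement with an iteration domain, in the spirit of the
// polyhedral model (domains are kept as plain strings here for brevity).
struct Statement {
    std::string name;
    std::string domain;   // e.g. "{ [i] : 0 <= i < N }"
    std::string body;     // e.g. "t[i] = 0.5*(u[2*i] + u[2*i+1])"
};

// A dataflow graph over statements; edges are producer->consumer data mappings.
struct Graph {
    std::vector<Statement> stmts;
    std::vector<std::pair<int, int>> edges;

    int add(Statement s) {
        stmts.push_back(std::move(s));
        return static_cast<int>(stmts.size()) - 1;
    }
    void flow(int from, int to) { edges.push_back({from, to}); }

    // A toy "fuse" transformation: merge two statements that share a domain,
    // eliminating the temporary storage between them -- the kind of decision
    // the representation is meant to expose.
    void fuse(int a, int b) {
        stmts[a].body += "; " + stmts[b].body;
        stmts[b].body = "";  // statement b is absorbed into a
    }

    void codegen() const {  // emit pseudo-C for each remaining statement
        for (const auto& s : stmts)
            if (!s.body.empty())
                std::cout << "for " << s.domain << " { " << s.body << " }\n";
    }
};

int main() {
    Graph g;
    int interp = g.add({"interp", "{ [i] : 0 <= i < N }",
                        "t[i] = 0.5*(u[2*i] + u[2*i+1])"});
    int smooth = g.add({"smooth", "{ [i] : 0 <= i < N }", "v[i] = 0.25*t[i]"});
    g.flow(interp, smooth);  // t is a temporary produced by interp, read by smooth
    g.fuse(interp, smooth);  // fusing removes the temporary array t entirely
    g.codegen();
}
```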

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    Get PDF
    The increasing demand for processing a growing number of applications and related data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms need to be energy-efficient and reliable, and to perform secure computations in the interest of the whole community. This book provides perspectives on these aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.

    Reducing the Complexity of Heterogeneous Computing: A Unified Approach for Application Development and Runtime Optimization

    Get PDF
    Heterogeneous systems with accelerators promise considerable performance improvements at a lower cost than homogeneous CPU-only systems. However, to benefit from this potential, considerable work is required from developers to integrate accelerators efficiently into an application. This work contributes a new framework, implemented with an online-learning runtime system, that simplifies development and makes applications more portable, efficient, and reliable across different systems.
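    The abstract does not describe the runtime's actual API, so the following C++ sketch only illustrates the general idea of an online-learning scheduler: an epsilon-greedy loop that learns which device executes a task faster. The Device enum, the execute_on stub, and all constants are assumptions, not the framework's interface.

```cpp
// Minimal sketch of online-learning device selection; illustrative only.
#include <chrono>
#include <iostream>
#include <random>

enum Device { CPU = 0, GPU = 1 };

// Stand-in for a real kernel launch on the chosen device.
void execute_on(Device) {}

int main() {
    double avg[2] = {0.0, 0.0};  // running mean runtime per device (ms)
    int runs[2] = {0, 0};
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coin(0.0, 1.0);

    for (int task = 0; task < 100; ++task) {
        // Epsilon-greedy: mostly exploit the device measured as faster,
        // occasionally explore the other one to keep the model fresh.
        Device d;
        if (coin(rng) < 0.1 || runs[CPU] == 0 || runs[GPU] == 0)
            d = static_cast<Device>(task % 2);
        else
            d = (avg[CPU] <= avg[GPU]) ? CPU : GPU;

        auto t0 = std::chrono::steady_clock::now();
        execute_on(d);
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - t0).count();

        avg[d] = (avg[d] * runs[d] + ms) / (runs[d] + 1);  // update the mean
        ++runs[d];
    }
    std::cout << "mean ms: CPU=" << avg[CPU] << " GPU=" << avg[GPU] << "\n";
}
```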

    Workflow models for heterogeneous distributed systems

    Get PDF
    The role of data in modern scientific workflows is becoming more and more crucial. The unprecedented amount of data available in the digital era, combined with recent advances in Machine Learning and High-Performance Computing (HPC), has let computers surpass human performance in a wide range of fields, such as Computer Vision, Natural Language Processing and Bioinformatics. However, a solid data management strategy is crucial for key aspects like performance optimisation, privacy preservation and security. Most modern programming paradigms for Big Data analysis adhere to the principle of data locality: moving computation closer to the data to remove transfer-related overheads and risks. Still, there are scenarios in which it is worthwhile, or even unavoidable, to transfer data between different steps of a complex workflow. The contribution of this dissertation is twofold. First, it defines a novel methodology for distributed modular applications, allowing topology-aware scheduling and data management while separating business logic, data dependencies, parallel patterns and execution environments. Second, it introduces computational notebooks as a high-level, user-friendly interface to this new kind of workflow, aiming to flatten the learning curve and improve the adoption of the methodology. Each contribution is accompanied by a full-fledged, open-source implementation, which has been used for evaluation and allows the interested reader to experience the related methodology first-hand. The validity of the proposed approaches has been demonstrated on five real scientific applications in the domains of Deep Learning, Bioinformatics and Molecular Dynamics simulation, executed on large-scale mixed cloud-HPC infrastructures.
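    As a concrete illustration of the data-locality principle the abstract invokes, here is a small hypothetical C++ sketch of topology-aware placement: a step is scheduled on the site that already holds most of its input bytes, minimising transfer volume. The Input struct, the place function, and the site names are all invented for illustration.

```cpp
// Toy topology-aware placement by data locality; names are illustrative.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Input { std::string site; long bytes; };  // where a dataset lives

std::string place(const std::vector<Input>& inputs) {
    std::map<std::string, long> local;           // bytes already at each site
    for (const auto& in : inputs) local[in.site] += in.bytes;
    std::string best;
    long most = -1;
    for (const auto& [site, bytes] : local)
        if (bytes > most) { most = bytes; best = site; }
    return best;                                  // minimises transfer volume
}

int main() {
    std::vector<Input> training = {{"hpc", 500'000'000}, {"cloud", 20'000'000}};
    std::cout << "run step on: " << place(training) << "\n";  // -> hpc
}
```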

    Actor-Oriented Programming for Resource Constrained Multiprocessor Networks on Chip

    Get PDF
    Multiprocessor Networks on Chip (MPNoCs) are an attractive architecture for integrated circuits as they can benefit from the improved performance of ever smaller transistors without being severely constrained by the poor performance of global on-chip wires. As the number of processors increases, it becomes ever more expensive to provide coherent shared memory, yet this is a foundational assumption of thread-level parallelism. Threaded models of concurrency cannot efficiently address architectures where shared memory is not coherent or does not exist. In this thesis an extended actor-oriented programming model is proposed to enable the design of complex, general-purpose software for highly parallel and decentralised multiprocessor architectures. This model requires the encapsulation of an execution context and state into isolated Machines, which may only initiate communication with one another via explicitly named channels. The emphasis on message passing and strong isolation of computation encourages application structures that are congruent with the nature of non-shared-memory multiprocessors, and the model also avoids creating dependencies on specific hardware topologies. A realisation of the model called Machine Java is presented to demonstrate the applicability of the model to a general-purpose programming language. Applications designed with this framework are shown to scale to large numbers of processors while remaining independent of the hardware target. Through the use of an efficient compilation technique, Machine Java is demonstrated to be portable across several architectures and viable even in the highly constrained context of an FPGA-hosted MPNoC.
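    Machine Java itself is a Java framework and its API is not given in this abstract; the following C++ sketch merely illustrates the programming model described: strongly isolated machines that encapsulate their own state and communicate only through explicitly named channels, with no shared mutable state between them.

```cpp
// Language-neutral sketch of the machine/channel model; not Machine Java's API.
#include <iostream>
#include <map>
#include <queue>
#include <string>

// Channels are the only communication paths between machines, and each
// channel is addressed by an explicit name, as the model requires.
std::map<std::string, std::queue<int>> channels;

// A machine encapsulates its own state and execution context.
struct Machine {
    virtual void step() = 0;
    virtual ~Machine() = default;
};

struct Producer : Machine {
    int next = 0;  // private state, invisible to other machines
    void step() override { channels["data"].push(next++); }
};

struct Consumer : Machine {
    long sum = 0;
    void step() override {
        auto& ch = channels["data"];
        if (!ch.empty()) { sum += ch.front(); ch.pop(); }
        std::cout << "sum=" << sum << "\n";
    }
};

int main() {
    Producer p;
    Consumer c;
    for (int i = 0; i < 3; ++i) { p.step(); c.step(); }  // toy cooperative schedule
}
```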

    A Software Toolchain for Variability Awareness on Heterogenous Multicore Platforms

    No full text
    Workload allocation in embedded multicore platforms is an increasingly challenging issue due to the heterogeneity of components and their parallelism. Additionally, the impact of process variations in current and next-generation technology nodes is becoming relevant and cannot be compensated at the device or architectural level. Intra-die process variations arising at the core and platform levels make parallel multicore platforms intrinsically heterogeneous, because the various cores are clocked at different operational frequencies. Power consumption becomes heterogeneous too, considering both dynamic and leakage consumption. In this context, to fully exploit the computational capability of the platform's parallelism, variability-aware task allocation strategies must be adopted. Despite the considerable research on variability-aware task allocation policies, little effort has been devoted to making available to programmers a software toolchain that enables the exploitation of these policies. Such a toolchain needs to exploit fabrication-level information about core clock speed and power consumption. In this work, we present a methodology and the associated toolchain for programming in the presence of process variability, integrating power and performance variability information into all steps of the toolchain. To this purpose, the proposed approach is vertically integrated, from high-level modelling down to runtime management. Variability information is introduced through an XML configuration file that is exploited by the toolchain components to make the appropriate runtime allocation decisions. We demonstrate the proposed toolchain using state-of-the-art variability-aware task allocation policies on two multicore platforms: i) the MIPS-based GENEPY simulator with 4 and 8 parallel homogeneous cores, and ii) the Tegra2-based Zynq platform, where the on-board FPGA has been used to map 10 MicroBlaze slave cores. Experiments show that the proposed toolchain supports the integration of variability awareness in a simple yet effective programming environment.
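    As an illustration of the kind of runtime decision such a toolchain enables, here is a hypothetical C++ sketch of a variability-aware allocation policy: per-core frequency and leakage figures (supplied by the XML configuration file in the real flow) drive a greedy mapping of the heaviest tasks onto the fastest cores. All structures, names, and numbers below are invented.

```cpp
// Toy variability-aware task allocation; figures would come from the XML
// configuration file in the actual toolchain. Illustrative only.
#include <algorithm>
#include <iostream>
#include <vector>

struct Core { int id; double freq_mhz; double leak_mw; };  // per-core variability data
struct Task { int id; double cycles; };

int main() {
    // Cores of one die, clocked differently because of process variation.
    std::vector<Core> cores = {{0, 480, 12}, {1, 520, 15}, {2, 450, 10}, {3, 505, 14}};
    std::vector<Task> tasks = {{0, 9e6}, {1, 4e6}, {2, 7e6}, {3, 2e6}};

    // Greedy variability-aware policy: fastest core gets the largest task.
    std::sort(cores.begin(), cores.end(),
              [](const Core& a, const Core& b) { return a.freq_mhz > b.freq_mhz; });
    std::sort(tasks.begin(), tasks.end(),
              [](const Task& a, const Task& b) { return a.cycles > b.cycles; });

    for (size_t i = 0; i < tasks.size(); ++i)
        std::cout << "task " << tasks[i].id << " -> core " << cores[i].id
                  << " (" << tasks[i].cycles / (cores[i].freq_mhz * 1e3)
                  << " ms est.)\n";  // MHz = 1e3 cycles per ms
}
```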

    Software for Exascale Computing - SPPEXA 2016-2019

    Get PDF
    This open access book summarizes the research done and the results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer's series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA's first funding phase, and provides an overview of SPPEXA's contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    XSEDE: eXtreme Science and Engineering Discovery Environment Third Quarter 2012 Report

    Get PDF
    The Extreme Science and Engineering Discovery Environment (XSEDE) is the most advanced, powerful, and robust collection of integrated digital resources and services in the world. It is an integrated cyberinfrastructure ecosystem with singular interfaces for allocations, support, and other key services that researchers can use to interactively share computing resources, data, and expertise. This is a report of project activities and highlights from the third quarter of 2012. National Science Foundation, OCI-105357