5 research outputs found

    A remote memory access infrastructure for global address space programming models in FPGAs

    ABSTRACT: We propose a shared-memory communication infrastructure that provides a common parallel programming interface for FPGA and CPU components in a heterogeneous system. Our intent is to ease the integration of reconfigurable hardware into parallel programming models like Partitioned Global Address Space (PGAS). For this purpose, we introduce a remote memory access component based on Active Messages that implements the core API of the Berkeley GASNet communication library, and a simple controller that manages communication and synchronization for custom FPGA cores. We demonstrate how these components deliver a simple and easily configurable communication mechanism between distributed memories in a multi-FPGA system with processors as well as custom hardware nodes.

    MOTIVATION: High-Performance Reconfigurable Computing (HPRC) systems present two main challenges to application programmers: which parallel programming model to use, and how to incorporate reconfigurable hardware into a software application.

    The first problem, inherent to all distributed computing, is what model of the existing hardware and memory distribution to present to the application programmer. This has implications for how data is distributed and communicated across the system, how computations are synchronized, and how explicitly the programmer has to consider the physical makeup of the system. At one end of the spectrum, Shared Memory presents a unified address space to the programmer, similar to the one found on a single host. At the other end, Distributed Memory only lets the programmer access local memory, and all data exchange with other nodes happens explicitly through communication known as Message Passing. The shared memory model is easy to program, but often leads to inefficient code, since the compiler cannot sufficiently reason about data access and communication patterns. The distributed model can produce very efficient implementations, but is cumbersome to program.

    The second problem stems from the fact that most high-performance application programmers understand software and CPU-based systems, but not reconfigurable hardware. Part of that problem is being attacked by emerging tools that translate high-level language CPU code into Register-Transfer Language, with mixed results so far. Besides an automatic synthesis path, however, applications also require an infrastructure for communication between software and hardware computation nodes: the equivalent of a communication API between CPU hosts. Preferably, this infrastructure should be independent of specific FPGA platforms, given the multitude of concepts and products that connect FPGAs with CPU-based host systems.

    Both problems point to the larger issue of increasing software and hardware complexity. Performance and efficiency are still the most common metrics for computing systems, but productivity, measured as the effort required to design, debug and maintain high-performance computing applications, has been recognized as essential to continued progress towards exascale systems. In our opinion, a unified programming model and API for all components in a heterogeneous system would address both of these problems. In this paper, we present our vision of a C++-based application design process based on the Partitioned Global Address Space (PGAS) model. As our main contribution, we introduce an FPGA communication infrastructure compatible with GASNet [12], an existing PGAS communication library.
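    The abstract above is built around the Berkeley GASNet core API, whose central mechanism is Active Messages: short requests that invoke a registered handler on the target node, which may in turn send a reply. As a rough illustration of the programming interface the FPGA component mirrors, here is a minimal ping/pong sketch in C against the GASNet-1 core API; the handler indices, argument values, and initialization flow are assumptions based on the public GASNet-1 specification, not details taken from the paper.

```c
#define GASNET_SEQ 1      /* GASNet-1 clients must pick a threading mode */
#include <gasnet.h>
#include <stdio.h>

#define PING_IDX 128      /* client handler indices start at 128 */
#define PONG_IDX 129

static volatile int got_pong = 0;

/* Request handler: runs on the target node and replies to the sender. */
static void ping_handler(gasnet_token_t token, gasnet_handlerarg_t arg0) {
    gasnet_AMReplyShort1(token, PONG_IDX, arg0);
}

/* Reply handler: runs back on the requesting node. */
static void pong_handler(gasnet_token_t token, gasnet_handlerarg_t arg0) {
    (void) token; (void) arg0;
    got_pong = 1;
}

int main(int argc, char **argv) {
    gasnet_handlerentry_t handlers[] = {
        { PING_IDX, (void (*)()) ping_handler },
        { PONG_IDX, (void (*)()) pong_handler },
    };
    gasnet_init(&argc, &argv);
    gasnet_attach(handlers, 2, GASNET_PAGESIZE, 0);

    if (gasnet_mynode() == 0 && gasnet_nodes() > 1) {
        /* Send a short Active Message request to node 1. */
        gasnet_AMRequestShort1(1, PING_IDX, 42);
        GASNET_BLOCKUNTIL(got_pong);   /* poll until the reply handler fires */
        printf("node 0 received pong\n");
    }
    gasnet_exit(0);
    return 0;
}
```

    The point of the paper's infrastructure is that a custom FPGA core can participate in exactly this kind of exchange as a peer, without the application distinguishing between hardware and software nodes.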

    Constructing cluster of simple FPGA boards for cryptologic computations

    In this thesis, we propose an FPGA cluster infrastructure that can be used to implement cryptanalytic attacks and to accelerate cryptographic operations. The cluster can be formed from simple and inexpensive, off-the-shelf FPGA boards featuring an FPGA device, local storage, a CPLD, and a network connection. Forming the cluster is simple, and no hardware development effort is needed beyond the design of the actual computation. Using a softcore processor on the FPGA, we can configure FPGA devices dynamically and change their configuration on the fly from a remote computer. The softcore can also execute relatively complicated programs for mundane tasks that do not merit dedicated FPGA resources. Finally, we propose and implement a fast and efficient dynamic configuration switch technique that is shown to be useful especially in cryptanalytic applications. Our infrastructure provides a cost-effective alternative to previously proposed cryptanalytic engines based on FPGA devices.
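    The dynamic configuration switch described above hinges on the softcore receiving bitstreams over the network and streaming them into the FPGA's internal configuration port. The sketch below shows what such a control loop might look like in C on the softcore; `recv_exact`, `icap_write_word`, and the length-prefixed framing are hypothetical stand-ins for illustration, not the protocol from the thesis.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical board-support helpers (not from the thesis):
 *   recv_exact: blocking read of exactly n bytes from a network socket
 *   icap_write_word: write one 32-bit word to the configuration port  */
extern int  recv_exact(int sock, void *buf, size_t n);
extern void icap_write_word(uint32_t word);

/* Receive one length-prefixed bitstream and stream it into the
 * configuration port, switching the fabric to the new design. */
int load_remote_bitstream(int sock) {
    uint32_t len_words;
    if (recv_exact(sock, &len_words, sizeof len_words) != 0)
        return -1;                      /* connection error */

    for (uint32_t i = 0; i < len_words; i++) {
        uint32_t word;
        if (recv_exact(sock, &word, sizeof word) != 0)
            return -1;
        icap_write_word(word);          /* new configuration takes effect
                                           once the full stream is in   */
    }
    return 0;
}

/* Main loop of the softcore: serve reconfiguration requests forever,
 * leaving the fabric itself free for the cryptologic computation. */
void serve(int sock) {
    while (load_remote_bitstream(sock) == 0)
        ;
}
```

    Keeping this mundane network and configuration logic on the softcore is what lets the cluster nodes be reconfigured remotely without spending FPGA fabric on it.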

    Safe and scalable parallel programming with session types

    Parallel programming is a technique that can coordinate and utilise multiple hardware resources simultaneously to improve overall computation performance. However, reasoning about the communication interactions between the resources is difficult. Moreover, scaling an application often increases the number and complexity of interactions, hence we need a systematic way to ensure the correctness of the communication aspects of parallel programs. In this thesis, we take an interaction-centric view of parallel programming, and investigate applying and adapting the theory of Session Types, a formal typing discipline for structured interaction-based communication, to guarantee the absence of communication mismatches and deadlocks in concurrent systems. We focus on scalable, distributed parallel systems that use message passing for communication. We explore programming language primitives, tools and frameworks to simplify parallel programming. First, we present the design and implementation of Session C, a programming toolchain for message-passing parallel programming. Session C can ensure deadlock freedom, communication safety and global progress through static type checking, and supports optimisations by refinements through session subtyping. Then we introduce Pabble, a protocol description language for designing parametric interaction protocols. The language can capture scalable interaction patterns found in parallel applications, and guarantees communication safety and deadlock freedom despite the undecidability of the underlying parameterised session type theory. Next, we demonstrate an application of Pabble in a workflow that combines Pabble protocols with computation kernel code describing the sequential computation behaviours to generate a Message Passing Interface (MPI) parallel application. The framework guarantees, by construction, that generated code is free from communication errors and deadlocks. Finally, we formalise an extension of binary session types and new language primitives for safe and efficient implementations of multiparty parallel applications in a binary server-client programming environment. Our exploration of session-based parallel programming shows that it is a feasible and practical approach to guaranteeing the communication aspects of complex, interaction-based, scalable parallel programming.
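    To give a concrete flavour of the communication patterns involved, the sketch below is a simple ring pipeline in MPI C, the kind of scalable, parametric interaction pattern that a Pabble protocol can describe and that the code-generation workflow guarantees deadlock-free by construction. It is an illustrative hand-written example, not output of the Session C or Pabble toolchain.

```c
#include <mpi.h>
#include <stdio.h>

/* Ring pipeline: each rank sends to its right neighbour and receives
 * from its left. Using MPI_Sendrecv pairs each send with a receive,
 * so the exchange is deadlock-free for any number of processes --
 * the property session-type-based generation enforces by construction. */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;          /* next rank in the ring     */
    int left  = (rank - 1 + size) % size;   /* previous rank in the ring */

    int token = rank;                       /* payload: this rank's id   */
    int received;

    MPI_Sendrecv(&token, 1, MPI_INT, right, 0,
                 &received, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received token %d from rank %d\n",
           rank, received, left);

    MPI_Finalize();
    return 0;
}
```

    A hand-written version of this pattern using blocking MPI_Send/MPI_Recv in the wrong order can deadlock once every rank blocks on its send; ruling out that class of error statically, for any parametric number of participants, is precisely what the session-typed workflow provides.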