
    Symbolic crosschecking of data-parallel floating-point code


    Towards co-designed optimizations in parallel frameworks: A MapReduce case study

    The explosion of Big Data was followed by the proliferation of numerous complex parallel software stacks that aim to tackle the challenges of the data deluge. A drawback of such a multi-layered, hierarchical deployment is the inability to maintain and propagate vital semantic information between layers in the stack. Software abstractions increase the semantic distance between an application and its generated code. However, parallel software frameworks contain inherent semantic information that general-purpose compilers are not designed to exploit. This paper presents a case study demonstrating how the specific semantic information of the MapReduce paradigm can be exploited on multicore architectures. MR4J, a MapReduce framework implemented in Java, has been evaluated against hand-optimized C and C++ equivalents. The initial observed results led to the design of a semantically aware optimizer that runs automatically, without requiring modification to application code. The optimizer is able to speed up the execution of MR4J by up to 2.0x. The introduced optimization not only improves the performance of the generated code during the map phase, but also reduces pressure on the garbage collector. This demonstrates how semantic information can be harnessed without sacrificing sound software engineering practices when using parallel software frameworks. Comment: 8 pages.
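
    As a hedged illustration of the setting (not MR4J's actual API, which the abstract does not show), the sketch below is a minimal word count in plain Java with explicit map and reduce phases. The per-emission allocation in the map phase is the kind of garbage-collector pressure a semantics-aware optimizer can remove, for instance by reusing intermediate buffers it knows are consumed exactly once by the reduce phase.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class WordCountSketch {
            // Map phase: emit a "1" per word occurrence. Each emission grows a
            // per-key list, which is exactly the allocation behavior that puts
            // pressure on the garbage collector.
            static Map<String, List<Integer>> map(String[] lines) {
                Map<String, List<Integer>> intermediate = new HashMap<>();
                for (String line : lines) {
                    for (String word : line.split("\\s+")) {
                        intermediate.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
                    }
                }
                return intermediate;
            }

            // Reduce phase: fold each key's emissions into a single count.
            static Map<String, Integer> reduce(Map<String, List<Integer>> inter) {
                Map<String, Integer> counts = new HashMap<>();
                inter.forEach((word, ones) -> counts.put(word, ones.size()));
                return counts;
            }

            public static void main(String[] args) {
                String[] corpus = { "the map phase", "the reduce phase" };
                // Prints e.g. {phase=2, reduce=1, the=2, map=1} (HashMap order varies).
                System.out.println(reduce(map(corpus)));
            }
        }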

    Using the High Productivity Language Chapel to Target GPGPU Architectures

    It has been widely shown that GPGPU architectures offer large performance gains over their traditional CPU counterparts for many applications. The downside to these architectures is that the current programming models present numerous challenges to the programmer: lower-level languages, explicit data movement, loss of portability, and challenges in performance optimization. In this paper, we present novel methods and compiler transformations that increase productivity by enabling users to easily program GPGPU architectures using the high-productivity programming language Chapel. Rather than resorting to different parallel libraries or annotations for a given parallel platform, we leverage a language that has been designed from first principles to address the challenge of programming for parallelism and locality. This also has the advantage of being portable across distinct classes of parallel architectures, including desktop multicores, distributed-memory clusters, large-scale shared memory, and now CPU-GPU hybrids. We present experimental results from the Parboil benchmark suite which demonstrate that codes written in Chapel achieve performance comparable to the original versions implemented in CUDA. NSF CCF 0702260; Cray Inc. Cray-SRA-2010-01696; 2010-2011 Nvidia Research Fellowship. Unpublished; not peer reviewed.
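
    Chapel code is not reproduced in the abstract, so the productivity contrast it describes can only be sketched by analogy; the Java fragment below (an assumed SAXPY example, not taken from Parboil) shows the high-level style the paper targets: one declarative data-parallel loop, with scheduling left to the runtime (parallel streams on CPU cores here, where the paper's compiler generates GPU code).

        import java.util.Arrays;
        import java.util.stream.IntStream;

        public class SaxpyAnalogy {
            public static void main(String[] args) {
                int n = 1 << 20;
                float a = 2.0f;
                float[] x = new float[n], y = new float[n];
                Arrays.fill(x, 1.0f);
                Arrays.fill(y, 3.0f);

                // One declarative parallel loop: no explicit data movement,
                // no kernel launch, no per-thread index arithmetic.
                IntStream.range(0, n).parallel().forEach(i -> y[i] = a * x[i] + y[i]);

                System.out.println(y[0]); // 5.0
            }
        }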

    The Honeycomb Architecture: Prototype Analysis and Design

    Due to the inherent potential of parallel processing, a lot of attention has focused on massively parallel computer architectures. To a large extent, the performance of a massively parallel architecture is a function of the flexibility of its communication network. The ability to configure the topology of the machine determines the ease with which problems are mapped onto the architecture. If the machine is sufficiently flexible, the architecture can be configured to match the natural structure of a wide range of problems. There are essentially four unique types of massively parallel architectures:

    1. Cellular Arrays
    2. Lattice Architectures [21, 30]
    3. Connection Architectures [19]
    4. Honeycomb Architectures [24]

    All four architectures are classified as SIMD. Each, however, offers a slightly different solution to the mapping problem. The first three approaches are characterized by easily distinguishable processor, communication, and memory components. In contrast, the Honeycomb architecture contains multipurpose processing/communication/memory cells. Each cell can function as a simple CPU, a memory cell, or an element of a communication bus.

    The conventional approach to massive parallelism is the cellular array. It typically consists of an array of processing elements arranged in a mesh pattern with hard-wired connections between neighboring processors. Due to their fixed topology, cellular arrays impose severe limitations upon interprocessor communication. The lattice architecture is a somewhat more flexible approach: a lattice of processing elements embedded in an array of simple switching elements, which together form a programmable interconnection network. A lattice architecture can be configured in a number of different topologies, but it is still only a partial solution to the mapping problem. The connection architecture offers a more comprehensive solution. It consists of a cellular array integrated into a packet-switched communication network that provides transparent communication between all processing elements. Because the communication network is physically abstracted from the processor array, the processors can evolve independently of the network.

    The Honeycomb architecture offers a unique solution to the mapping problem. It consists of an array of identical processing/communication/memory cells, each of which can function as a processor cell, a communication cell, or a memory cell. Collections of Honeycomb cells can be grouped into multi-cell CPUs, multi-cell memories, or multi-cell CPU-memory systems; the latter are hereafter referred to as processing clusters. The topology of the Honeycomb is determined at compilation time: during a preprocessing phase, the Honeycomb is adjusted to the desired topology. The Honeycomb cell itself is extremely simple, capable of only simple arithmetic and logic operations; this simplicity is the key to the Honeycomb concept. As indicated in [24], there are two main research avenues to pursue in furthering the Honeycomb concept:

    1. Analyzing the design of a uniform Honeycomb cell
    2. Mapping algorithms onto the Honeycomb architecture

    This technical report concentrates on the first issue. While alluded to throughout the report, the second issue is not addressed in any detail.
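
    To make the cell-role idea concrete, here is a small illustrative model in Java (an exposition aid only; the report's actual cell design is far more detailed than this): a grid of identical cells, each assigned a role during a preprocessing phase, after which the topology stays fixed.

        public class HoneycombSketch {
            enum Role { PROCESSOR, MEMORY, COMM }

            static final class Cell {
                Role role; // assigned once, during the preprocessing phase
            }

            public static void main(String[] args) {
                // A 3x3 patch of identical, multipurpose cells.
                Cell[][] grid = new Cell[3][3];
                for (int r = 0; r < 3; r++)
                    for (int c = 0; c < 3; c++)
                        grid[r][c] = new Cell();

                // Carve the patch into a two-cell CPU, a bus element, and a
                // memory cell; together these form one processing cluster.
                grid[0][0].role = Role.PROCESSOR;
                grid[0][1].role = Role.PROCESSOR;
                grid[1][1].role = Role.COMM;
                grid[2][1].role = Role.MEMORY;

                // From here on the topology is fixed, mirroring the report's
                // compile-time determination of the Honeycomb topology.
            }
        }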

    Hardware Barrier Synchronization: Static Barrier MIMD (SBM)

    In this paper, we give the design and performance analysis of a new, highly efficient synchronization mechanism called “Static Barrier MIMD,” or “SBM.” Unlike traditional barrier synchronization, the proposed barriers are designed to facilitate the use of static (compile-time) code scheduling for eliminating some synchronizations. For this reason, our barrier hardware is more general than most hardware barrier mechanisms, allowing any subset of the processors to participate in each barrier. Since code scheduling typically operates on fine-grain parallelism, it is also vital that barriers be able to execute in a small number of clock ticks. The SBM is actually only one of two new classes of barrier machines proposed to facilitate static code scheduling; the other architecture is the “Dynamic Barrier MIMD,” or “DBM,” which is described in a companion paper. The DBM differs from the SBM in that the DBM employs more complex hardware to make the system less dependent on the precision of the static analysis and code scheduling; for example, an SBM cannot efficiently manage simultaneous execution of independent parallel programs, whereas a DBM can.
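
    As a software analogy only (java.util.concurrent rather than the SBM hardware, whose barriers complete in a few clock ticks), the sketch below shows the defining property that any subset of the processors may participate in a barrier: two of four threads share a barrier, while the other two never synchronize at all.

        import java.util.concurrent.CyclicBarrier;

        public class SubsetBarrierSketch {
            public static void main(String[] args) {
                CyclicBarrier barrier = new CyclicBarrier(2); // participating subset of size 2

                Runnable participant = () -> {
                    try {
                        // ... statically scheduled work before the barrier ...
                        barrier.await(); // SBM hardware: a few ticks; here: OS-level blocking
                        // ... work the schedule places after the barrier ...
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                };
                Runnable independent = () -> { /* runs with no synchronization */ };

                new Thread(participant).start();
                new Thread(participant).start();
                new Thread(independent).start();
                new Thread(independent).start();
            }
        }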

    Speculative dynamic vectorization

    Traditional vector architectures have been shown to be very effective for regular codes, where the compiler can detect data-level parallelism. However, SIMD parallelism is also present in irregular or pointer-rich codes, where the compiler is far more limited in its ability to discover it. In this paper we propose a microarchitecture extension that exploits SIMD parallelism speculatively. The idea is to predict, based on history information, when certain operations are likely to be vectorizable; when vectorization is predicted, those scalar instructions are executed in vector mode. These vector instructions operate on several elements (vector operands) that are anticipated to be their input operands, and they produce outputs that are stored in a vector register for use by later instructions. Verification of the applied vectorization eventually changes the status of a given vector element from speculative to non-speculative, or alternatively generates a recovery action in case of misspeculation. The proposed microarchitecture extension, applied to a 4-way issue superscalar processor with one wide bus, is 19% faster than the same processor with four scalar buses to the L1 data cache. This speedup is due mainly to: 1) a reduction in the number of memory accesses (15% for SpecInt, 20% for SpecFP); 2) the transformation of scalar arithmetic instructions into their vector counterparts (28% for SpecInt, 23% for SpecFP); and 3) the exploitation of control independence for mispredicted branches. Peer reviewed. Postprint (published version).
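
    The abstract does not specify the predictor's organization, so the sketch below borrows a standard two-bit saturating-counter scheme from branch prediction as an assumed stand-in: a table indexed by instruction address that predicts whether an instruction is likely vectorizable, trained by the outcome of the verification step.

        public class VectorizePredictorSketch {
            private final int[] counters; // 0..3; values >= 2 predict "vectorizable"
            private final int mask;

            VectorizePredictorSketch(int entries) { // entries must be a power of two
                counters = new int[entries];
                mask = entries - 1;
            }

            boolean predictVectorizable(long pc) {
                return counters[(int) (pc & mask)] >= 2;
            }

            // Called after verification: strengthen the entry when speculation
            // was correct, weaken it after a misspeculation (recovery action).
            void train(long pc, boolean wasVectorizable) {
                int i = (int) (pc & mask);
                counters[i] = wasVectorizable ? Math.min(3, counters[i] + 1)
                                              : Math.max(0, counters[i] - 1);
            }

            public static void main(String[] args) {
                VectorizePredictorSketch p = new VectorizePredictorSketch(1024);
                long pc = 0x400123L;            // hypothetical instruction address
                p.train(pc, true);
                p.train(pc, true);
                System.out.println(p.predictVectorizable(pc)); // true
            }
        }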

    Communications for Next Generation single chip computers

    It is the thesis of this report that much of what is presently thought to require specialized VLSI functions might instead be achieved by combinations of fast general-purpose single-chip computers with upgraded communication facilities. To this end, the characteristics of applications of this nature are first surveyed briefly and some working principles established. In the light of these, three different chip philosophies are explored in some detail. This study shows that some upgrading of typical single-chip I/O will definitely be necessary, but that this upgrading need not be complex, and that true multiprocessor-multibus operation could be achieved without excessive cost.

    Memory performance of AND-parallel Prolog on shared-memory architectures

    The goal of the RAP-WAM AND-parallel Prolog abstract architecture is to provide inference speeds significantly beyond those of sequential systems, while supporting Prolog semantics and preserving sequential performance and storage efficiency. This paper presents simulation results supporting these claims, with special emphasis on memory performance on a two-level shared-memory multiprocessor organization. Several solutions to the cache coherency problem are analyzed. It is shown that RAP-WAM offers good locality and storage efficiency and that it can effectively take advantage of broadcast caches. It is argued that speeds in excess of 2 MLIPS on real applications exhibiting medium parallelism can be attained with current technology.

    Guppy: Process-Oriented Programming on Embedded Devices

    Guppy is a new and experimental process-oriented programming language, taking much inspiration (and some of its code base) from the existing occam-pi language. This paper reports on several aspects of this effort, specifically language, compiler, and run-time system development, enabling Guppy programs to run on desktop and embedded systems. A native code-generation approach is taken, using C as the intermediate language, with stack-space requirements determined at compile time.
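
    Guppy syntax is not reproduced in the abstract, so as a loose analogy the Java sketch below mimics the process-oriented model: two processes that share no state and communicate only over an unbuffered rendezvous channel (a SynchronousQueue standing in for an occam-pi-style channel).

        import java.util.concurrent.SynchronousQueue;

        public class ChannelSketch {
            public static void main(String[] args) {
                SynchronousQueue<Integer> chan = new SynchronousQueue<>(); // unbuffered channel

                Thread producer = new Thread(() -> {
                    try {
                        for (int i = 0; i < 5; i++) chan.put(i); // blocks until taken
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
                Thread consumer = new Thread(() -> {
                    try {
                        for (int i = 0; i < 5; i++) System.out.println(chan.take());
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                producer.start();
                consumer.start();
            }
        }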