
    DynaProg for Scala

    Dynamic programming is an algorithmic technique for solving problems that follow Bellman's principle: optimal solutions depend on optimal solutions to sub-problems. The core idea behind dynamic programming is to memoize intermediate results in matrices to avoid repeated computation. Solving a dynamic programming problem consists of two phases: filling one or more matrices with intermediate solutions to sub-problems, and recomposing how the final result was constructed (backtracking). In textbooks, problems are usually described in terms of recurrence relations between matrix elements. Expressing dynamic programming problems as recursive formulae over matrix indices can be difficult and is often error-prone, and the notation does not capture the essence of the underlying problem (for example, aligning two sequences). Moreover, writing a correct and efficient parallel implementation requires different competencies and often a significant amount of time. In this project, we present DynaProg, a domain-specific language (DSL) embedded in Scala for expressing dynamic programming problems on heterogeneous platforms. DynaProg allows the programmer to write concise programs based on ADP [1], using a pair of a parsing grammar and an algebra; these programs can then be executed either on the CPU or on the GPU. We evaluate the performance of our implementation against existing work and against our own hand-optimized baseline implementations for both the CPU and GPU versions. Experimental results show that plain Scala has a large overhead and is best reserved for small sequences (≤1024), whereas the generated GPU version is comparable with existing implementations: matrix chain multiplication matches our hand-optimized version (142% of the execution time of [2]) for a sequence of 4096 matrices, Smith-Waterman is twice as slow as [3] on a pair of sequences of 6144 elements, and RNA folding is on par with [4] (95% of its running time) for sequences of 4096 elements.

    [1] Robert Giegerich and Carsten Meyer. Algebraic Dynamic Programming.
    [2] Chao-Chin Wu, Jenn-Yang Ke, Heshan Lin, and Wu-chun Feng. Optimizing Dynamic Programming on Graphics Processing Units via Adaptive Thread-Level Parallelism.
    [3] Edans Flavius de O. Sandes and Alba Cristina M. A. de Melo. Smith-Waterman Alignment of Huge Sequences with GPU in Linear Space.
    [4] Guillaume Rizk and Dominique Lavenier. GPU Accelerated RNA Folding Algorithm.
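    To make the grammar/algebra separation concrete, here is a minimal plain-Scala sketch of the idea, applied to the matrix chain multiplication problem mentioned in the abstract. It does not use DynaProg's actual API; the `Algebra` trait, `minCost`, and `chain` are illustrative names only, and the recurrence is the standard textbook one.

    ```scala
    object AdpSketch {
      // The "algebra": scoring operations, kept separate from the recurrence.
      trait Algebra[A] {
        def single: A                                  // a chain of one matrix
        def combine(left: A, right: A, cost: Long): A  // join two sub-chains
        def choose(candidates: Seq[A]): A              // pick among splits
      }

      // Minimal-cost algebra: A = total number of scalar multiplications.
      val minCost: Algebra[Long] = new Algebra[Long] {
        def single: Long = 0L
        def combine(left: Long, right: Long, cost: Long): Long = left + right + cost
        def choose(candidates: Seq[Long]): Long = candidates.min
      }

      // The "grammar": the recurrence over matrix indices, memoized once,
      // reusable with any algebra. Matrix i has dimensions dims(i) x dims(i+1).
      def chain[A](dims: Array[Int], alg: Algebra[A]): A = {
        val n = dims.length - 1
        val memo = Array.ofDim[Any](n, n)
        def solve(i: Int, j: Int): A = {
          if (memo(i)(j) != null) return memo(i)(j).asInstanceOf[A]
          val res: A =
            if (i == j) alg.single
            else alg.choose(
              for (k <- i until j) yield
                alg.combine(solve(i, k), solve(k + 1, j),
                            dims(i).toLong * dims(k + 1) * dims(j + 1))
            )
          memo(i)(j) = res
          res
        }
        solve(0, n - 1)
      }
    }
    ```

    For instance, `AdpSketch.chain(Array(10, 30, 5, 60), AdpSketch.minCost)` evaluates to 4500, the minimal number of scalar multiplications for a chain of matrices sized 10×30, 30×5, and 5×60. Swapping in a different algebra over the same recurrence (for example, one that also records the chosen split points) is how the ADP style recovers the optimal candidate itself rather than only its score.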

    Irregular Computations in Fortran – Expression and Implementation Strategies


    System Support for Implicitly Parallel Programming

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory.

    Case Studies on Optimizing Algorithms for GPU Architectures

    Modern GPUs are complex, massively multi-threaded, and high-performance. Programmers naturally gravitate towards taking advantage of this high performance to achieve faster results. However, to do so successfully, programmers must first understand and then master a new set of skills: writing parallel code, using different types of parallelism, adapting to GPU architectural features, and understanding the issues that limit performance. To ease this learning process and help GPU programmers become productive more quickly, this dissertation introduces three data access skeletons (DASks), namely Block, Column, and Row, and two block access skeletons (BASks), namely Block-by-Block and Warp-by-Warp. Each "skeleton" provides a high-performance implementation framework that partitions data arrays into data blocks and then iterates over those blocks. The programmer must still write "body" methods on individual data blocks to solve their specific problem. These skeletons provide efficient, machine-dependent data access patterns for use on GPUs. DASks group n data elements into m fixed-size data blocks. These m data blocks are then partitioned across p thread blocks using a 1D or 2D layout pattern. The fixed-size data blocks are parameterized using three C++ template parameters: nWork, WarpSize, and nWarps. Generic programming techniques use these three parameters to enable performance experiments on three different types of parallelism: instruction-level parallelism (ILP), data-level parallelism (DLP), and thread-level parallelism (TLP). These DASks and BASks are introduced using a simple memory I/O (Copy) case study. A nearest-neighbor search case study motivated the development of DASks and BASks but does not use the skeletons itself. Three additional case studies, Reduce/Scan, Histogram, and Radix Sort, demonstrate DASks and BASks in action on parallel primitives and provide further performance lessons.
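    As a rough illustration of how the three template parameters size a data block, here is a plain-Scala sketch of the partitioning arithmetic. The dissertation implements this as C++ templates; `DaskLayout`, `blocksOwnedBy`, and the grid-stride-style reading of "Block-by-Block" are all assumptions made for the example, not the dissertation's actual interface.

    ```scala
    // nWork   = elements each thread processes
    // warpSize = threads per warp (32 on current NVIDIA hardware)
    // nWarps  = warps per thread block
    case class DaskLayout(nWork: Int, warpSize: Int, nWarps: Int) {
      val threadsPerBlock: Int = warpSize * nWarps
      val blockSize: Int = nWork * threadsPerBlock   // elements per data block

      // m = number of fixed-size data blocks covering n elements
      // (the last block may be partially filled).
      def numDataBlocks(n: Int): Int = (n + blockSize - 1) / blockSize

      // Block-by-Block iteration: thread block p of numThreadBlocks strides
      // over the data blocks it owns, as in a grid-stride loop on the GPU.
      def blocksOwnedBy(p: Int, numThreadBlocks: Int, n: Int): Seq[Int] =
        p until numDataBlocks(n) by numThreadBlocks
    }
    ```

    For example, `DaskLayout(nWork = 4, warpSize = 32, nWarps = 2)` gives 64 threads per thread block and data blocks of 256 elements, so an array of 10,000 elements is covered by 40 data blocks; varying nWork, WarpSize, and nWarps independently is what lets the skeletons trade off ILP, DLP, and TLP in performance experiments.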

    Metaphor in Education

    Without metaphor there would be no legs on the table, no hands on the clock. These are dead metaphors. Even that expression is a metaphor, for how can something be dead that has never literally been born? It is an expression which cannot be taken literally. In its first use it was 'alive' in the sense of being new or witty or apt and memorable. Without metaphor we are reduced to the bare bones of language, to a kind of Orwellian Newspeak. One can hardly avoid using metaphors to explain them. Even scientists and mathematicians use metaphors, but they usually refer to them as models. Metaphor is a function of language which enables us to be creative: not only the person who coins, invents, or thinks of the new metaphor, but also the listener or reader who constructs a personal meaning for himself or herself. We speak of creativity in education as a human capacity to be encouraged and developed. How creative can humans be? Do they ever really 'create' anything new apart from reproductions of themselves? Any creative activity such as painting, building, or gardening is really a re-organising of elements already created. So humans enjoy 'creating' their own order, forms, or patterns, which we call art. Language is capable of endless patterns. The basic patterns, usually known as grammar, appear to be innate, and in speech and writing we use these 'inbuilt' structures to create new sentences of our own. At its highest level we call this literature. It has taken us some time to realise that a word in itself has no meaning, as it is only a symbol. For those aspects of experience which are difficult to explain we turn to metaphor. Thus religions often use myths and symbols. Anthropology describes many human activities as metaphoric, for example myths or totemism. Practically every sphere of human activity is imbued with this magical quality of metaphor, for it extends our understanding of the world by giving us a kind of 'elastic' way of describing our experiences. It is not the prerogative of writers or poets but a power we all possess, one which has been derided and abused at times in our history. Only now is it increasingly being recognised as a human capacity worthy of study. In this work I delve into some aspects of the use of metaphor to show how we need to be aware of its potent, pervasive power, especially those of us involved in teaching, for whom I will attempt to demonstrate that teaching is itself a metaphoric activity.

    Pertanika Journal of Science & Technology
