
    Datalog as a parallel general purpose programming language

    The increasing parallelism available in computers demands new programming languages that make parallel programming dramatically easier and less error-prone. It is proposed that Datalog with negation and timestamps is a suitable basis for a general-purpose programming language for sequential, parallel and distributed computers. This paper develops a fully incremental bottom-up interpreter for Datalog that supports a wide range of execution strategies, with trade-offs affecting efficiency, parallelism and control of resource usage. Examples show how the language can accept real-time external inputs and outputs, and mimic assignment, all without departing from its pure logical semantics.
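
    As a concrete reference point for the bottom-up evaluation the paper builds on, here is a minimal Python sketch of naive fixpoint evaluation for one positive Datalog program, transitive closure. It shows only the core idea: the paper's interpreter is incremental and also handles negation and timestamps, none of which is attempted here.

    ```python
    # Naive bottom-up evaluation of a fixed Datalog program:
    #   path(X, Y) :- edge(X, Y).
    #   path(X, Z) :- path(X, Y), edge(Y, Z).
    # Apply the rules to the facts derived so far and stop at a fixpoint,
    # i.e. when a round derives no new fact.
    def transitive_closure(edges):
        path = set(edges)                 # path(X, Y) :- edge(X, Y).
        while True:
            new = {(x, z)                 # path(X, Z) :- path(X, Y), edge(Y, Z).
                   for (x, y) in path
                   for (y2, z) in edges
                   if y == y2} - path
            if not new:                   # fixpoint reached
                return path
            path |= new

    print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
    # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
    ```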

    A semantics and implementation of a causal logic programming language

    The increasingly widespread availability of multicore and manycore computers demands new programming languages that make parallel programming dramatically easier and less error-prone. This paper describes a semantics for a new class of declarative programming languages that support massive amounts of implicit parallelism.

    Teaching Parallel Programming Using Java

    This paper presents an overview of the "Applied Parallel Computing" course taught to final-year Software Engineering undergraduate students in Spring 2014 at NUST, Pakistan. The main objective of the course was to introduce practical parallel programming tools and techniques for shared and distributed memory concurrent systems. A unique aspect of the course was that Java was used as the principal programming language. The course was divided into three sections. The first section covered parallel programming techniques for shared memory systems, including multicore and Symmetric Multi-Processor (SMP) systems; in this section, Java threads were taught as a viable programming API for such systems. The second section was dedicated to parallel programming tools for distributed memory systems, including clusters and networks of computers. We used MPJ Express, a Java MPI library, for the programming assignments and lab work in this section. The third and final section covered advanced topics, including the MapReduce programming model using Hadoop and General-Purpose Computing on Graphics Processing Units (GPGPU). Comment: 8 pages, 6 figures, MPJ Express, MPI Java, Teaching Parallel Programming
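
    The shared-memory pattern taught in the first section, threads working on disjoint slices of shared data with a lock guarding the shared result, looks roughly like the following. The course used Java threads; this Python stand-in is illustrative only (and the GIL caveat noted in the comments does not apply to Java).

    ```python
    # Fork-join partial sums with threads and a lock: each thread sums its
    # own slice, then updates a shared total inside a critical section.
    # Note: CPython's GIL serializes pure-Python compute, so this shows the
    # programming model (threads, shared state, mutual exclusion) rather
    # than real speedup; Java threads do not have this limitation.
    import threading

    data = list(range(1_000_000))
    total = 0
    lock = threading.Lock()

    def partial_sum(lo, hi):
        global total
        s = sum(data[lo:hi])       # private work on this thread's slice
        with lock:                 # critical section: one writer at a time
            total += s

    n_threads = 4
    chunk = len(data) // n_threads
    threads = [threading.Thread(target=partial_sum,
                                args=(i * chunk, (i + 1) * chunk))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert total == sum(data)
    ```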

    Programming Parallel Computers

    This paper is from a keynote address to the IEEE International Conference on Computer Languages, October 9, 1988. Keynote addresses are expected to be provocative (and perhaps even entertaining), but not necessarily scholarly. The reader should be warned that this talk was prepared with these expectations in mind. Parallel computers offer the potential of great speed at low cost. The promise of parallelism is limited by the ability to program parallel machines effectively. This paper explores the opportunities and the problems of parallel computing. Technological and economic trends are studied with a view towards determining where the field of parallel computing is going. An approach to parallel programming, called UNITY, is described. UNITY was developed by Jay Misra and myself, and is described in [Chandy]. Extensions to UNITY are discussed; these extensions were motivated by discussions with Chuck Seitz.
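
    For readers who have not seen UNITY: a UNITY program is a set of guarded assignments executed repeatedly under fair nondeterministic choice, reaching (logically) a fixed point where no assignment changes the state. Below is a toy Python rendering of the classic UNITY sorting program, an illustration of the execution model only, not Chandy and Misra's notation.

    ```python
    # UNITY-style sorting: for each i there is one statement
    #   a[i], a[i+1] := a[i+1], a[i]  if  a[i] > a[i+1]
    # Statements are picked nondeterministically and executed until the
    # fixed point, where no guard holds and the array is sorted.
    import random

    def unity_sort(a):
        a = list(a)
        idxs = list(range(len(a) - 1))          # one swap statement per i
        while any(a[i] > a[i + 1] for i in idxs):   # not yet at fixed point
            i = random.choice(idxs)             # nondeterministic choice
            if a[i] > a[i + 1]:                 # guard
                a[i], a[i + 1] = a[i + 1], a[i] # assignment
        return a

    print(unity_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
    ```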

    Parallel Computation in Econometrics: A Simplified Approach

    Parallel computation has a long history in econometric computing, but is not at all widespread. We believe that a major impediment is the labour cost of coding for parallel architectures. Moreover, programs for specific hardware often become obsolete quite quickly. Our approach is to take a popular matrix programming language (Ox) and implement a message-passing interface using MPI. Next, object-oriented programming allows us to hide the specific parallelization code, so that a program does not need to be rewritten when it is ported from the desktop to a distributed network of computers. Our focus is on so-called embarrassingly parallel computations, and we address the issue of parallel random number generation.
    Keywords: Code optimization; Econometrics; High-performance computing; Matrix-programming language; Monte Carlo; MPI; Ox; Parallel computing; Random number generation.
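
    The embarrassingly parallel pattern the paper targets can be sketched as follows, with Python and mpi4py standing in for the paper's Ox/MPI layer (the example and its seeding scheme are illustrative, not the paper's code): each rank runs its own share of replications with an independently seeded generator, and the results are combined at the root.

    ```python
    # Embarrassingly parallel Monte Carlo (estimating pi by rejection):
    # each rank draws its own replications with its own RNG stream and
    # rank 0 combines the counts. Run with e.g.: mpiexec -n 4 python mc.py
    from mpi4py import MPI
    import random

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    N = 1_000_000                      # total replications
    local_n = N // size                # this rank's share (assumes size | N)
    rng = random.Random(12345 + rank)  # crude per-rank seeding; serious work
                                       # would use proper parallel RNG streams

    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(local_n))
    total = comm.reduce(hits, op=MPI.SUM, root=0)

    if rank == 0:
        print("pi is approximately", 4.0 * total / (local_n * size))
    ```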

    CS 499/699: Introduction to Parallel Programming

    Low-cost parallel computers such as PC clusters are becoming available, and many computationally intensive problems can be solved using such computers. It is, however, still not easy to design and implement software that runs fast using multiple processors. This course covers basic software design methods and hands-on parallel programming using MPI. After taking this course, students will be able to design parallel algorithms, evaluate execution speed, and write MPI code.
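
    A first MPI exercise of the kind such a course assigns might look like this ring program, sketched in Python with mpi4py (the course itself does not prescribe a language binding): it exercises point-to-point send/recv and timing with MPI.Wtime, the ingredients of the speedup measurements the course asks for.

    ```python
    # Pass a token once around a ring of processes, each rank incrementing
    # it, and time the lap at rank 0. Run with: mpiexec -n 4 python ring.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    nxt, prev = (rank + 1) % size, (rank - 1) % size

    t0 = MPI.Wtime()
    if rank == 0:
        comm.send(0, dest=nxt)           # start the token
        token = comm.recv(source=prev)   # it returns after one lap
    else:
        token = comm.recv(source=prev)
        comm.send(token + 1, dest=nxt)   # each rank increments the token
    t1 = MPI.Wtime()

    if rank == 0:
        print(f"token={token} (expect {size - 1}), lap took {t1 - t0:.6f} s")
    ```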

    Lattice QCD on a Beowulf Cluster

    Using commodity personal computers based on the Alpha processor, commodity network devices, and a switch, we built an 8-node parallel computer. GNU/Linux was chosen as the operating system, and message-passing libraries such as PVM, LAM, and MPICH were tested as the parallel programming environment. We discuss our lattice QCD project for a heavy-quark system on this computer. Comment: Lattice99 (algorithms and machines), 3 pages, 3 figures, espcrc2.sty

    Learning from the Success of MPI

    The Message Passing Interface (MPI) has been extremely successful as a portable way to program high-performance parallel computers. This success has occurred in spite of the view of many that message passing is difficult and that other approaches, including automatic parallelization and directive-based parallelism, are easier to use. This paper argues that MPI has succeeded because it addresses all of the important issues in providing a parallel programming model. Comment: 12 pages, 1 figure

    User-Friendly Parallel Computations with Econometric Examples

    This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented so that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
    Keywords: parallel computing, Monte Carlo, bootstrapping, maximum likelihood, GMM, kernel regression
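
    The interface the paper advocates, parallelism hidden behind an ordinary function call, can be sketched as below, with Python's multiprocessing standing in for the paper's Octave/MPI programs; bootstrap and its helper are hypothetical names for illustration.

    ```python
    # A parallel bootstrap whose caller never sees the worker pool: supply
    # a statistic and data, get the bootstrap replications back.
    import random
    from multiprocessing import Pool
    from statistics import mean, stdev

    def _one_replication(args):
        seed, data, statistic = args
        rng = random.Random(seed)                    # per-replication RNG
        resample = [rng.choice(data) for _ in data]  # draw with replacement
        return statistic(resample)

    def bootstrap(statistic, data, reps=1000, workers=4, seed=0):
        """Parallel bootstrap; the parallelization is invisible here."""
        jobs = [(seed + r, data, statistic) for r in range(reps)]
        with Pool(workers) as pool:
            return pool.map(_one_replication, jobs)

    if __name__ == "__main__":
        sample = [random.gauss(0, 1) for _ in range(200)]
        draws = bootstrap(mean, sample, reps=500)
        print("bootstrap s.e. of the mean:", stdev(draws))
    ```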