
    Sprachen für parallele objektorientierte Programmierung [Languages for Parallel Object-Oriented Programming]

    A considerable number of object-oriented languages for parallel programming have recently been designed and implemented. This work compares several of these languages with one another. Its focus lies, on the one hand, on the concepts provided for managing the complexity introduced by parallelization and, on the other hand, on making synchronization and communication more flexible in order to improve the parallelizability of program executions.

    A structured design technique for distributed programs

    This report contains an informal motivation and description of ADL-d, a graphical design technique for parallel and distributed software. ADL-d allows a developer to construct an application in terms of communicating processes. The technique distinguishes itself from others by its use of highly orthogonal concepts and its support for automated code generation. Without being committed to one particular design method, ADL-d as a technique can be used from the early phases of application design through phases that concentrate on algorithmic design and final implementation on some target platform. In this report, we discuss and motivate all ADL-d components, including recently incorporated features such as support for connection-oriented communication, support for modeling dynamically changing communication structures, and a formal semantic basis for each ADL-d component. We also discuss our ADL-d implementation and place ADL-d in context by discussing related work.
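
    ADL-d itself is a graphical notation, but the communicating-process model it targets can be sketched in ordinary Java; the channel and process roles below are illustrative stand-ins, not generated ADL-d code.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Two processes connected by a one-way typed channel: the basic
    // building block of a communicating-process design. The end-of-stream
    // marker and queue capacity are arbitrary choices for this sketch.
    public class ProducerConsumer {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(16);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) channel.put(i); // send
                    channel.put(-1);                             // end-of-stream marker
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int v; (v = channel.take()) != -1; )    // receive
                        System.out.println("got " + v);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            producer.start(); consumer.start();
            producer.join(); consumer.join();
        }
    }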

    Programming with Exceptions in JCilk

    JCilk extends the Java language to provide call-return semantics for multithreading, much as Cilk does for C. Java's built-in thread model does not support the passing of exceptions or return values from one thread back to the "parent" thread that created it. JCilk imports Cilk's fork-join primitives spawn and sync into Java to provide procedure-call semantics for concurrent subcomputations. This paper shows how JCilk integrates exception handling with multithreading by defining semantics consistent with the existing semantics of Java's try and catch constructs, but which handle concurrency in spawned methods. JCilk's strategy of integrating multithreading with Java's exception semantics yields some surprising semantic synergies. In particular, JCilk extends Java's exception semantics to allow exceptions to be passed from a spawned method to its parent in a natural way that obviates the need for Cilk's inlet and abort constructs. This extension is "faithful" in that it obeys Java's ordinary serial semantics when executed on a single processor. When executed in parallel, however, an exception thrown by a JCilk computation signals its sibling computations to abort, which yields a clean semantics in which only a single exception from the enclosing try block is handled. The decision to implicitly abort side computations opens a Pandora's box of subsidiary linguistic problems to be resolved, however. For instance, aborting might cause a computation to be interrupted asynchronously, causing havoc in programmer understanding of code behavior. To minimize the complexity of reasoning about aborts, JCilk signals them "semisynchronously" so that abort signals do not interrupt ordinary serial code. In addition, JCilk propagates an abort signal throughout a subcomputation naturally with a built-in CilkAbort exception, thereby allowing programmers to handle clean-up by simply catching the CilkAbort exception. The semantics of JCilk allow programs with speculative computations to be programmed easily. Speculation is essential for parallelizing programs such as branch-and-bound or heuristic search. We show how JCilk's linguistic mechanisms can be used to program a solution to the "queens" problem and an implementation of a parallel alpha-beta search.
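
    JCilk's spawn and sync keywords are not standard Java, but the speculative abort-the-siblings behavior described above can be loosely emulated with plain java.util.concurrent, as in this sketch (class and method names are invented for illustration).

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Loose emulation of JCilk-style speculation: several sibling searches
    // run in parallel, the first result wins, and invokeAny cancels the
    // remaining siblings, roughly mirroring the implicit abort that JCilk
    // signals with its built-in CilkAbort exception.
    public class SpeculativeSearch {
        static int search(int branch) throws InterruptedException {
            Thread.sleep(100L * branch);  // stand-in for real search work
            return branch * branch;       // stand-in for a found solution
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(3);
            try {
                List<Callable<Integer>> siblings = List.of(
                        () -> search(1), () -> search(2), () -> search(3));
                // Returns the first completed result; the losers are cancelled.
                System.out.println("first result: " + pool.invokeAny(siblings));
            } finally {
                pool.shutdownNow();
            }
        }
    }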

    Performances of the PS² parallel storage and processing system for tomographic image visualization

    We propose a new approach for developing parallel I/O- and compute-intensive applications. At a high level of abstraction, a macro data flow description describes how processing and disk access operations are combined. This high-level description (CAP) is precompiled into compilable and executable C++ source language. Parallel file system components specified by CAP are offered as reusable CAP operations. Low-level parallel file system components can, thanks to the CAP formalism, be combined with processing operations in order to yield efficient pipelined parallel I/O- and compute-intensive programs. The underlying parallel system is based on commodity components (PentiumPro processors, Fast Ethernet) and runs on top of Windows NT. The CAP-based parallel program development approach is applied to the development of an I/O- and processing-intensive tomographic 3D image visualization application. Configurations range from a single-PentiumPro, 1-disk system to a four-PentiumPro, 27-disk system. We show that performance scales well when increasing the number of processors and disks. With the largest configuration, the system is able to extract in parallel and project into the display space between three and four 512×512 images per second. The images may have any orientation and are extracted from a 100 MByte 3D tomographic image striped over the available set of disks.
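
    The overlap of disk accesses and processing that this pipelining achieves can be sketched in plain Java by giving the read stage and the compute stage of successive blocks separate thread pools; readBlock, processBlock, and the pool sizes below are illustrative stand-ins, not CAP operations.

    import java.util.List;
    import java.util.concurrent.*;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    // Per-block reads overlap with per-block processing, so the I/O and
    // compute stages of successive blocks run concurrently and the final
    // merge folds the partial results, as in a pipelined CAP program.
    public class PipelinedExtraction {
        static byte[] readBlock(int i) {            // stand-in for a disk read
            return new byte[64 * 1024];
        }
        static long processBlock(byte[] data) {     // stand-in for projection work
            long sum = 0;
            for (byte b : data) sum += b;
            return sum;
        }

        public static void main(String[] args) {
            ExecutorService io  = Executors.newFixedThreadPool(4);  // one per disk
            ExecutorService cpu = Executors.newFixedThreadPool(2);  // one per core

            List<CompletableFuture<Long>> stages = IntStream.range(0, 16)
                .mapToObj(i -> CompletableFuture
                    .supplyAsync(() -> readBlock(i), io)                     // I/O stage
                    .thenApplyAsync(PipelinedExtraction::processBlock, cpu)) // compute stage
                .collect(Collectors.toList());

            long merged = stages.stream()
                .mapToLong(CompletableFuture::join)                          // merge
                .sum();
            System.out.println("merged result: " + merged);

            io.shutdown();
            cpu.shutdown();
        }
    }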

    Synthesizing parallel imaging applications using the CAP Computer-Aided Parallelization tool

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP makes it possible to combine parallel storage access routines and sequential image processing operations efficiently. The paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. The paper's contribution is: (1) to show how such implementations can be compactly specified in CAP; and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
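
    As a rough illustration of the kind of data-parallel imaging operation being pipelined (not CAP syntax; CAP compiles a higher-level flow description to C++), the sketch below splits an image into rows, filters them independently in parallel, and leaves the merged result in the destination buffer. The 3x1 box filter and image size are arbitrary.

    import java.util.stream.IntStream;

    // Each row is an independent work unit: rows are filtered in parallel
    // and the destination buffer accumulates the merged result.
    public class ParallelFilter {
        public static void main(String[] args) {
            int w = 1024, h = 1024;
            float[] src = new float[w * h], dst = new float[w * h];

            IntStream.range(0, h).parallel().forEach(y -> {   // one band per row
                for (int x = 1; x < w - 1; x++)               // 3x1 box filter
                    dst[y * w + x] =
                        (src[y * w + x - 1] + src[y * w + x] + src[y * w + x + 1]) / 3f;
            });
            System.out.println("filtered " + h + " rows in parallel");
        }
    }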

    DPS - Dynamic Parallel Schedules

    Dynamic Parallel Schedules (DPS) is a high-level framework for developing parallel applications on distributed memory computers (e.g., clusters of PCs). Its model relies on compositional customizable split-compute-merge graphs of operations (directed acyclic flow graphs). The graphs and the mapping of operations to processing nodes are specified dynamically at runtime. DPS applications are pipelined and multithreaded by construction, ensuring a maximal overlap of computations and communications. DPS applications can call parallel services exposed by other DPS applications, enabling the creation of reusable parallel components. The DPS framework relies on a C++ class library. Thanks to its dynamic nature, DPS offers new perspectives for the creation and deployment of parallel applications running on server clusters.
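
    DPS itself is a C++ class library, but its split-compute-merge pattern can be sketched in a few lines of plain Java (all names here are invented): a split operation fans the input out into parts, compute operations run in parallel on a thread pool, and a merge folds the partial results.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.*;

    // split -> parallel compute -> merge, the basic DPS flow-graph shape.
    public class SplitComputeMerge {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            int[] data = new int[1_000_000];
            Arrays.fill(data, 1);

            int parts = 4, chunk = data.length / parts;
            List<Future<Long>> partials = new ArrayList<>();
            for (int p = 0; p < parts; p++) {                 // split
                final int lo = p * chunk;
                final int hi = (p == parts - 1) ? data.length : lo + chunk;
                partials.add(pool.submit(() -> {              // compute
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // merge
            System.out.println("total = " + total);
            pool.shutdown();
        }
    }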

    Computer-aided synthesis of parallel image processing applications

    We present a tutorial description of the CAP computer-aided parallelization tool. CAP has been designed with the goal of letting the parallel application programmer have complete control over how his application is parallelized, while at the same time freeing him from the burden of explicitly managing a large number of threads and the associated synchronization and communication primitives. The CAP tool, a precompiler generating C++ source code, enables application programmers to specify at a high level of abstraction the set of threads present in the application, the processing operations offered by these threads, and the parallel constructs specifying the flow of data and parameters between operations. A configuration map specifies the mapping between CAP threads and operating system processes, possibly located on different computers. The generated program may run on various parallel configurations without recompilation. We discuss the issues of flow control and load balancing and show the solutions offered by CAP. We also show how CAP can be used to generate relatively complex parallel programs incorporating neighborhood-dependent operations. Finally, we briefly describe a real 3D image processing application, the Visible Human Slice Server, its implementation according to the previously defined concepts, and its performance.
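
    The flow-control and load-balancing issues mentioned above can be illustrated with a small demand-driven sketch in plain Java (names invented, not CAP constructs): workers pull items from a shared bounded queue, so faster workers automatically take more work, and the bounded capacity throttles the producer.

    import java.util.concurrent.*;

    // Demand-driven dispatch: a bounded queue provides flow control
    // (the producer blocks when it runs ahead) and pull-based workers
    // provide load balancing across tasks of uneven cost.
    public class PullLoadBalancing {
        private static final int POISON = -1;                // end-of-work marker

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Integer> work = new ArrayBlockingQueue<>(8);
            int workers = 3;
            ExecutorService pool = Executors.newFixedThreadPool(workers);

            for (int w = 0; w < workers; w++) {
                pool.execute(() -> {
                    try {
                        for (int job; (job = work.take()) != POISON; ) {
                            Thread.sleep(10L * (job % 5));   // uneven task cost
                            System.out.println(Thread.currentThread().getName()
                                               + " finished job " + job);
                        }
                        work.put(POISON);                    // let the next worker stop too
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
            }
            for (int job = 0; job < 20; job++) work.put(job); // blocks when queue is full
            work.put(POISON);
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }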

    Introduction to the Literature on Programming Language Design

    This is an introduction to the literature on programming language design and related topics. It is intended to cite the most important work and to provide a place for students to start a literature search.
