
    Modula-2* and its compilation

    Available in the files attached to this document.

    A Survey on Parallel Architecture and Parallel Programming Languages and Tools

    In this paper, we present a brief review of the evolution of parallel computing toward multi-core architectures. The survey covers more than 45 languages, libraries, and tools used to date to increase performance through parallel programming. We have placed greater emphasis on the architecture of parallel systems in the survey.

    Flexible language constructs for large parallel programs

    The goal of the research described is to develop flexible language constructs for writing large data-parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication are implicit communication based on shared memory and explicit communication based on messages. None of these models by itself seems sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview is given of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and a discussion of some of the critical implementation details are given.
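
    The sketch below is a rough illustration of the SPMD model with explicit message passing mentioned in this abstract; it is not code from the paper or its language. Every process runs the same worker function on its own slice of the data and sends its partial result back over an explicit channel; the function and variable names are hypothetical.

```python
# Minimal SPMD sketch (assumed illustration, not the paper's language):
# one program, many processes, each working on its own data slice and
# communicating results explicitly instead of through shared memory.
from multiprocessing import Process, Queue

def spmd_worker(rank, data_slice, channel):
    """The same program runs on every rank; only the data differs."""
    local_sum = sum(x * x for x in data_slice)   # purely local computation
    channel.put((rank, local_sum))               # explicit communication

if __name__ == "__main__":
    data = list(range(1000))
    nprocs = 4
    chunk = len(data) // nprocs
    channel = Queue()

    workers = [
        Process(target=spmd_worker,
                args=(r, data[r * chunk:(r + 1) * chunk], channel))
        for r in range(nprocs)
    ]
    for w in workers:
        w.start()
    # "Rank 0" gathers and reduces the partial results.
    total = sum(channel.get()[1] for _ in range(nprocs))
    for w in workers:
        w.join()
    print("sum of squares:", total)
```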

    Data flow analysis of parallel programs

    Data flow analysis is the prerequisite for performing optimizations such as common subexpression elimination or code motion of partially redundant expressions on imperative sequential programs. To apply these transformations to parallel imperative programs, the notion of data flow must be extended to concurrent programs. The additional source language features are: a common address space (shared memory), nested parallel statements (PAR), or-parallelism, critical regions, and message passing. The underlying interleaving semantics of the concurrently executed processes results in the so-called state space explosion, which at first sight prevents the computation of the meet-over-all-paths solution needed for data flow analysis. For the class of one-bit data flow problems (also known as bit-vector problems) we can show that not all interleavings are needed to compute the meet-over-all-paths solution. Based on that, we can give simple data flow equations representing the data flow effects of the PAR statement. The definition of a parallel control flow graph leads to an efficient extension of Kildall's algorithm to compute the data flow of a concurrent program. The time complexity is the same as for analyzing a "comparable" sequential program.
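
    The sketch below is a toy illustration of a Kildall-style worklist solution for a forward bit-vector problem (reaching definitions) on a tiny control flow graph. It is not the paper's algorithm: the PAR node is collapsed into a single, plausibly conservative gen/kill summary rather than the equations derived in the paper, and the CFG and definition names are hypothetical.

```python
# Kildall-style worklist iteration for a forward "may" bit-vector problem
# (reaching definitions). Each node has a (gen, kill) pair and the usual
# transfer function OUT = GEN | (IN - KILL). The "par" node stands for a
# PAR statement summarized here (as an assumption, not the paper's rule)
# by the union of its branches' gen and kill sets.

nodes = {
    "entry": (frozenset({"d1"}), frozenset()),
    # Summary of PAR(branch defining d2, branch defining d3); both overwrite d1.
    "par":   (frozenset({"d2", "d3"}), frozenset({"d1"})),
    "exit":  (frozenset(), frozenset()),
}
preds = {"entry": [], "par": ["entry"], "exit": ["par"]}
succs = {"entry": ["par"], "par": ["exit"], "exit": []}

def solve(nodes, preds, succs):
    out = {n: frozenset() for n in nodes}
    worklist = list(nodes)
    while worklist:
        n = worklist.pop()
        gen, kill = nodes[n]
        # IN is the union (meet for a may-problem) over all predecessors.
        in_n = (frozenset().union(*(out[p] for p in preds[n]))
                if preds[n] else frozenset())
        new_out = gen | (in_n - kill)
        if new_out != out[n]:          # re-examine successors on any change
            out[n] = new_out
            worklist.extend(succs[n])
    return out

print(solve(nodes, preds, succs))
# Fixed point: d1 reaches the end of "entry" only; after the PAR summary,
# d2 and d3 may reach the exit while d1 no longer does.
```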

    Entwurf und Evaluierung mehrfädig superskalarer Prozessortechniken im Hinblick auf Multimedia [online] (Design and evaluation of multithreaded superscalar processor techniques with regard to multimedia)


    The art of active memory
