
    Nové knihy (New books)


    Survey of new vector computers: The CRAY 1S from CRAY research; the CYBER 205 from CDC and the parallel computer from ICL - architecture and programming

    Problems which can arise with vector and parallel computers are discussed in a user-oriented context. Emphasis is placed on the algorithms used and the programming techniques adopted. Three recently developed supercomputers are examined, and typical application examples are given in CRAY FORTRAN, CYBER 205 FORTRAN and DAP (distributed array processor) FORTRAN. The performance of the three systems is compared. The addition of parts of two N x N arrays is considered, and the influence of the architecture on the algorithms and programming language is demonstrated. The numerical solution of magnetohydrodynamic differential equations by an explicit difference method is illustrated, showing very good results for all three systems. The prognosis for supercomputer development is assessed.
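
    The array-addition kernel mentioned in this abstract was given in the paper in the three machine-specific FORTRAN dialects, which are not reproduced here. A minimal, machine-independent sketch of the same element-wise operation follows, assuming row-major N x N sections; the function name addSection and the sizes are illustrative only, and the sketch needs the Haskell vector package.

```haskell
import qualified Data.Vector.Unboxed as V

-- Element-wise addition of the first n*n elements (an N x N section,
-- stored row-major) of two arrays. On the vector machines surveyed above
-- this would map onto a single vectorised statement or array expression;
-- here it is only a sequential, machine-independent sketch.
addSection :: Int -> V.Vector Double -> V.Vector Double -> V.Vector Double
addSection n a b = V.zipWith (+) (V.take (n * n) a) (V.take (n * n) b)

main :: IO ()
main = do
  let n = 4
      a = V.generate (n * n) fromIntegral   -- 0, 1, 2, ...
      b = V.replicate (n * n) 1.0
  print (addSection n a b)
```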

    PRIMA: a DBMS prototype supporting engineering applications

    The design of the Molecule-Atom Data (MAD) model, aimed at the effective support of engineering applications, is justified and described with its essential properties and features. MAD offers direct and symmetric management of network structures and recursiveness, as well as dynamic object definition and object handling that allows both vertical and horizontal access. Its prototype implementation, PRIMA, is discussed using a multi-level model of the DBMS architecture. Our DBMS kernel provides a variety of access path structures, tuning mechanisms, and performance enhancements that are transparent at the data model interface. PRIMA is designed to be used in different run-time environments, including workstation coupling and multi-processor systems. In particular, it serves as a research vehicle for investigating the exploitation of "semantic parallelism" in single-user operations.
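
    As a loose illustration only, not the MAD interface or the PRIMA kernel code, the sketch below shows one reading of the abstract's notions: atoms as elementary records, links between atoms stored symmetrically, and a molecule assembled dynamically as the set of atoms reachable from chosen root atoms. All names and types here are hypothetical.

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

-- Hypothetical sketch, not the MAD/PRIMA interface: atoms are elementary
-- records, links between atoms are kept symmetrically (both directions),
-- and a molecule is defined dynamically as everything reachable from a
-- set of root atoms over those links.
type AtomId = Int

data Atom = Atom { atomType :: String, attrs :: [(String, String)] }
  deriving Show

type Links = M.Map AtomId [AtomId]

-- Derive a molecule on demand: the atom ids reachable from the roots.
molecule :: Links -> [AtomId] -> S.Set AtomId
molecule links = go S.empty
  where
    go seen [] = seen
    go seen (a : rest)
      | a `S.member` seen = go seen rest
      | otherwise         = go (S.insert a seen) (M.findWithDefault [] a links ++ rest)

main :: IO ()
main = do
  -- a tiny network: edges 1-2 and 2-3, stored in both directions
  let links = M.fromList [(1, [2]), (2, [1, 3]), (3, [2])]
  print (molecule links [1])   -- fromList [1,2,3]
```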

    Towards effective support of engineering information systems

    The interfaces of today's database systems are proving increasingly unsuitable for the evolving, wide spectrum of engineering applications (e.g. CAD/CAM, geographic information management, and knowledge-based systems for planning and design). This problem is intensified by the workstation-oriented processing scheme prevailing in the engineering area. Starting from an architectural approach tailored to this distributed processing concept, we propose the PRIMA-NDBS and its most important interfaces, i.e. the data model and the application/user interface, offering effective support for engineering information systems.

    Prediction based task scheduling in distributed computing


    Granularity in Large-Scale Parallel Functional Programming

    This thesis demonstrates how to reduce the runtime of large non-strict functional programs using parallel evaluation. The parallelisation of several programs shows the importance of granularity, i.e. the computation costs of program expressions. The aspect of granularity is studied both on a practical level, by presenting and measuring runtime granularity improvement mechanisms, and at a more formal level, by devising a static granularity analysis.

    By parallelising several large functional programs this thesis demonstrates for the first time the advantages of combining lazy and parallel evaluation on a large scale: laziness aids modularity, while parallelism reduces runtime. One of the parallel programs is the Lolita system which, with more than 47,000 lines of code, is the largest existing parallel non-strict functional program. A new mechanism for parallel programming, evaluation strategies, to which this thesis contributes, is shown to be useful in this parallelisation. Evaluation strategies simplify parallel programming by separating algorithmic code from code specifying dynamic behaviour. For large programs the abstraction provided by functions is maintained by using a data-oriented style of parallelism, which defines parallelism over intermediate data structures rather than inside the functions.

    A highly parameterised simulator, GRANSIM, has been constructed collaboratively and is discussed in detail in this thesis. GRANSIM is a tool for architecture-independent parallelisation and a testbed for implementing runtime-system features of the parallel graph reduction model. By providing an idealised as well as an accurate model of the underlying parallel machine, GRANSIM has proven to be an essential part of an integrated parallel software engineering environment. Several parallel runtime-system features, such as granularity improvement mechanisms, have been tested via GRANSIM. It is publicly available and in active use at several universities worldwide.

    In order to provide granularity information this thesis presents an inference-based static granularity analysis. This analysis combines two existing analyses, one for cost and one for size information. It determines an upper bound for the computation costs of evaluating an expression in a simple strict higher-order language. By exposing recurrences during cost reconstruction and using a library of recurrences and their closed forms, it is possible to infer the costs for some recursive functions. The possible performance improvements are assessed by measuring the parallel performance of a hand-analysed and annotated program.
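
    The separation described above, algorithmic code kept apart from the code that specifies dynamic behaviour, is the core idea of evaluation strategies, and it survives in GHC's Control.Parallel.Strategies library. The sketch below uses that modern library style rather than code from the thesis; squares and parSquares are illustrative names.

```haskell
import Control.Parallel.Strategies (parList, parListChunk, rseq, using)

-- Algorithmic code: an ordinary sequential definition with no
-- parallelism wired into it.
squares :: [Integer] -> [Integer]
squares = map (\x -> x * x)

-- Dynamic behaviour, specified separately as a strategy: evaluate the
-- list elements in parallel, each to weak head normal form.
parSquares :: [Integer] -> [Integer]
parSquares xs = squares xs `using` parList rseq

-- Coarser granularity: spark the work in chunks of 1000 elements
-- instead of one spark per element.
parSquaresChunked :: [Integer] -> [Integer]
parSquaresChunked xs = squares xs `using` parListChunk 1000 rseq

main :: IO ()
main = print (sum (parSquares [1 .. 100000]) + sum (parSquaresChunked [1 .. 100000]))
```

    Compiled with -threaded and run with +RTS -N, the algorithmic definition stays unchanged; only the strategy line alters the dynamic behaviour, and moving from parList to parListChunk is the kind of granularity adjustment the thesis studies.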