14 research outputs found

    A bibliography on parallel and vector numerical algorithms

    This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming languages, and other topics of interest to scientific computing. Certain conference proceedings and anthologies that have been published in book form are also listed.

    A pipelined code mapping scheme for static data flow computers

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986. Microfiche copy available in Archives and Engineering. Bibliography: leaves 245-252. By Gao Guang Rong.

    Anisotropy and heterogeneity in finite deformation: resolving versus upscaling

    Decomposition of unstructured meshes for efficient parallel computation

    Proceedings: Computer Science and Data Systems Technical Symposium, volume 1

    Progress reports and technical updates on programs being performed at NASA centers are covered. Presentations in viewgraph form are included for topics in three categories: computer science, data systems, and space station applications.

    Structural optimization of numerical programs for high-level synthesis

    This thesis introduces a new technique, and its associated tool SOAP, to automatically perform source-to-source optimization of numerical programs, specifically targeting the trade-off among numerical accuracy, latency, and resource usage within a high-level synthesis flow for FPGA implementations. A new intermediate representation, MIR, is introduced to carry out the abstraction and optimization of numerical programs. Equivalent structures in MIRs are efficiently discovered using methods based on formal semantics, taking into account axiomatic rules from real arithmetic, such as associativity and distributivity, in tandem with program equivalence rules that enable control-flow restructuring and eliminate redundant array accesses. For the first time, we bring rigorous approaches from software static analysis, specifically formal semantics and abstract interpretation, to bear on program transformation for high-level synthesis. New abstract semantics are developed to generate a computable subset of equivalent MIRs from an original MIR. Using formal semantics, three objectives are calculated for each MIR representing a pipelined numerical program: the accuracy of the computation, an estimate of FPGA resource utilization, and the latency of program execution. Optimizing these objectives produces a Pareto frontier consisting of a set of equivalent MIRs. We thus go beyond the existing literature by not only optimizing the precision requirements of an implementation, but also changing the structure of the implementation itself. Using SOAP to optimize the structure of a variety of real-world and artificially generated arithmetic expressions in single precision, we improve either their accuracy or their resource utilization by up to 60%. When applied to a suite of computationally intensive numerical programs from the PolyBench and Livermore Loops benchmarks, SOAP has generated circuits that enjoy up to a 12x speedup, with a simultaneous 7x increase in accuracy, at a cost of up to 4x more LUTs.
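
    The central idea above, that algebraically equivalent rewrites of one expression can differ in finite-precision accuracy, can be illustrated with a minimal sketch. The snippet below is not SOAP or its MIR machinery: it simply lists two reassociations of a three-term sum by hand, evaluates both in single precision, and measures each against a double-precision reference, which is roughly the role the accuracy objective plays in the Pareto search described in the abstract. The function names and test values are illustrative assumptions.

        import numpy as np

        def reassociations(a, b, c):
            # Two algebraically equivalent parenthesisations of a + b + c.
            # SOAP's MIR-based search enumerates such equivalences far more
            # generally (distributivity, control-flow restructuring, ...);
            # this hand-written pair only illustrates why the choice matters.
            return {
                "(a + b) + c": (a + b) + c,
                "a + (b + c)": a + (b + c),
            }

        def single_precision_errors(a, b, c):
            # Evaluate each rewrite in float32 and compare it against a
            # float64 reference, approximating an accuracy objective.
            ref = (np.float64(a) + np.float64(b)) + np.float64(c)
            a32, b32, c32 = np.float32(a), np.float32(b), np.float32(c)
            return {name: abs(np.float64(val) - ref)
                    for name, val in reassociations(a32, b32, c32).items()}

        if __name__ == "__main__":
            # a and b cancel, while c is tiny relative to both, so the two
            # parenthesisations accumulate very different rounding errors.
            for name, err in single_precision_errors(1.0e8, -1.0e8, 3.14e-3).items():
                print(f"{name}  absolute error = {err:.3e}")

    In the thesis itself, the analogous accuracy figure is obtained by abstract interpretation over MIRs rather than by sampling concrete values, and it is traded off against latency and resource estimates on the Pareto frontier.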

    Density-Aware Linear Algebra in a Column-Oriented In-Memory Database System

    Linear algebra operations appear in nearly every application in advanced analytics, machine learning, and various science domains. To this day, many data analysts and scientists tend to use statistics software packages or hand-crafted solutions for their analysis. In the era of the data deluge, however, external statistics packages and custom analysis programs that often run on single workstations are incapable of keeping up with the vast increase in data volume and size. In particular, there is an increasing demand from scientists for large-scale data manipulation, orchestration, and advanced data management capabilities. These are among the key features of a mature relational database management system (DBMS). With the rise of main-memory database systems, it has now become feasible to also consider applications that build on linear algebra. This thesis presents a deep integration of linear algebra functionality into an in-memory column-oriented database system. In particular, this work shows that it has become feasible to execute linear algebra queries on large data sets directly in a DBMS-integrated engine (LAPEG), without the need to transfer data or be restricted by hard disk latencies. From the various application examples cited in this work, we deduce a number of requirements that are relevant for a database system that includes linear algebra functionality. Besides the deep integration of matrices and numerical algorithms, these include optimization of expressions, transparent matrix handling, scalability and data-parallelism, and data manipulation capabilities. These requirements are addressed by our linear algebra engine. In particular, the core contributions of this thesis are as follows. First, we show that the columnar storage layer of an in-memory DBMS allows an easy adoption of efficient sparse matrix data types and algorithms. Furthermore, we show that the execution of linear algebra expressions significantly benefits from several techniques inspired by database technology. In a novel way, we implemented several of these optimization strategies in LAPEG's optimizer (SpMachO), which uses an advanced density estimation method (SpProdest) to predict the matrix density of intermediate results. Moreover, we present an adaptive matrix data type, AT Matrix, to obviate the need for scientists to select appropriate matrix representations. The tiled substructure of AT Matrix is exploited by our matrix multiplication to saturate the different sockets of a multicore main-memory platform, reaching a speed-up of up to 6x compared to alternative approaches. Finally, a major part of this thesis is devoted to data manipulation, where we propose a matrix manipulation API and present different mutable matrix types to enable fast inserts and deletes. We conclude that our linear algebra engine is well suited to processing dynamic, large matrix workloads in an optimized way. In particular, the DBMS-integrated LAPEG fills the linear algebra gap and makes the columnar in-memory DBMS attractive as an efficient, scalable ad-hoc analysis platform for scientists.
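
    To make the density-aware planning idea above concrete, the sketch below uses a textbook independence-based estimate of the density of a sparse matrix product to choose between a dense and a sparse multiplication kernel. It is an assumption-laden stand-in, not the SpProdest estimator or the SpMachO optimizer from the thesis; the function names and the 0.25 threshold are invented for illustration.

        import numpy as np
        import scipy.sparse as sp

        def estimated_product_density(d_a, d_b, k):
            # Estimated density of C = A @ B for an m-by-k A with density d_a
            # and a k-by-n B with density d_b, assuming non-zeros are placed
            # uniformly and independently: an output entry stays zero only if
            # all k partial products vanish.
            return 1.0 - (1.0 - d_a * d_b) ** k

        def multiply_with_format_choice(A, B, dense_threshold=0.25):
            # Pick a dense or sparse kernel based on the predicted density of
            # the intermediate result, mimicking density-aware plan selection.
            k = A.shape[1]
            d_a = A.nnz / (A.shape[0] * A.shape[1])
            d_b = B.nnz / (B.shape[0] * B.shape[1])
            if estimated_product_density(d_a, d_b, k) > dense_threshold:
                return A.toarray() @ B.toarray()   # dense kernel
            return A @ B                           # stay in CSR format

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            A = sp.random(1000, 1000, density=0.02, format="csr", random_state=rng)
            B = sp.random(1000, 1000, density=0.02, format="csr", random_state=rng)
            C = multiply_with_format_choice(A, B)
            print(type(C).__name__)

    In the thesis, the analogous decision is made at a finer granularity: density prediction feeds the expression optimizer, and the adaptive AT Matrix type applies such choices per tile rather than per whole-matrix multiplication.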

    Extended Abstracts: PMCCS3: Third International Workshop on Performability Modeling of Computer and Communication Systems

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. The pages of the front matter that are missing from the PDF were blank.