
    Systolic and Hyper-Systolic Algorithms for the Gravitational N-Body Problem, with an Application to Brownian Motion

    A systolic algorithm rhythmically computes and passes data through a network of processors. We investigate the performance of systolic algorithms for implementing the gravitational N-body problem on distributed-memory computers. Systolic algorithms minimize memory requirements by distributing the particles between processors. We show that the performance of systolic routines can be greatly enhanced by the use of non-blocking communication, which allows particle coordinates to be communicated at the same time that force calculations are being carried out. Hyper-systolic algorithms reduce the communication complexity at the expense of increased memory demands. As an example of an application requiring large N, we use the systolic algorithm to carry out direct-summation simulations using 10^6 particles of the Brownian motion of the supermassive black hole at the center of the Milky Way galaxy. We predict a 3D random velocity of 0.4 km/s for the black hole.
    Comment: 33 pages, 10 postscript figures
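The ring-style systolic pass described in this abstract can be sketched as follows. This is a minimal single-process illustration, not the authors' parallel code: the "processors" are entries in a Python list, the list rotation stands in for the non-blocking ring communication, and all names and values are illustrative.

```python
# Single-process sketch of a ring-systolic direct-summation force pass.
# A real implementation would overlap MPI_Isend/MPI_Irecv with the force
# loop; here the "network" is a plain list rotation.
import math

G = 1.0  # gravitational constant in code units (illustrative)

def pairwise_accel(xi, xj, mj, eps=1e-3):
    """Softened gravitational acceleration on particle at xi due to (xj, mj)."""
    dx = [b - a for a, b in zip(xi, xj)]
    r2 = sum(d * d for d in dx) + eps * eps
    f = G * mj / (r2 * math.sqrt(r2))
    return [f * d for d in dx]

def systolic_forces(chunks):
    """chunks: one (positions, masses) block per 'processor' in a ring.
    Each step, every processor accumulates forces against the block it is
    currently holding, then all blocks shift one hop around the ring."""
    P = len(chunks)
    acc = [[[0.0, 0.0, 0.0] for _ in pos] for pos, _ in chunks]
    visiting = list(chunks)  # block currently held by each processor
    for _ in range(P):  # P steps: own block plus the P-1 visitors
        for p, (pos, _) in enumerate(chunks):
            vpos, vmass = visiting[p]
            for i, xi in enumerate(pos):
                for xj, mj in zip(vpos, vmass):
                    if xj is not xi:  # skip self-interaction
                        a = pairwise_accel(xi, xj, mj)
                        for k in range(3):
                            acc[p][i][k] += a[k]
        visiting = visiting[1:] + visiting[:1]  # rotate: pass block to neighbour
    return acc
```

After P steps every particle has seen every other particle exactly once, reproducing the direct O(N^2) summation while each processor only ever stores two blocks at a time.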

    A Study of Separations in Cryptography: New Results and New Models

    For more than 20 years, black-box impossibility results have been used to argue the infeasibility of constructing certain cryptographic primitives (e.g., key agreement) from others (e.g., one-way functions). In this dissertation we further extend the frontier of this field by demonstrating several new impossibility results as well as a new framework for studying a more general class of constructions. Our first two results demonstrate the impossibility of black-box constructions of two commonly used cryptographic primitives. In our first result we study the feasibility of black-box constructions of predicate encryption schemes from standard assumptions and demonstrate strong limitations on the types of schemes that can be constructed. In our second result we study black-box constructions of constant-round zero-knowledge proofs from one-way permutations and show that, under commonly believed complexity assumptions, no such constructions exist. A widely recognized limitation of black-box impossibility results, however, is that they say nothing about the usefulness of (known) non-black-box techniques. This state of affairs is unsatisfying, as we would at least like to rule out constructions using the set of techniques we have at our disposal. With this motivation in mind, in the final result of this dissertation we propose a new framework for black-box constructions with a non-black-box flavor, specifically, those that rely on zero-knowledge proofs relative to some oracle. Our framework is powerful enough to capture a large class of known constructions; however, we show that the original black-box separation of key agreement from one-way functions still holds even in this non-black-box setting that allows for zero-knowledge proofs.

    Multiple Track Performance of a Digital Magnetic Tape System : Experimental Study and Simulation using Parallel Processing Techniques

    The primary aim of the magnetic recording industry is to increase storage capacities and transfer rates whilst maintaining or reducing costs. In multiple-track tape systems, as recorded track dimensions decrease, higher-precision tape transport mechanisms and dedicated coding circuitry are required. This leads to increased manufacturing costs and a loss of flexibility. This thesis reports on the performance of a low-precision, low-cost multiple-track tape transport system. Software-based techniques to study system performance, and to compensate for the mechanical deficiencies of this system, were developed using occam and the transputer. The inherent parallelism of the multiple-track format was exploited by integrating a transputer into the recording channel to perform the signal processing tasks. An innovative model of the recording channel, written exclusively in occam, was developed. The effect of parameters such as data rate, track dimensions and head misregistration on system performance was determined from the detailed error profile produced. This model may be run on a network of transputers, allowing its speed of execution to be scaled to suit the investigation. These features, combined with its modular flexibility, make it a powerful tool that may be applied to other multiple-track systems, such as digital HDTV. A greater understanding of the effects of mechanical deficiencies on the performance of multiple-track systems was gained from this study. This led to the development of a software-based compensation scheme to reduce the effects of Lateral Head Displacement and allow low-cost tape transport mechanisms to be used with narrow, closely spaced tracks, facilitating higher packing densities.
The experimental and simulated investigation of system performance and the development of the model and compensation scheme using parallel processing techniques have led to the publication of a paper; two further publications are expected.
Thorn EMI, Central Research Laboratories, Hayes, Middlesex
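The compensation scheme for Lateral Head Displacement is described only at a high level above, so the following is a hypothetical sketch of one way such a software remapping could work; it is not the thesis's actual occam implementation, and all dimensions and names are invented for illustration. The idea: when the head stack drifts by roughly a whole track pitch, each head actually sits over a neighbouring track, so the logical-to-physical track assignment can be shifted in software.

```python
# Hypothetical sketch of software compensation for lateral head displacement.
def residual_overlap(track_width, displacement, pitch):
    """Fraction of the recorded track still under the head after removing
    whole-track shifts; below some threshold the read would fail."""
    residual = abs(displacement) % pitch
    residual = min(residual, pitch - residual)  # distance to nearest track centre
    return max(0.0, track_width - residual) / track_width

def compensate(read_blocks, displacement, pitch):
    """Reassign data read by a laterally displaced head stack: head i is
    actually sitting over physical track i + shift, so the block it read
    belongs to logical track i + shift."""
    shift = round(displacement / pitch)
    n = len(read_blocks)
    out = [None] * n  # None marks tracks whose data fell off the head stack
    for head, block in enumerate(read_blocks):
        logical = head + shift
        if 0 <= logical < n:
            out[logical] = block
    return out
```

The whole-pitch remapping handles gross displacement in software, while `residual_overlap` quantifies the remaining misregistration that still degrades the read signal.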

    An O(log n) Time Common CRCW PRAM Algorithm for Minimum Spanning Tree

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory.
    Office of Naval Research / N00014-85-K-057

    Activities of the Institute for Computer Applications in Science and Engineering

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April 1, 1985 through October 2, 1985 is summarized.

    A Dynamic Scaling Methodology for Improving Performance of Big Data Systems

    The continuous growth of data volume in fields such as healthcare, science, economics, and business has produced an overwhelming flow of data over the last decade. This flow raises challenges in processing, analyzing, and storing data, leading many systems to suffer from poor performance. Poor performance has negative impacts such as delays, unprocessed data, and increased response times. Processing huge amounts of data demands a powerful computational infrastructure to ensure that data processing and analysis succeed [7]. However, the architectures of these systems are not suited to processing such quantities of data, which calls for a methodology to improve the performance of systems that handle massive amounts of data. This thesis presents a novel dynamic scaling methodology to improve the performance of big data systems. The dynamic scaling methodology scales the system up based on several aspects of the big data perspective. These aspects are used by the helper project algorithm, which is designed to divide a task into small chunks to be processed by the system. These small chunks run on several virtual machines in parallel to enhance the system's runtime performance. In addition, the dynamic scaling methodology requires few modifications to the system it is applied to, which makes it easy to use. The dynamic scaling methodology improves the performance of big data systems significantly and thus provides a solution for performance failures in systems that process huge amounts of data. This study would be beneficial to IT researchers who focus on the performance of big data systems.
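The abstract does not specify the helper project algorithm in detail, so the sketch below only illustrates the generic chunk-and-parallelize pattern it describes. The worker pool stands in for the virtual machines mentioned above, and every name and parameter is illustrative rather than taken from the thesis.

```python
# Generic chunk-and-parallelize sketch: split one large task into small
# chunks and process them concurrently, preserving input order.
from concurrent.futures import ThreadPoolExecutor

def split_into_chunks(items, chunk_size):
    """Divide a task's items into fixed-size chunks."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def process_in_parallel(items, work_fn, chunk_size=1000, workers=4):
    """Run work_fn over every item, one chunk per worker at a time.
    The thread pool stands in for the parallel virtual machines."""
    chunks = split_into_chunks(items, chunk_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda c: [work_fn(x) for x in c], chunks))
    return [y for part in partials for y in part]  # reassemble in order
```

Because `ThreadPoolExecutor.map` yields results in submission order, the reassembled output matches what sequential processing would produce.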

    Symbolic analysis of bounded Petri nets

    This paper presents a symbolic approach for the analysis of bounded Petri nets. The structure and behavior of the Petri net is symbolically modeled by using Boolean functions, thus reducing reasoning about Petri nets to Boolean calculation. The set of reachable markings is calculated by symbolically firing the transitions in the Petri net. Highly concurrent systems suffer from the state explosion problem produced by an exponential increase of the number of reachable states. This state explosion is handled by using Binary Decision Diagrams (BDDs), which are capable of representing large sets of markings with small data structures. Petri nets have the ability to model a large variety of systems and the flexibility to describe causality, concurrency, and conditional relations. The manipulation of vast state spaces generated by Petri nets enables the efficient analysis of a wide range of problems, e.g., deadlock freeness, liveness, and concurrency. A number of examples are presented in order to show how large reachability sets can be generated, represented, and analyzed with moderate BDD sizes. By using this symbolic framework, properties requiring an exhaustive analysis of the reachability graph can be efficiently verified.
    Peer Reviewed. Postprint (published version).
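The reachability fixpoint described in this abstract can be sketched as follows. Real tools encode marking sets as BDDs; in this illustrative sketch a Python set of frozensets stands in for the symbolic representation so the fixpoint structure stays visible, and the mutual-exclusion net used below is an invented example, not one from the paper.

```python
# Sketch of reachability analysis for a 1-safe Petri net. A marking is a
# frozenset of marked places; a transition is a (preset, postset) pair.
def fire(marking, pre, post):
    """Fire a transition if enabled; return the successor marking or None."""
    if not pre <= marking:  # enabled iff every preset place is marked
        return None
    return frozenset((marking - pre) | post)

def reachable(m0, transitions):
    """Least fixpoint: repeatedly fire every transition from every known
    marking until the set of reachable markings stops growing. A BDD-based
    tool performs the same iteration on whole sets of markings at once."""
    reached = {frozenset(m0)}
    frontier = set(reached)
    while frontier:
        new = set()
        for m in frontier:
            for pre, post in transitions:
                succ = fire(m, frozenset(pre), frozenset(post))
                if succ is not None and succ not in reached:
                    new.add(succ)
        reached |= new
        frontier = new
    return reached
```

With the full reachability set in hand, properties such as mutual exclusion or deadlock freeness reduce to simple set queries over the markings.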