    Compiling Recurrences over Dense and Sparse Arrays

    Recurrence equations lie at the heart of many computational paradigms, including dynamic programming, graph analysis, and linear solvers. These equations are often expensive to compute, and much work has gone into optimizing them for different situations. The set of recurrence implementations is a large design space spanning the set of all recurrences (e.g., the Viterbi and Floyd-Warshall algorithms), the choice of data structures (e.g., dense and sparse matrices), and the set of different loop orders. Optimized library implementations do not exist for most points in this design space, so developers must often manually implement and optimize recurrences. We present a general framework for compiling recurrence equations into native code corresponding to any valid point in this design space. In this framework, users specify a system of recurrences, the data structures for storing the inputs and outputs, and a set of scheduling primitives for optimization. A greedy algorithm then takes this specification and lowers it into a native program that respects the dependencies inherent in the recurrence equations. We describe the compiler transformations necessary to lower this high-level specification into native parallel code for either sparse or dense data structures, and provide an algorithm for determining whether the recurrence system is solvable with the provided scheduling primitives. We evaluate the performance and correctness of the generated code on computational tasks from domains including dense and sparse matrix solvers, dynamic programming, graph problems, and sparse tensor algebra. We demonstrate that the generated code has performance competitive with handwritten library implementations.
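    To make the design space concrete, here is a minimal sketch (not the paper's framework or its generated code) of one point in it: the Floyd-Warshall recurrence named in the abstract, hand-lowered to dense-matrix loops in Python.

    ```python
    # The Floyd-Warshall all-pairs shortest-path recurrence
    #   D_k[i][j] = min(D_{k-1}[i][j], D_{k-1}[i][k] + D_{k-1}[k][j])
    # lowered by hand to triple loops over a dense n x n matrix.
    INF = float("inf")

    def floyd_warshall(dist):
        """In-place all-pairs shortest paths over a dense n x n matrix."""
        n = len(dist)
        for k in range(n):          # stage of the recurrence
            for i in range(n):
                for j in range(n):
                    # In-place update is safe: entries in row k and
                    # column k are never improved during stage k.
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist
    ```

    A recurrence compiler of the kind described would derive such loops from the equation itself, and could just as well emit a sparse-matrix or differently-ordered variant from the same specification.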

    A Comparative Study of Some Pseudorandom Number Generators

    We present results of an extensive test program of a group of pseudorandom number generators which are commonly used in applications of physics, in particular in Monte Carlo simulations. The generators include public domain programs, manufacturer-installed routines, and a random number sequence produced from physical noise. We start with traditional statistical tests, followed by detailed bit-level and visual tests. The computational speed of the various algorithms is also scrutinized. Our results allow direct comparisons between the properties of different generators, as well as an assessment of the efficiency of the various test methods. This information provides the best available criterion for choosing the best possible generator for a given problem. However, in light of recent problems reported with some of these generators, we also discuss the importance of developing more refined physical tests to find possible correlations not revealed by the present test methods.
    Comment: University of Helsinki preprint HU-TFT-93-22 (minor changes in Tables 2 and 7, and in the text, correspondingly)
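    As an illustration only (not one of the study's actual test batteries), the following sketch shows the flavour of a "traditional statistical test": a chi-square frequency test binning samples from Python's built-in Mersenne Twister and checking the counts against uniformity.

    ```python
    # Chi-square frequency test for uniformity of PRNG output in [0, 1).
    import random

    def chi_square_uniform(samples, bins=10):
        """Chi-square statistic of bin counts against a uniform expectation."""
        counts = [0] * bins
        for x in samples:
            counts[int(x * bins)] += 1
        expected = len(samples) / bins
        return sum((c - expected) ** 2 / expected for c in counts)

    rng = random.Random(12345)                    # fixed seed for reproducibility
    stat = chi_square_uniform([rng.random() for _ in range(100_000)])
    # For 9 degrees of freedom, a healthy generator should stay well
    # below the 0.1% critical value of about 27.9.
    ```

    Bit-level and visual tests, as the abstract notes, probe correlations that such aggregate statistics can miss.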

    Design and optimisation of scientific programs in a categorical language

    This thesis presents an investigation into the use of advanced computer languages for scientific computing, an examination of performance issues that arise from using such languages for such a task, and a step toward achieving portable performance from compilers by attacking these problems in a way that compensates for the complexity of and differences between modern computer architectures. The language employed is Aldor, a functional language from computer algebra, and the scientific computing area is a subset of the family of iterative linear equation solvers applied to sparse systems. The linear equation solvers that are considered have much common structure, and this is factored out and represented explicitly in the language as a framework, by means of categories and domains. The flexibility introduced by decomposing the algorithms and the objects they act on into separate modules has a strong performance impact due to its negative effect on temporal locality. This necessitates breaking the barriers between modules to perform cross-component optimisation. In this instance the task reduces to one of collective loop fusion and array contraction.
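    A toy illustration (in Python, not Aldor) of the two optimisations named at the end of the abstract: fusing two loops and contracting the intermediate array away, trading modularity for temporal locality.

    ```python
    # Unfused, modular form: each stage materialises a full temporary array.
    def axpy_then_scale_unfused(a, x, y, c):
        t = [a * xi + yi for xi, yi in zip(x, y)]   # temporary array t
        return [c * ti for ti in t]

    # Fused, contracted form: the loops are merged and the temporary array
    # collapses to one scalar per element, so each value is produced and
    # consumed while still hot in cache.
    def axpy_then_scale_fused(a, x, y, c):
        return [c * (a * xi + yi) for xi, yi in zip(x, y)]
    ```

    Cross-component optimisation performs this transformation across module boundaries that separate compilation would otherwise keep opaque.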

    About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far do not match the effort invested. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.
    Keywords: Parallel Programming, Parallel Execution, Collaborative Systems, Collaborative Parallel Execution

    Monte Carlo Based Seismic Hazard Model for Southern Ghana

    Seismic hazard assessment involves quantifying the ground motion intensities to be expected at a particular site or region. It is a crucial aspect of any seismic hazard mitigation program. Conventional probabilistic seismic hazard assessment is highly reliant on the past seismic activity of a region; for regions with lower rates of seismicity, where seismological data is scanty, a stochastic (Monte Carlo based) modelling approach is desirable. This study presents a Monte Carlo simulation hazard model for Southern Ghana. Six sites are selected in order to determine their expected ground motion intensities (peak ground acceleration and spectral acceleration). Results revealed Accra and Tema to be the most seismically active cities in Southern Ghana, with Ho and Cape Coast having relatively lower seismicity. The expected peak ground acceleration corresponding to a 10% probability of exceedance in 50 years for the proposed seismic hazard model was as high as 0.06 g for the cities considered. However, at the rather extreme 2% probability of exceedance in 50 years, a PGA of 0.5 g can be anticipated. Evidently, the 2% in 50 years uniform hazard spectrum for the highly seismic cities recorded high spectral accelerations at natural vibrational periods within the range of about 0.1-0.3 sec. This indicates that low-rise structures in these cities may be exposed to high seismic risk.
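    The Monte Carlo approach can be sketched schematically as follows. The catalogue model and attenuation relation below are placeholders chosen for illustration, not the study's: events arrive as a Poisson process, magnitudes follow a toy Gutenberg-Richter distribution, and a made-up attenuation maps magnitude to PGA at a fixed site. The exceedance probability is then just the fraction of simulated 50-year catalogues whose peak PGA exceeds a given level.

    ```python
    # Schematic Monte Carlo hazard loop: simulate many 50-year catalogues
    # and count how often peak PGA at a site exceeds a threshold.
    import math
    import random

    def simulate_exceedance(pga_level, years=50, trials=5000,
                            annual_rate=0.3, b=1.0, m_min=4.0, seed=7):
        """Fraction of simulated catalogues whose peak PGA (g) exceeds
        pga_level. Toy magnitude and attenuation models; illustrative only."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            peak = 0.0
            t = rng.expovariate(annual_rate)      # Poisson inter-event times
            while t < years:
                # Gutenberg-Richter: exceedance of magnitude m ~ 10^(-b(m - m_min))
                m = m_min + rng.expovariate(b * math.log(10))
                # Placeholder attenuation at a fixed distance (not a real GMPE)
                peak = max(peak, 10 ** (0.3 * m - 2.5))
                t += rng.expovariate(annual_rate)
            hits += peak > pga_level
        return hits / trials
    ```

    Higher PGA levels are necessarily exceeded less often, which is why the 2%-in-50-years motion (0.5 g) exceeds the 10%-in-50-years motion (0.06 g) in the study's results.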

    Active fault databases: building a bridge between earthquake geologists and seismic hazard practitioners, the case of the QAFI v.3 database

    Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to maintain standard quality and uniform criteria. Desirably, active fault databases should also indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of the maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2, either for proper use in seismic hazard applications or as an informative source for non-specialized users. However, we already envision new improvements for a future update.

    USRA/RIACS

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on 6 June 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under a cooperative agreement with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. A flexible scientific staff is provided through a university faculty visitor program, a postdoctoral program, and a student visitor program. Not only does this provide appropriate expertise, but it also introduces scientists outside of NASA to NASA problems. A small group of core RIACS staff provides continuity and interacts with an ARC technical monitor and scientific advisory group to determine the RIACS mission. RIACS activities are reviewed and monitored by a USRA advisory council and ARC technical monitor. Research at RIACS is currently being done in the following areas: Parallel Computing; Advanced Methods for Scientific Computing; Learning Systems; High Performance Networks and Technology; Graphics, Visualization, and Virtual Environments.