    Exploiting Parallelization in Spatial Statistics: an Applied Survey using R.

    Computing tasks may be parallelized top-down, by splitting them into per-node chunks when the tasks permit this kind of division, and particularly when there is little or no need for communication between the nodes. Another approach is to parallelize bottom-up, by substituting multi-threaded low-level functions for single-threaded ones in otherwise unchanged user-level functions. This survey examines the timings of typical spatial data analysis tasks across a range of data sizes and hardware under different combinations of these two approaches. Conclusions are drawn concerning choices of alternatives for parallelization, and attention is drawn to factors conditioning those choices.

    Keywords: Statistical software; Parallelization; Optimized linear algebra subroutines; Multicore processors; Spatial statistics.
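    The two strategies the abstract contrasts can be illustrated outside of R. Below is a minimal C++ sketch of the top-down approach, assuming a toy reduction workload and a fixed thread count (both illustrative, not taken from the survey); the bottom-up alternative is noted in the comments, since it amounts to linking otherwise serial user-level code against multi-threaded low-level kernels.

    // Top-down parallelization: split the task into per-worker chunks that
    // need no communication until a final reduction.
    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    double parallel_sum(const std::vector<double>& x, unsigned nthreads) {
        std::vector<double> partial(nthreads, 0.0);
        std::vector<std::thread> workers;
        const std::size_t chunk = x.size() / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            const std::size_t lo = t * chunk;
            const std::size_t hi = (t + 1 == nthreads) ? x.size() : lo + chunk;
            // Each worker reads only its own slice and writes its own slot,
            // so no locking or message passing is required.
            workers.emplace_back([&x, &partial, t, lo, hi] {
                partial[t] = std::accumulate(x.begin() + lo, x.begin() + hi, 0.0);
            });
        }
        for (auto& w : workers) w.join();
        // The bottom-up alternative keeps the user-level code serial and
        // instead swaps in multi-threaded low-level routines (e.g. a
        // multi-threaded BLAS), leaving calls like this one unchanged.
        return std::accumulate(partial.begin(), partial.end(), 0.0);
    }

    int main() {
        std::vector<double> x(1000000, 1.0);
        std::cout << parallel_sum(x, 4) << '\n';  // prints 1e+06
    }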

    Expression Templates Revisited: A Performance Analysis of the Current ET Methodology

    In the last decade, Expression Templates (ETs) have gained a reputation as an efficient performance optimization tool for C++ codes. This reputation builds on several ET-based linear algebra frameworks that aim to combine elegant and high-performance C++ code. On closer examination, however, the assumption that ETs are a performance optimization technique cannot be maintained. In this paper we demonstrate and explain the inability of current ET-based frameworks to deliver high performance for dense and sparse linear algebra operations, and introduce a new "smart" ET implementation that truly allows the combination of high-performance code with the elegance and maintainability of a domain-specific language.

    Comment: 16 pages, 7 figures
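    For readers unfamiliar with the technique under analysis, the following is a minimal textbook expression-template sketch (not the paper's "smart" implementation, and with illustrative type names): operator+ returns a lightweight proxy instead of an evaluated temporary, and the whole expression is fused into a single loop at assignment.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // CRTP base: tags every type that may appear in an expression tree.
    template <typename E>
    struct Expr {};

    // Proxy node for a sum: stores references and performs no computation
    // until it is indexed.
    template <typename L, typename R>
    struct Add : Expr<Add<L, R>> {
        const L& l; const R& r;
        Add(const L& l, const R& r) : l(l), r(r) {}
        double operator[](std::size_t i) const { return l[i] + r[i]; }
        std::size_t size() const { return l.size(); }
    };

    struct Vec : Expr<Vec> {
        std::vector<double> data;
        explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
        double operator[](std::size_t i) const { return data[i]; }
        std::size_t size() const { return data.size(); }
        // Assigning from an expression runs ONE fused loop: no temporary
        // vectors are allocated for the intermediate sums.
        template <typename E>
        Vec& operator=(const Expr<E>& expr) {
            const E& e = static_cast<const E&>(expr);
            for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
            return *this;
        }
    };

    // operator+ only builds the proxy; evaluation is deferred.
    template <typename L, typename R>
    Add<L, R> operator+(const Expr<L>& l, const Expr<R>& r) {
        return Add<L, R>(static_cast<const L&>(l), static_cast<const R&>(r));
    }

    int main() {
        Vec a(5, 1.0), b(5, 2.0), c(5, 3.0), out(5);
        out = a + b + c;              // builds Add<Add<Vec, Vec>, Vec>
        std::cout << out[0] << '\n';  // prints 6
    }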

    Architecture-Aware Configuration and Scheduling of Matrix Multiplication on Asymmetric Multicore Processors

    Asymmetric multicore processors (AMPs) have recently emerged as an appealing technology for severely energy-constrained environments, especially in mobile appliances where heterogeneity in applications is mainstream. In addition, given the growing interest in low-power high-performance computing, this type of architecture is also being investigated as a means to improve the throughput-per-Watt of complex scientific applications. In this paper, we design and embed several architecture-aware optimizations into a multi-threaded general matrix multiplication (gemm), a key operation of the BLAS, in order to obtain a high-performance implementation for ARM big.LITTLE AMPs. Our solution is based on the reference implementation of gemm in the BLIS library, and integrates a cache-aware configuration as well as asymmetric static and dynamic scheduling strategies that carefully tune and distribute the operation's micro-kernels among the big and LITTLE cores of the target processor. Experimental results on a Samsung Exynos 5422, a system-on-chip with ARM Cortex-A15 and Cortex-A7 clusters that implements the big.LITTLE model, show that our cache-aware versions of gemm with asymmetric scheduling attain important performance gains over their architecture-oblivious counterparts, while exploiting all the resources of the AMP to deliver considerable energy efficiency.
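    As a concrete illustration of asymmetric-static scheduling, here is a C++ sketch that splits the row dimension of a matrix product between a big-cluster and a LITTLE-cluster worker in proportion to an assumed 2x per-core speed ratio. The core counts, the speed ratio, and the deliberately naive kernel are all illustrative assumptions; the paper's actual implementation tunes BLIS micro-kernels and cache blocking per cluster.

    #include <cstddef>
    #include <thread>
    #include <vector>

    // Naive row-block kernel: C += A * B over rows [r0, r1), row-major n x n.
    static void gemm_rows(const double* A, const double* B, double* C,
                          std::size_t n, std::size_t r0, std::size_t r1) {
        for (std::size_t i = r0; i < r1; ++i)
            for (std::size_t k = 0; k < n; ++k)
                for (std::size_t j = 0; j < n; ++j)
                    C[i * n + j] += A[i * n + k] * B[k * n + j];
    }

    int main() {
        const std::size_t n = 256;
        std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);

        // Assumed speed model: four big cores, four LITTLE cores, each big
        // core roughly 2x faster (illustrative numbers, not measured ones).
        const double big_cores = 4, little_cores = 4, ratio = 2.0;
        const double big_share = big_cores * ratio
                               / (big_cores * ratio + little_cores);
        const std::size_t split = static_cast<std::size_t>(n * big_share);

        // Asymmetric-static schedule: one worker per cluster, with the big
        // cluster assigned proportionally more rows. A real AMP deployment
        // would also pin each thread to its cluster, which is omitted here.
        std::thread big(gemm_rows, A.data(), B.data(), C.data(), n,
                        std::size_t{0}, split);
        std::thread little(gemm_rows, A.data(), B.data(), C.data(), n,
                           split, n);
        big.join();
        little.join();
        return C[0] == static_cast<double>(n) ? 0 : 1;  // each entry equals n
    }

    A dynamic variant of this schedule would instead hand out row blocks from a shared counter, so that faster cores naturally claim more work without a precomputed split.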