
    A Library for Pattern-based Sparse Matrix Vector Multiply

    Pattern-based Representation (PBR) is a novel approach to improving the performance of Sparse Matrix-Vector Multiply (SMVM) numerical kernels. Motivated by our observation that many matrices can be divided into blocks that share a small number of distinct patterns, we generate custom multiplication kernels for frequently recurring block patterns. The resulting reduction in index overhead significantly reduces memory bandwidth requirements and improves performance. Unlike existing methods, PBR requires neither detection of dense blocks nor zero filling, making it particularly advantageous for matrices that lack dense nonzero concentrations. SMVM kernels for PBR can benefit from explicit prefetching and vectorization, and are amenable to parallelization. The analysis and format conversion to PBR are implemented as a library, making it suitable for applications that generate matrices dynamically at runtime. We present sequential and parallel performance results for PBR on two current multicore architectures, which show that PBR outperforms available alternatives for the matrices to which it is applicable, and that the analysis and conversion overhead is amortized in realistic application scenarios.
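    A minimal C sketch of the pattern-based idea, with assumed names and a single illustrative block pattern; it is not the PBR library's actual interface. Each block stores only a pattern identifier instead of per-nonzero column indices, and a specialized kernel is dispatched per pattern, which is where the index-overhead reduction comes from.

    #include <stddef.h>

    typedef struct {
        size_t  nblocks;      /* number of nonzero blocks                 */
        size_t *block_row;    /* row offset of each block in y            */
        size_t *block_col;    /* column offset of each block in x         */
        int    *pattern_id;   /* which nonzero pattern this block follows */
        size_t *value_offset; /* start of each block's packed nonzeros    */
        double *values;       /* nonzeros only, packed block by block     */
    } pbr_matrix;

    /* One generated kernel per frequent pattern; here, a dense 1x4 row. */
    static void mult_pattern_1x4(const double *v, const double *x, double *y)
    {
        y[0] += v[0]*x[0] + v[1]*x[1] + v[2]*x[2] + v[3]*x[3];
    }

    void pbr_spmv(const pbr_matrix *A, const double *x, double *y)
    {
        for (size_t b = 0; b < A->nblocks; ++b) {
            const double *v  = A->values + A->value_offset[b];
            const double *xb = x + A->block_col[b];
            double       *yb = y + A->block_row[b];
            switch (A->pattern_id[b]) {  /* dispatch on the block's pattern */
            case 0: mult_pattern_1x4(v, xb, yb); break;
            /* ...one case per generated pattern kernel... */
            }
        }
    }

    In the real library the pattern kernels are generated from the matrix's own frequent block patterns; the fixed 1x4 case above only stands in for that machinery.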

    Using fast and accurate simulation to explore hardware/software trade-offs in the multi-core era

    Writing well-performing parallel programs is challenging in the multi-core processor era. Achieving good per-thread performance is in itself a balancing act between instruction-level parallelism, pipeline effects and good memory performance; multi-threaded programs complicate matters even further. These programs require synchronization, and are affected by interactions between threads through sharing of both processor resources and the cache hierarchy. At the Intel Exascience Lab, we are developing an architectural simulator called Sniper for simulating future exascale-era multi-core processors. Its goal is twofold: Sniper should assist hardware designers in making design decisions, while simultaneously providing software designers with a tool to gain insight into the behavior of their algorithms and allow for optimization. By taking architectural features into account, our simulator can provide more insight into parallel programs than can be obtained from existing performance analysis tools. This unique combination of hardware simulator and software performance analysis tool makes Sniper a useful tool for simultaneous exploration of the hardware and software design space for future high-performance multi-core systems.

    Self-organising comprehensive handover strategy for multi-tier LTE-advanced heterogeneous networks

    Long term evolution (LTE)-advanced was introduced as a true fourth generation (4G) technology, with new features and additional functions that satisfy the growing demands for quality and network coverage from network operators' subscribers. The term multi-tier has also recently been used to describe the heterogeneity of such networks, which combine various cooperative subnetwork systems and functionalities with self-organising capabilities. Using indoor short-range low-power cellular base stations, for example femtocells, in cooperation with existing long-range macrocells is considered the key technical challenge of this multi-tier configuration. Furthermore, the shortage of network spectrum is a major concern for network operators, forcing them to pay additional attention to overcoming the degradation in performance and quality of service in 4G HetNets. This study investigates handover between the different tiers of a heterogeneous LTE-advanced system as a critical attribute for planning interactive coordination within the proposed HetNet. The proposed comprehensive handover algorithm takes multiple factors into account in both the handover sensing and decision stages, based on received signal power, resource availability and handover optimisation, as well as prioritisation between macro and femto stations, to obtain maximum signal quality while avoiding unnecessary handovers.
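    A minimal C sketch of a multi-criteria handover decision in the spirit described above; the thresholds, weights and field names are illustrative assumptions rather than the paper's parameters.

    #include <stdbool.h>

    typedef enum { TIER_MACRO, TIER_FEMTO } tier_t;

    typedef struct {
        tier_t tier;
        double rsrp_dbm;       /* received signal power from this cell  */
        double free_capacity;  /* fraction of resource blocks available */
    } cell_t;

    /* Sensing stage: a candidate is considered only if it beats the serving
     * cell by a hysteresis margin (a time-to-trigger window is omitted).   */
    bool handover_candidate(const cell_t *serving, const cell_t *cand,
                            double hysteresis_db)
    {
        return cand->rsrp_dbm > serving->rsrp_dbm + hysteresis_db;
    }

    /* Decision stage: combine signal quality, resource availability and a
     * femto-first prioritisation to avoid unnecessary macro handovers.     */
    double handover_score(const cell_t *cand)
    {
        double score = cand->rsrp_dbm + 30.0 * cand->free_capacity;
        if (cand->tier == TIER_FEMTO)
            score += 5.0;   /* assumed prioritisation bonus for femtocells */
        return score;
    }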

    Multicore-optimized wavefront diamond blocking for optimizing stencil updates

    The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multi-core wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high number of bytes per lattice update required in the variable-coefficient case. Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemporary Intel processor.
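    A minimal C sketch of temporal blocking for a 1D 3-point Jacobi stencil using simple overlapped (redundant-halo) tiles; it is not the paper's multi-core wavefront diamond scheme, and the tile size and fused step count are assumed. It only illustrates the underlying traffic argument: fusing nt time steps per pass over memory cuts main-memory traffic by roughly that factor.

    #include <assert.h>
    #include <string.h>

    #define T 4      /* maximum time steps fused per tile (assumed) */
    #define B 1024   /* interior tile width (assumed)               */

    /* Advance `in` by nt <= T Jacobi steps into `out` (both length n),
     * processing one cache-sized tile at a time and recomputing a halo
     * of nt cells on each side so that tiles stay independent.       */
    void stencil_tblock(const double *in, double *out, long n, long nt)
    {
        double buf0[B + 2*T], buf1[B + 2*T];
        assert(nt >= 1 && nt <= T && n >= 2);
        out[0] = in[0];                        /* fixed boundary values */
        out[n-1] = in[n-1];
        for (long lo = 1; lo < n - 1; lo += B) {
            long hi   = lo + B < n - 1 ? lo + B : n - 1;  /* tile [lo,hi) */
            long from = lo - nt >= 0 ? lo - nt : 0;       /* clamped halo */
            long to   = hi + nt <= n ? hi + nt : n;
            long w    = to - from;
            memcpy(buf0, in + from, (size_t)w * sizeof(double));
            double *src = buf0, *dst = buf1;
            for (long s = 0; s < nt; ++s) {          /* nt steps in cache */
                for (long i = 1; i < w - 1; ++i)
                    dst[i] = 0.25*src[i-1] + 0.5*src[i] + 0.25*src[i+1];
                dst[0] = src[0];
                dst[w-1] = src[w-1];
                double *tmp = src; src = dst; dst = tmp;
            }
            /* write back only the interior cells owned by this tile */
            memcpy(out + lo, src + (lo - from),
                   (size_t)(hi - lo) * sizeof(double));
        }
    }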

    GPU Acceleration of Image Convolution using Spatially-varying Kernel

    Image subtraction in astronomy is a tool for discovering transient objects such as asteroids, extra-solar planets and supernovae. To match point spread functions (PSFs) between images of the same field taken at different times, a convolution technique is used. A computationally intensive spatially-varying kernel is particularly suitable for large-scale images. The underlying algorithm is inherently massively parallel due to unique kernel generation at every pixel location. The spatially-varying kernel cannot be efficiently computed through the convolution theorem, and thus does not lend itself to acceleration by the Fast Fourier Transform (FFT). This work presents results of an accelerated implementation of spatially-varying kernel image convolution on multi-core CPUs with OpenMP and on graphics processing units (GPUs). Typical speedups were a factor of 50 over ANSI C and a factor of 1000 over the initial IDL implementation, demonstrating that these techniques are a practical and high-impact path to terabyte-per-night image pipelines and petascale processing.
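    A minimal C sketch of a spatially-varying convolution on the CPU with OpenMP, matching the structure described above; the 5x5 kernel size and the position-dependent Gaussian in kernel_at() are assumed stand-ins for the actual PSF-matching kernel model.

    #include <math.h>

    #define K 5                     /* assumed kernel size (half-width K/2) */

    /* Assumed stand-in: a Gaussian whose width varies slowly with position;
     * in a real pipeline a unique PSF-matching kernel is generated here.   */
    static void kernel_at(long x, long y, float k[K*K])
    {
        float sigma = 1.0f + 0.5f * (float)(x + y) / 4096.0f;
        float sum = 0.0f;
        for (long j = 0; j < K; ++j)
            for (long i = 0; i < K; ++i) {
                float dx = (float)(i - K/2), dy = (float)(j - K/2);
                k[j*K + i] = expf(-(dx*dx + dy*dy) / (2.0f*sigma*sigma));
                sum += k[j*K + i];
            }
        for (long i = 0; i < K*K; ++i)
            k[i] /= sum;                               /* normalise kernel */
    }

    /* Direct convolution: every output pixel gets its own kernel, so the
     * loop is embarrassingly parallel and FFT-based convolution does not
     * apply; a GPU version would map one thread to each output pixel.    */
    void convolve_varying(const float *img, float *out, long w, long h)
    {
        #pragma omp parallel for schedule(static)
        for (long y = K/2; y < h - K/2; ++y) {
            float k[K*K];
            for (long x = K/2; x < w - K/2; ++x) {
                kernel_at(x, y, k);            /* unique kernel per pixel */
                float acc = 0.0f;
                for (long j = 0; j < K; ++j)
                    for (long i = 0; i < K; ++i)
                        acc += k[j*K + i] *
                               img[(y + j - K/2) * w + (x + i - K/2)];
                out[y*w + x] = acc;
            }
        }
    }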