
    Multi-start local search algorithms on GPU

    In practice, combinatorial optimization problems are complex and computationally time-intensive. Even though local search (LS) algorithms significantly reduce the cost of exploring the solution space, parallelism is required to accelerate the search process further. Indeed, LSs are inherently parallel, and three parallel models are commonly used to solve large combinatorial problems efficiently: the algorithmic level (multi-start model), the iteration level (parallel evaluation of the neighborhood), and the solution level (parallel evaluation of a single solution). The main objective of this paper is to address the algorithmic level on GPU architectures, where many LSs are executed in parallel. More precisely, we study different deployment schemes for the design of multi-start LSs on GPU, based on the popular hill climbing (HC), simulated annealing (SA), and tabu search (TS) algorithms.
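    The multi-start model maps naturally onto GPU threads: one thread per independent search. Below is a minimal CUDA sketch of that idea (not the paper's code), running one hill-climbing restart per thread on a toy bit-string objective; the objective function, parameters, and all names are illustrative assumptions.

```cuda
// Minimal sketch of the algorithmic-level (multi-start) model on GPU:
// one thread per independent hill-climbing search, each with its own RNG.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

#define N_BITS 32        // solution encoded as a 32-bit string
#define MAX_ITERS 1000

// Toy objective (illustrative): count adjacent bit pairs that differ.
__device__ int fitness(unsigned s) {
    return __popc((s ^ (s >> 1)) & 0x7FFFFFFFu);
}

__global__ void multi_start_hc(unsigned long long seed,
                               int* best_fit, unsigned* best_sol) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, t, 0, &rng);

    unsigned cur = curand(&rng);          // random restart point
    int cur_fit = fitness(cur);
    for (int it = 0; it < MAX_ITERS; ++it) {
        // Scan the single-bit-flip neighborhood; keep the best improving move.
        int best_move = -1, best_delta = 0;
        for (int b = 0; b < N_BITS; ++b) {
            int f = fitness(cur ^ (1u << b));
            if (f - cur_fit > best_delta) { best_delta = f - cur_fit; best_move = b; }
        }
        if (best_move < 0) break;         // local optimum reached
        cur ^= 1u << best_move;
        cur_fit += best_delta;
    }
    best_fit[t] = cur_fit;                // one result per search
    best_sol[t] = cur;
}

int main() {
    const int n_searches = 1024;
    int* d_fit; unsigned* d_sol;
    cudaMalloc(&d_fit, n_searches * sizeof(int));
    cudaMalloc(&d_sol, n_searches * sizeof(unsigned));
    multi_start_hc<<<n_searches / 128, 128>>>(42ULL, d_fit, d_sol);

    int fit[n_searches]; unsigned sol[n_searches];
    cudaMemcpy(fit, d_fit, sizeof(fit), cudaMemcpyDeviceToHost);
    cudaMemcpy(sol, d_sol, sizeof(sol), cudaMemcpyDeviceToHost);
    int best = 0;                          // host-side reduction over restarts
    for (int i = 1; i < n_searches; ++i) if (fit[i] > fit[best]) best = i;
    printf("best fitness %d from solution 0x%08x\n", fit[best], sol[best]);
    cudaFree(d_fit); cudaFree(d_sol);
    return 0;
}
```

    Simulated annealing or tabu search variants of the same scheme would replace the greedy move selection inside the loop; the iteration-level model would additionally parallelise the neighborhood scan itself.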

    Acceleration of stereo-matching on multi-core CPU and GPU

    This paper presents an accelerated version of a dense stereo-correspondence algorithm for two parallelism-enabled architectures: multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot head in the context of the CloPeMa research project, which focuses on the design of a new clothes-folding robot with real-time and high-resolution requirements for its vision system. The performance analysis shows that the parallelised stereo-matching algorithm is significantly accelerated, achieving speed-ups of 12x on multi-core CPU and 176x on GPU compared with a non-SIMD single-thread CPU implementation. To analyse the origin of the speed-up and to better inform the choice of hardware, the algorithm was broken into key sub-tasks and its performance was measured on four different hardware architectures.
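    For orientation, dense stereo correspondence is often formulated as window-based block matching with a winner-take-all disparity choice. The abstract does not specify the CloPeMa algorithm's actual cost function, so the following CUDA program is a generic sketch of that classic formulation under assumed parameters (window size, disparity range, 8-bit grayscale input).

```cuda
// Generic winner-take-all SAD block matching: one thread per left-image
// pixel scans all candidate disparities and keeps the lowest-cost one.
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cuda_runtime.h>

#define WIN 3            // half window: (2*WIN+1)^2 comparison block (assumed)
#define MAX_DISP 64      // disparity search range (assumed)

__global__ void sad_stereo(const unsigned char* left, const unsigned char* right,
                           unsigned char* disp, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < WIN + MAX_DISP || x >= width - WIN || y < WIN || y >= height - WIN)
        return;                                   // skip the unmatched border

    int best_d = 0;
    unsigned best_cost = 0xFFFFFFFFu;
    for (int d = 0; d < MAX_DISP; ++d) {
        unsigned cost = 0;                        // sum of absolute differences
        for (int dy = -WIN; dy <= WIN; ++dy)
            for (int dx = -WIN; dx <= WIN; ++dx) {
                int l = left [(y + dy) * width + (x + dx)];
                int r = right[(y + dy) * width + (x + dx - d)];
                cost += abs(l - r);
            }
        if (cost < best_cost) { best_cost = cost; best_d = d; }
    }
    disp[y * width + x] = (unsigned char)best_d;
}

int main() {
    const int W = 320, H = 240;
    std::vector<unsigned char> left(W * H), right(W * H);
    for (int i = 0; i < W * H; ++i) {             // synthetic test pair:
        left[i] = (unsigned char)(rand() & 0xFF); // identical images, so the
        right[i] = left[i];                       // true disparity is zero
    }
    unsigned char *d_l, *d_r, *d_d;
    cudaMalloc(&d_l, W * H); cudaMalloc(&d_r, W * H); cudaMalloc(&d_d, W * H);
    cudaMemcpy(d_l, left.data(), W * H, cudaMemcpyHostToDevice);
    cudaMemcpy(d_r, right.data(), W * H, cudaMemcpyHostToDevice);
    cudaMemset(d_d, 0, W * H);

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    sad_stereo<<<grid, block>>>(d_l, d_r, d_d, W, H);

    std::vector<unsigned char> disp(W * H);
    cudaMemcpy(disp.data(), d_d, W * H, cudaMemcpyDeviceToHost);
    printf("disparity at center: %d\n", disp[(H / 2) * W + W / 2]);
    cudaFree(d_l); cudaFree(d_r); cudaFree(d_d);
    return 0;
}
```

    Profiling the inner cost loop separately from transfer and post-processing stages is one way to obtain the kind of per-sub-task breakdown the paper reports.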

    Speculative Segmented Sum for Sparse Matrix-Vector Multiplication on Heterogeneous Processors

    Sparse matrix-vector multiplication (SpMV) is a central building block for scientific software and graph applications. Recently, heterogeneous processors composed of different types of cores have attracted much attention because of their flexible core configuration and high energy efficiency. In this paper, we propose a compressed sparse row (CSR) format based SpMV algorithm that utilizes both types of cores in a CPU-GPU heterogeneous processor. We first speculatively execute segmented sum operations on the GPU part of the heterogeneous processor, generating possibly incorrect results. The CPU part of the same chip is then triggered to rearrange the predicted partial sums into a correct resulting vector. On three heterogeneous processors from Intel, AMD and nVidia, using 20 sparse matrices as a benchmark suite, the experimental results show that our method obtains significant performance improvement over the best existing CSR-based SpMV algorithms. The source code of this work is available at https://github.com/bhSPARSE/Benchmark_SpMV_using_CSR
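    The core idea, reducing equal-sized chunks of nonzero products on the GPU and letting the CPU rearrange the resulting partial sums, can be sketched as follows. This is a simplified illustration, not the authors' implementation (see the repository above): the chunk size, the per-element binary search (which the paper's speculative scheme is designed to avoid), and the host-side scatter-add are assumptions made for brevity.

```cuda
// Simplified sketch: each GPU thread reduces a fixed-size chunk of nonzero
// products into per-row (row, partial) pairs; the CPU then combines pairs
// belonging to rows split across chunk boundaries.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

#define CHUNK 4   // nonzeros per thread (illustrative)

// Largest row r with row_ptr[r] <= i, i.e. the row owning nonzero i.
__device__ int row_of(const int* row_ptr, int n_rows, int i) {
    int lo = 0, hi = n_rows;
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        if (row_ptr[mid] <= i) lo = mid; else hi = mid;
    }
    return lo;
}

// Emits one (row, partial) pair per row segment touched by the chunk;
// unused output slots are marked with row = -1.
__global__ void spmv_partials(const int* row_ptr, const int* col_idx,
                              const double* val, const double* x,
                              int n_rows, int nnz,
                              int* out_row, double* out_val) {
    int begin = (blockIdx.x * blockDim.x + threadIdx.x) * CHUNK;
    if (begin >= nnz) return;
    int end = begin + CHUNK < nnz ? begin + CHUNK : nnz;

    int prev = -1, slot = begin;
    double acc = 0.0;
    for (int i = begin; i < end; ++i) {
        int r = row_of(row_ptr, n_rows, i);  // sketch: O(log n) per element
        if (prev != -1 && r != prev) {       // row segment finished
            out_row[slot] = prev; out_val[slot] = acc; ++slot; acc = 0.0;
        }
        prev = r;
        acc += val[i] * x[col_idx[i]];
    }
    out_row[slot] = prev; out_val[slot] = acc; ++slot;
    while (slot < end) { out_row[slot++] = -1; }
}

int main() {
    // Toy 4x4 CSR matrix; row 2 is empty.
    int h_row_ptr[] = {0, 2, 3, 3, 7};
    int h_col_idx[] = {0, 2, 1, 0, 1, 2, 3};
    double h_val[]  = {1, 2, 3, 4, 5, 6, 7};
    double h_x[]    = {1, 1, 1, 1};
    const int n_rows = 4, nnz = 7;

    int *d_row_ptr, *d_col_idx, *d_out_row;
    double *d_val, *d_x, *d_out_val;
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc(&d_col_idx, sizeof(h_col_idx));
    cudaMalloc(&d_val, sizeof(h_val));
    cudaMalloc(&d_x, sizeof(h_x));
    cudaMalloc(&d_out_row, nnz * sizeof(int));
    cudaMalloc(&d_out_val, nnz * sizeof(double));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, sizeof(h_col_idx), cudaMemcpyHostToDevice);
    cudaMemcpy(d_val, h_val, sizeof(h_val), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

    int n_threads = (nnz + CHUNK - 1) / CHUNK;
    spmv_partials<<<1, n_threads>>>(d_row_ptr, d_col_idx, d_val, d_x,
                                    n_rows, nnz, d_out_row, d_out_val);

    std::vector<int> out_row(nnz); std::vector<double> out_val(nnz);
    cudaMemcpy(out_row.data(), d_out_row, nnz * sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(out_val.data(), d_out_val, nnz * sizeof(double), cudaMemcpyDeviceToHost);

    // CPU fix-up: scatter-add the partials into y; rows split across
    // chunk boundaries are combined correctly here.
    std::vector<double> y(n_rows, 0.0);
    for (int s = 0; s < nnz; ++s)
        if (out_row[s] >= 0) y[out_row[s]] += out_val[s];
    for (int r = 0; r < n_rows; ++r) printf("y[%d] = %g\n", r, y[r]);
    return 0;
}
```

    Rows that span a chunk boundary produce partial sums in several chunks; combining them on the CPU is the fix-up step that tolerates the GPU's "possibly incorrect" speculative output.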