Acceleration of stereo-matching on multi-core CPU and GPU
This paper presents an accelerated version of a dense stereo-correspondence algorithm for two parallelism-enabled architectures: multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot head in the context of the CloPeMa research project, which focuses on the design of a new clothes-folding robot with real-time and high-resolution requirements for the vision system. The performance analysis shows that the parallelised stereo-matching algorithm is significantly accelerated, achieving 12x and 176x speed-ups on the multi-core CPU and GPU respectively, compared with a non-SIMD single-thread CPU implementation. To analyse the origin of the speed-up and gain deeper insight into the choice of optimal hardware, the algorithm was broken into key sub-tasks and its performance was tested on four different hardware architectures.
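The core of dense stereo correspondence can be illustrated with a minimal sum-of-absolute-differences (SAD) block matcher. This is a generic sketch, not the CloPeMa pipeline; the window size and disparity range are illustrative. Each output row is independent of the others, which is what makes the algorithm amenable to multi-core and GPU parallelisation.

```python
# Minimal dense block-matching stereo sketch (SAD cost over a disparity
# range). Generic illustration, not the paper's implementation.

def sad_disparity(left, right, max_disp=4, win=1):
    """left/right: 2-D lists of grayscale values (same shape).
    Returns a disparity map of the same shape (borders left at 0)."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(win, h - win):              # rows are independent -> parallelisable
        for x in range(win + max_disp, w - win):
            best_d, best_cost = 0, float("inf")
            for d in range(max_disp + 1):      # scan candidate disparities
                cost = 0
                for dy in range(-win, win + 1):
                    for dx in range(-win, win + 1):
                        cost += abs(left[y + dy][x + dx]
                                    - right[y + dy][x + dx - d])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp
```

Because the per-pixel searches share no state, a multi-core version can assign image rows to threads, and a GPU version can assign one thread per pixel.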
A portable platform for accelerated PIC codes and its application to GPUs using OpenACC
We present a portable platform, called PIC_ENGINE, for accelerating Particle-In-Cell (PIC) codes on heterogeneous many-core architectures such as Graphics Processing Units (GPUs). The aim of this development is to enable efficient simulations on future exascale systems by allowing different parallelization strategies depending on the application problem and the specific architecture. To this end, the platform contains the basic steps of the PIC algorithm and has been designed as a test bed for different algorithmic options and data structures. Among the architectures this engine can explore, particular attention is given here to systems equipped with GPUs. The study demonstrates that our portable PIC implementation, based on the OpenACC programming model, can achieve performance closely matching theoretical predictions. Using the Cray XC30 system, Piz Daint, at the Swiss National Supercomputing Centre (CSCS), we show that PIC_ENGINE running on an NVIDIA Kepler K20X GPU can outperform the same code on an Intel Sandy Bridge 8-core CPU by a factor of 3.4.
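One of the basic PIC steps mentioned above, the particle push, can be sketched as follows. The function name, the 1-D periodic setup, and the nearest-cell field gather are illustrative assumptions, not PIC_ENGINE's API. The per-particle loop has no cross-particle dependencies, which is the data parallelism a directive-based model like OpenACC exploits on a GPU.

```python
# Sketch of the particle-push step of a PIC cycle: gather the field at
# each particle, kick the velocity, drift the position (1-D, periodic).
# Illustrative only; not the PIC_ENGINE data layout or API.

def push_particles(x, v, e_field, dx, dt, qm, length):
    """x, v: particle positions/velocities; e_field: field at cell centres.
    Returns updated (x, v) with periodic boundaries of period `length`."""
    new_x, new_v = [], []
    n_cells = len(e_field)
    for xi, vi in zip(x, v):                # independent per particle
        cell = int(xi / dx) % n_cells       # gather: nearest-cell field
        vi = vi + qm * e_field[cell] * dt   # kick
        xi = (xi + vi * dt) % length        # drift with periodic wrap
        new_x.append(xi)
        new_v.append(vi)
    return new_x, new_v
```

In an OpenACC port, this loop body would sit under a `parallel loop` directive, with the choice of particle data layout (array-of-structs vs. struct-of-arrays) being one of the algorithmic options such a test bed can compare.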
Surface Reconstruction from Scattered Point via RBF Interpolation on GPU
In this paper we describe a parallel implicit method based on radial basis functions (RBF) for surface reconstruction. The applicability of RBF methods is hindered by their computational demand, which requires the solution of linear systems whose size equals the number of data points. Our reconstruction implementation relies on parallel scientific libraries and targets massively multi-core architectures, namely Graphics Processor Units (GPUs). The performance of the proposed method, in terms of reconstruction accuracy and computing time, shows that the RBF interpolant can be very effective for such problems.
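The dense linear system at the heart of RBF interpolation, with one row per data point, can be sketched in 1-D as below. The Gaussian kernel and the naive direct solve are illustrative assumptions; the paper's GPU implementation relies on parallel scientific libraries rather than this elimination.

```python
# 1-D radial-basis-function interpolation sketch: build the dense n x n
# kernel system and solve it directly. The O(n^3) solve is exactly the
# cost that grows with the number of data points; illustration only.

def rbf_interpolant(xs, ys, eps=1.0):
    """Fit weights w so that sum_j w_j * phi(|x - x_j|) matches ys."""
    import math
    n = len(xs)
    phi = lambda r: math.exp(-(eps * r) ** 2)           # Gaussian RBF
    # Augmented n x (n+1) system [A | y] with A_ij = phi(|x_i - x_j|).
    a = [[phi(abs(xs[i] - xs[j])) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):                                # elimination w/ pivoting
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for k in range(col, n + 1):
                a[row][k] -= f * a[col][k]
    w = [0.0] * n
    for row in range(n - 1, -1, -1):                    # back-substitution
        s = a[row][n] - sum(a[row][k] * w[k] for k in range(row + 1, n))
        w[row] = s / a[row][row]
    return lambda x: sum(wj * phi(abs(x - xj)) for wj, xj in zip(w, xs))
```

The surface-reconstruction case replaces the scalar distance with 3-D point distances, but the structure of the system, and hence the suitability of GPU dense/iterative solvers, is the same.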
A Streaming Multi-GPU Implementation of Image Simulation Algorithms for Scanning Transmission Electron Microscopy
Simulation of atomic resolution image formation in scanning transmission
electron microscopy can require significant computation times using traditional
methods. A recently developed method, termed plane-wave reciprocal-space
interpolated scattering matrix (PRISM), demonstrates potential for significant
acceleration of such simulations with negligible loss of accuracy. Here we
present a software package called Prismatic for parallelized simulation of
image formation in scanning transmission electron microscopy (STEM) using both
the PRISM and multislice methods. By distributing the workload between multiple
CUDA-enabled GPUs and multicore processors, accelerations as high as 1000x for
PRISM and 30x for multislice are achieved relative to traditional multislice
implementations using a single 4-GPU machine. We demonstrate a potentially
important application of Prismatic, using it to compute images for atomic
electron tomography at sufficient speeds to include in the reconstruction
pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.
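The workload distribution described above can be sketched generically: STEM probe positions are independent, so they can be divided among workers. Here threads stand in for the CUDA streams and GPU contexts the package uses, and `simulate_probe` is a hypothetical placeholder for the per-probe PRISM or multislice computation, not Prismatic's API.

```python
# Sketch of distributing independent probe positions across workers.
# Threads stand in for GPU streams; simulate_probe is a placeholder.

from concurrent.futures import ThreadPoolExecutor

def simulate_probe(pos):
    # Placeholder for one probe's image computation.
    x, y = pos
    return x * x + y * y

def run_batch(positions, n_workers=4):
    """Map probes to workers; results come back in probe order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(simulate_probe, positions))
```

Because `map` preserves input order, results can be assembled into the output image regardless of which worker finished first, which is the property that lets a streaming implementation overlap computation across devices.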
Preparing sparse solvers for exascale computing.
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing Project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices, where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current successes and upcoming challenges. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
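The concurrency these solvers must expose can be illustrated with the kernel at the core of most of them: a compressed-sparse-row (CSR) matrix-vector product. This plain-Python version is only a sketch of the data structure, not the ECP solvers' code; each output row is computed independently, which is what a highly parallel implementation exploits across node devices.

```python
# Generic CSR sparse matrix-vector product y = A @ x.
# vals holds the nonzeros, col_idx their column indices, and
# row_ptr[r]:row_ptr[r+1] delimits row r's entries.

def csr_matvec(vals, col_idx, row_ptr, x):
    """y = A @ x for A stored in CSR form."""
    y = []
    for row in range(len(row_ptr) - 1):       # rows are independent
        s = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            s += vals[k] * x[col_idx[k]]
        y.append(s)
    return y
```

The irregular, index-driven access to `x` is also where latency hiding matters: gathers from scattered memory locations dominate the cost on both CPUs and accelerators.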