SOLUTIONS FOR OPTIMIZING THE DATA PARALLEL PREFIX SUM ALGORITHM USING THE COMPUTE UNIFIED DEVICE ARCHITECTURE
In this paper, we analyze solutions for optimizing the data parallel prefix sum function using the Compute Unified Device Architecture (CUDA), which provides a viable solution for accelerating a broad class of applications. The parallel prefix sum function is an essential building block for many data mining algorithms, and therefore its optimization facilitates the whole data mining process. Finally, we benchmark and evaluate the performance of the optimized parallel prefix sum building block in CUDA.
Keywords: CUDA, threads, GPGPU, parallel prefix sum, parallel processing, task synchronization, warp
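To make the building block concrete, here is a minimal CUDA sketch of an intra-warp inclusive prefix sum using the modern __shfl_up_sync intrinsic. It illustrates the general technique rather than the paper's specific implementation, and it assumes the input length is a multiple of the warp size.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Inclusive prefix sum within each warp via shuffle intrinsics.
    // Assumes n is a multiple of 32 so every lane in a warp is active.
    __global__ void warpInclusiveScan(const int *in, int *out, int n)
    {
        int lane = threadIdx.x & 31;   // lane index within the warp
        int idx  = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= n) return;

        int val = in[idx];
        // Each step adds the value held by the lane 'offset' positions below.
        for (int offset = 1; offset < 32; offset <<= 1) {
            int up = __shfl_up_sync(0xffffffff, val, offset);
            if (lane >= offset) val += up;
        }
        out[idx] = val;
    }

    int main()
    {
        const int n = 32;
        int h_in[n], h_out[n];
        for (int i = 0; i < n; ++i) h_in[i] = 1;   // scan of all ones -> 1..32

        int *d_in, *d_out;
        cudaMalloc(&d_in,  n * sizeof(int));
        cudaMalloc(&d_out, n * sizeof(int));
        cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);

        warpInclusiveScan<<<1, 32>>>(d_in, d_out, n);
        cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);

        printf("last element: %d\n", h_out[n - 1]);  // expect 32
        cudaFree(d_in); cudaFree(d_out);
        return 0;
    }

A full scan over arbitrary lengths layers block-level and grid-level combining stages on top of this warp primitive; the warp stage is the piece where the architecture-specific optimizations discussed in the paper matter most.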
Algorithms for Extracting Frequent Episodes in the Process of Temporal Data Mining
An important aspect of the data mining process is the discovery of patterns having a great influence on the studied problem. The purpose of this paper is to study frequent episode data mining through the use of parallel pattern discovery algorithms. Parallel pattern discovery algorithms offer better performance and scalability, so they are of great interest to the data mining research community. We highlight several parallel and distributed frequent pattern mining algorithms on various platforms and present a comparative study of their main features. The study takes into account the new possibilities that arise with the emerging Compute Unified Device Architecture (CUDA) in the latest generation of graphics processing units. Given their high performance, low cost and growing feature set, GPUs are viable solutions for an efficient implementation of frequent pattern mining algorithms.
Keywords: Frequent Pattern Mining, Parallel Computing, Dynamic Load Balancing, Temporal Data Mining, CUDA, GPU, Fermi, Thread
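As a rough illustration of the data-parallel counting step such algorithms build on, the following CUDA sketch counts the support of one candidate itemset over transactions encoded as 64-bit item bitmasks. The encoding and the one-thread-per-transaction mapping are assumptions made for the example, not the design of any surveyed algorithm.

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per transaction: test candidate containment, count matches.
    __global__ void countSupport(const unsigned long long *transactions,
                                 int numTransactions,
                                 unsigned long long candidate,
                                 int *support)
    {
        int t = blockIdx.x * blockDim.x + threadIdx.x;
        if (t >= numTransactions) return;
        // The candidate is contained iff every one of its item bits is set.
        if ((transactions[t] & candidate) == candidate)
            atomicAdd(support, 1);
    }

    int main()
    {
        const int n = 4;
        // Items 0..63 as bit positions; candidate = items {0, 2} = 0x5.
        unsigned long long h_tx[n] = {0x5ULL, 0x7ULL, 0x4ULL, 0xFULL};
        unsigned long long candidate = 0x5ULL;

        unsigned long long *d_tx; int *d_support, h_support = 0;
        cudaMalloc(&d_tx, n * sizeof(unsigned long long));
        cudaMalloc(&d_support, sizeof(int));
        cudaMemcpy(d_tx, h_tx, n * sizeof(unsigned long long),
                   cudaMemcpyHostToDevice);
        cudaMemcpy(d_support, &h_support, sizeof(int), cudaMemcpyHostToDevice);

        countSupport<<<1, 64>>>(d_tx, n, candidate, d_support);
        cudaMemcpy(&h_support, d_support, sizeof(int), cudaMemcpyDeviceToHost);
        printf("support = %d\n", h_support);  // 0x5, 0x7, 0xF match -> 3
        cudaFree(d_tx); cudaFree(d_support);
        return 0;
    }

The dynamic load balancing the keywords mention becomes relevant once many candidates of varying cost are counted concurrently, which this single-candidate sketch deliberately leaves out.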
High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm
We implement a master-slave parallel genetic algorithm (PGA) with a bespoke
log-likelihood fitness function to identify emergent clusters within price
evolutions. We use graphics processing units (GPUs) to implement a PGA and
visualise the results using disjoint minimal spanning trees (MSTs). We
demonstrate that our GPU PGA, implemented on a commercially available general
purpose GPU, is able to recover stock clusters in sub-second speed, based on a
subset of stocks in the South African market. This represents a pragmatic
choice for low-cost, scalable parallel computing and is significantly faster
than a prototype serial implementation in an optimised C-based
fourth-generation programming language, although the results are not directly
comparable due to compiler differences. Combined with fast online intraday
correlation matrix estimation from high frequency data for cluster
identification, the proposed implementation offers cost-effective,
near-real-time risk assessment for financial practitioners.
Comment: 10 pages, 5 figures, 4 tables, More thorough discussion of implementation
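The data-parallel core of a master-slave PGA is the fitness evaluation, with one GPU thread per individual. A minimal CUDA sketch follows; the quadratic objective is a hypothetical stand-in for the paper's log-likelihood fitness, which is not reproduced here.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Master-slave PGA "slave" step: one thread evaluates one individual.
    // The objective below (negated sum of squares) is a placeholder for the
    // paper's log-likelihood fitness function.
    __global__ void evaluateFitness(const float *population, float *fitness,
                                    int popSize, int genomeLen)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= popSize) return;
        float f = 0.0f;
        for (int g = 0; g < genomeLen; ++g) {
            float x = population[i * genomeLen + g];
            f -= x * x;   // best individual is the all-zero genome
        }
        fitness[i] = f;
    }

    int main()
    {
        const int popSize = 256, genomeLen = 16;
        float *d_pop, *d_fit;
        cudaMalloc(&d_pop, popSize * genomeLen * sizeof(float));
        cudaMalloc(&d_fit, popSize * sizeof(float));
        cudaMemset(d_pop, 0, popSize * genomeLen * sizeof(float)); // dummy genomes

        evaluateFitness<<<(popSize + 127) / 128, 128>>>(d_pop, d_fit,
                                                        popSize, genomeLen);
        float f0;
        cudaMemcpy(&f0, d_fit, sizeof(float), cudaMemcpyDeviceToHost);
        printf("fitness[0] = %f\n", f0);  // 0.0 for the all-zero genome
        cudaFree(d_pop); cudaFree(d_fit);
        return 0;
    }

Selection, crossover and mutation remain on the master in this scheme, so the GPU payoff grows with the cost of the per-individual objective.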
Performance Analysis of a Novel GPU Computation-to-core Mapping Scheme for Robust Facet Image Modeling
Though the GPGPU concept is well-known
in image processing, much more work remains to be done
to fully exploit GPUs as an alternative computation
engine. This paper investigates the computation-to-core
mapping strategies to probe the efficiency and scalability
of the robust facet image modeling algorithm on GPUs.
Our fine-grained computation-to-core mapping scheme
shows a significant performance gain over the standard
pixel-wise mapping scheme. With in-depth performance
comparisons across the two different mapping schemes,
we analyze the impact of the level of parallelism on
the GPU computation and suggest two principles for
optimizing future image processing applications on the
GPU platform.
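For contrast with the fine-grained scheme discussed above, here is a minimal CUDA sketch of the baseline pixel-wise computation-to-core mapping, with a 3x3 box filter standing in for the facet-model fit. In the fine-grained scheme, one pixel's window would instead be shared among several cooperating threads.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Pixel-wise mapping: one thread computes the full 3x3 neighborhood
    // operation for one output pixel (borders left untouched for brevity).
    __global__ void boxFilter3x3(const float *in, float *out, int w, int h)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;

        float sum = 0.0f;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx)
                sum += in[(y + dy) * w + (x + dx)];
        out[y * w + x] = sum / 9.0f;
    }

    int main()
    {
        const int w = 16, h = 16;
        float *d_in, *d_out;
        cudaMalloc(&d_in,  w * h * sizeof(float));
        cudaMalloc(&d_out, w * h * sizeof(float));
        cudaMemset(d_in, 0, w * h * sizeof(float));  // dummy all-zero image

        dim3 block(16, 16), grid(1, 1);
        boxFilter3x3<<<grid, block>>>(d_in, d_out, w, h);
        cudaDeviceSynchronize();
        printf("done\n");
        cudaFree(d_in); cudaFree(d_out);
        return 0;
    }

With an expensive per-window fit, this one-thread-per-pixel baseline leaves intra-window parallelism unexploited, which is exactly the gap a finer-grained mapping targets.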
First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC
Recent innovations focused around parallel processing, either through
systems containing multiple processors or processors containing multiple cores,
hold great promise for enhancing the performance of the trigger at the LHC and
extending its physics program. The flexibility of the CMS/ATLAS trigger system
allows for easy integration of computational accelerators, such as NVIDIA's
Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level
Trigger. These accelerators have the potential to provide faster or more energy
efficient event selection, thus opening up possibilities for new complex
triggers that were not previously feasible. At the same time, it is crucial to
explore the performance limits achievable on the latest generation multicore
CPUs with the use of the best software optimization methods. In this article, a
new tracking algorithm based on the Hough transform will be evaluated for the
first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU,
and an Intel Xeon Phi 7120 coprocessor. Preliminary time performance will be
presented.
Comment: 13 pages, 4 figures, Accepted to JINST
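The natural GPU parallelization of a Hough transform assigns one thread per hit, with atomic votes into a (theta, rho) accumulator. The sketch below shows that pattern under assumed binning constants; it is not the tracking algorithm evaluated in the paper.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define N_THETA 128
    #define N_RHO   128

    // One thread per hit votes in every theta bin; atomicAdd resolves the
    // collisions in the shared (theta, rho) accumulator.
    __global__ void houghVote(const float *xs, const float *ys, int nHits,
                              int *acc, float rhoMax)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nHits) return;
        const float PI = 3.14159265f;
        float x = xs[i], y = ys[i];
        for (int t = 0; t < N_THETA; ++t) {
            float theta = t * PI / N_THETA;
            float rho = x * cosf(theta) + y * sinf(theta);            // line parameter
            int r = (int)((rho + rhoMax) * N_RHO / (2.0f * rhoMax));  // bin index
            if (r >= 0 && r < N_RHO)
                atomicAdd(&acc[t * N_RHO + r], 1);
        }
    }

    int main()
    {
        const int nHits = 3;
        float h_x[nHits] = {1.0f, 2.0f, 3.0f};  // three hits on the line y = x
        float h_y[nHits] = {1.0f, 2.0f, 3.0f};
        float *d_x, *d_y; int *d_acc;
        cudaMalloc(&d_x, nHits * sizeof(float));
        cudaMalloc(&d_y, nHits * sizeof(float));
        cudaMalloc(&d_acc, N_THETA * N_RHO * sizeof(int));
        cudaMemcpy(d_x, h_x, nHits * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_y, h_y, nHits * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemset(d_acc, 0, N_THETA * N_RHO * sizeof(int));

        houghVote<<<1, 32>>>(d_x, d_y, nHits, d_acc, 8.0f);
        cudaDeviceSynchronize();
        printf("done\n");  // a separate reduction step would find the peak bin
        cudaFree(d_x); cudaFree(d_y); cudaFree(d_acc);
        return 0;
    }

Atomic contention on hot bins is the main scaling concern in this pattern, which is why trigger-oriented implementations often privatize accumulators in shared memory before merging.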
Parallel 3D Fast Wavelet Transform comparison on CPUs and GPUs
We present in this paper several implementations of the 3D Fast Wavelet Transform (3D-FWT) on multicore CPUs and manycore GPUs. On the GPU side, we focus on CUDA and OpenCL programming to develop methods for an efficient mapping on manycores. On multicore CPUs, OpenMP and Pthreads are used as counterparts to maximize parallelism, and renowned techniques like tiling and blocking are exploited to optimize memory usage. We evaluate these proposals and compare a new Fermi Tesla C2050 against an Intel Core 2 Quad Q6700. The CUDA version achieves the best results, improving CPU execution times by factors of 5.3x to 7.4x for different image sizes, and by up to 81x when communication costs are neglected. Meanwhile, OpenCL obtains solid gains, ranging from 2x on small frame sizes to 3x on larger ones.
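As a sketch of the kernel-level building block: a 3D-FWT applies a 1D filtering step along each axis in turn. The minimal CUDA kernel below performs one orthonormal Haar analysis step, with one thread producing one approximation/detail pair; the paper's actual wavelet filters and memory optimizations (tiling, blocking) are not reproduced.

    #include <cstdio>
    #include <cuda_runtime.h>

    // One Haar analysis step over a 1D signal of even length 2*half:
    // out[0..half) holds the low-pass (approximation) coefficients and
    // out[half..2*half) the high-pass (detail) coefficients.
    __global__ void haarStep(const float *in, float *out, int half)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= half) return;
        float a = in[2 * i];
        float b = in[2 * i + 1];
        const float s = 0.70710678f;   // 1/sqrt(2), orthonormal scaling
        out[i]        = (a + b) * s;   // approximation
        out[half + i] = (a - b) * s;   // detail
    }

    int main()
    {
        const int n = 8, half = n / 2;
        float h_in[n] = {1, 1, 2, 2, 3, 3, 4, 4}, h_out[n];
        float *d_in, *d_out;
        cudaMalloc(&d_in,  n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));
        cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

        haarStep<<<1, half>>>(d_in, d_out, half);
        cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("detail[0] = %f\n", h_out[half]);  // pairs are equal -> 0.0
        cudaFree(d_in); cudaFree(d_out);
        return 0;
    }

In the 3D case the strided accesses along the slower axes dominate the cost, which is where the tiling and blocking techniques mentioned above pay off.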
High Performance Twitter Sentiment Analysis Using CUDA Based Distance Kernel on GPUs
Sentiment analysis techniques are widely used for extracting the feelings of users in different domains such as social media content, surveys, and user reviews. This is mostly performed using classical text classification techniques. One of the major challenges in this field is the large and sparse feature space that stems from the sparse representation of texts. The high dimensionality of the feature space creates a serious problem in terms of time and performance for sentiment analysis. This is particularly important when the selected classifier requires intense calculations, as k-NN does. To cope with this problem, we applied sentiment analysis techniques to Turkish Twitter feeds using NVIDIA's CUDA technology. We employed our CUDA-based distance kernel implementation for k-NN, a widely used lazy classifier in this field. We conducted our experiments on four machines with different computing capacities in terms of GPU and CPU configuration to analyze the impact on speed-up.
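A minimal sketch of the kind of CUDA distance kernel described, assuming dense feature vectors and brute-force k-NN: one thread computes the squared Euclidean distance from the query to one training sample, and selecting the k smallest distances is a separate step. Since sentiment feature vectors are typically sparse, the paper's kernel likely differs in data representation.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Brute-force k-NN distance kernel: one thread per training sample.
    // Squared Euclidean distance suffices for nearest-neighbor ranking.
    __global__ void distanceKernel(const float *samples, // numSamples x dim
                                   const float *query, float *dist,
                                   int numSamples, int dim)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numSamples) return;
        float d = 0.0f;
        for (int j = 0; j < dim; ++j) {
            float diff = samples[i * dim + j] - query[j];
            d += diff * diff;
        }
        dist[i] = d;
    }

    int main()
    {
        const int numSamples = 2, dim = 3;
        float h_samples[numSamples * dim] = {0, 0, 0, 1, 1, 1};
        float h_query[dim] = {1, 1, 1}, h_dist[numSamples];
        float *d_samples, *d_query, *d_dist;
        cudaMalloc(&d_samples, numSamples * dim * sizeof(float));
        cudaMalloc(&d_query, dim * sizeof(float));
        cudaMalloc(&d_dist, numSamples * sizeof(float));
        cudaMemcpy(d_samples, h_samples, numSamples * dim * sizeof(float),
                   cudaMemcpyHostToDevice);
        cudaMemcpy(d_query, h_query, dim * sizeof(float),
                   cudaMemcpyHostToDevice);

        distanceKernel<<<1, 32>>>(d_samples, d_query, d_dist, numSamples, dim);
        cudaMemcpy(h_dist, d_dist, numSamples * sizeof(float),
                   cudaMemcpyDeviceToHost);
        printf("dist = %.1f %.1f\n", h_dist[0], h_dist[1]);  // 3.0 0.0
        cudaFree(d_samples); cudaFree(d_query); cudaFree(d_dist);
        return 0;
    }

Because k-NN defers all work to query time, the distance computation dominates, which is why offloading exactly this kernel to the GPU yields the speed-ups studied in the paper.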
- …