12,028 research outputs found
Algorithms for Extracting Frequent Episodes in the Process of Temporal Data Mining
An important aspect of the data mining process is the discovery of patterns that strongly influence the studied problem. The purpose of this paper is to study frequent episode mining through the use of parallel pattern discovery algorithms. Parallel pattern discovery algorithms offer better performance and scalability, so they are of great interest to the data mining research community. In what follows, several parallel and distributed frequent pattern mining algorithms on various platforms are highlighted, and a comparative study of their main features is presented. The study takes into account the new possibilities that arise with the emerging Compute Unified Device Architecture (CUDA) of the latest generation of graphics processing units. Given their high performance, low cost and growing feature set, GPUs are viable candidates for an optimal implementation of frequent pattern mining algorithms.
Keywords: Frequent Pattern Mining, Parallel Computing, Dynamic Load Balancing, Temporal Data Mining, CUDA, GPU, Fermi, Thread
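Since the paper surveys algorithms rather than prescribing one, here is a minimal CUDA sketch of the window-based (WINEPI-style) counting step that frequent episode miners commonly parallelise: each GPU thread tests one sliding window of the event sequence for an in-order occurrence of a serial episode. All names and the toy data are illustrative, not taken from the paper.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per sliding window: scan the window left to right and
// advance through the episode's event types in order (WINEPI-style).
__global__ void countWindows(const int *events, int numEvents, int windowLen,
                             const int *episode, int episodeLen,
                             unsigned int *support) {
    int start = blockIdx.x * blockDim.x + threadIdx.x;
    if (start + windowLen > numEvents) return;

    int matched = 0;  // next episode symbol to match
    for (int i = start; i < start + windowLen && matched < episodeLen; ++i)
        if (events[i] == episode[matched]) ++matched;

    if (matched == episodeLen)  // the episode occurs in this window
        atomicAdd(support, 1u);
}

int main() {
    const int n = 10, w = 4;
    int hostEvents[n]  = {1, 2, 3, 1, 2, 1, 3, 2, 3, 1};
    int hostEpisode[2] = {1, 3};  // serial episode: event 1 then event 3

    int *devEvents, *devEpisode;
    unsigned int *devSupport, hostSupport = 0;
    cudaMalloc(&devEvents, sizeof(hostEvents));
    cudaMalloc(&devEpisode, sizeof(hostEpisode));
    cudaMalloc(&devSupport, sizeof(hostSupport));
    cudaMemcpy(devEvents, hostEvents, sizeof(hostEvents), cudaMemcpyHostToDevice);
    cudaMemcpy(devEpisode, hostEpisode, sizeof(hostEpisode), cudaMemcpyHostToDevice);
    cudaMemset(devSupport, 0, sizeof(hostSupport));

    countWindows<<<1, 256>>>(devEvents, n, w, devEpisode, 2, devSupport);
    cudaMemcpy(&hostSupport, devSupport, sizeof(hostSupport), cudaMemcpyDeviceToHost);
    printf("windows containing the episode: %u\n", hostSupport);

    cudaFree(devEvents); cudaFree(devEpisode); cudaFree(devSupport);
    return 0;
}
```

One window per thread is embarrassingly parallel, which is why this counting step is a natural first candidate for the GPU offloading the paper discusses; candidate generation and load balancing across episodes remain the harder part.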
Real-time High Resolution Fusion of Depth Maps on GPU
A system for live, high-quality surface reconstruction using a single moving depth camera on commodity hardware is presented. High accuracy and a real-time frame rate are achieved by exploiting graphics hardware via OpenCL and by using a sparse data structure for the volumetric surface representation. The depth sensor pose is estimated by combining a serial texture registration algorithm with an iterative closest point (ICP) algorithm that aligns the acquired depth map to the estimated scene model. The aligned surface is then fused into the scene, with a Kalman filter used to improve fusion quality. The surface is represented by a truncated signed distance function (TSDF) stored as a block-based sparse buffer. The use of a sparse data structure greatly increases the accuracy of scanned surfaces and the maximum scanning area. Traditional GPU implementations of volumetric rendering and fusion algorithms were modified to exploit sparsity and achieve the desired performance. Incorporating texture registration for sensor pose estimation and a Kalman filter for measurement integration improved the accuracy and robustness of the scanning process.
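For context, the core of TSDF-based depth fusion can be sketched as a per-voxel update kernel. The sketch below substitutes a dense grid and the standard weighted-running-average rule for the paper's block-based sparse buffer and Kalman filter (the average is what a scalar Kalman update reduces to when all measurements share one variance), and CUDA for the paper's OpenCL; the camera intrinsics and grid parameters are made-up placeholders.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

struct Voxel { float tsdf; float weight; };

// Clamp the signed distance to the truncation band [-1, 1] (in units of mu).
__device__ float truncateSdf(float sdf, float mu) {
    return fminf(1.0f, fmaxf(-1.0f, sdf / mu));
}

// One thread per (x, y) voxel column; identity camera pose for brevity.
__global__ void fuseDepth(Voxel *grid, int dim, float voxelSize, float mu,
                          const float *depth, int width, int height,
                          float fx, float fy, float cx, float cy) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dim || y >= dim) return;

    for (int z = 0; z < dim; ++z) {
        float px = (x - dim / 2) * voxelSize;   // voxel center, camera frame
        float py = (y - dim / 2) * voxelSize;
        float pz = z * voxelSize + 0.4f;        // grid offset in front of camera
        int u = (int)(fx * px / pz + cx);       // project into the depth map
        int v = (int)(fy * py / pz + cy);
        if (u < 0 || u >= width || v < 0 || v >= height) continue;

        float d = depth[v * width + u];
        if (d <= 0.0f) continue;                // invalid measurement
        float sdf = d - pz;                     // signed distance along the ray
        if (sdf < -mu) continue;                // far behind the surface: skip

        Voxel &vox = grid[(z * dim + y) * dim + x];
        float t = truncateSdf(sdf, mu);
        // weighted running average of TSDF measurements
        vox.tsdf = (vox.tsdf * vox.weight + t) / (vox.weight + 1.0f);
        vox.weight = fminf(vox.weight + 1.0f, 64.0f);
    }
}

int main() {
    const int dim = 64, width = 64, height = 48;
    Voxel *grid; float *depth;
    cudaMallocManaged(&grid, dim * dim * dim * sizeof(Voxel));
    cudaMallocManaged(&depth, width * height * sizeof(float));
    cudaMemset(grid, 0, dim * dim * dim * sizeof(Voxel));
    for (int i = 0; i < width * height; ++i) depth[i] = 0.6f;  // flat wall

    dim3 block(16, 16), blocks((dim + 15) / 16, (dim + 15) / 16);
    fuseDepth<<<blocks, block>>>(grid, dim, 0.01f, 0.03f, depth, width, height,
                                 50.0f, 50.0f, width / 2.0f, height / 2.0f);
    cudaDeviceSynchronize();
    printf("voxel on the wall: tsdf = %f\n",
           grid[(20 * dim + dim / 2) * dim + dim / 2].tsdf);
    cudaFree(grid); cudaFree(depth);
    return 0;
}
```

The sparse-buffer variant the paper describes would replace the dense `grid` indexing with a block lookup, touching only blocks near the surface; that is where its accuracy and scanning-volume gains come from.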
Acceleration of stereo-matching on multi-core CPU and GPU
This paper presents an accelerated version of a dense stereo-correspondence algorithm for two parallelism-enabled architectures: multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot head in the context of the CloPeMa research project, which focuses on the design of a new clothes-folding robot with real-time, high-resolution requirements for the vision system. The performance analysis shows that the parallelised stereo-matching algorithm is significantly accelerated, sustaining speed-ups of 12x on the multi-core CPU and 176x on the GPU compared with a non-SIMD single-thread CPU implementation. To analyse the origin of the speed-up and gain a deeper understanding of the optimal hardware choice, the algorithm was broken into key sub-tasks and its performance was tested on four different hardware architectures.
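The paper does not reproduce its algorithm in code, but the data-parallel structure behind such speed-ups can be illustrated with a minimal CUDA block-matching kernel: one thread per pixel, each searching for the disparity that minimises a sum of absolute differences (SAD). The window size, disparity range, and synthetic test images below are illustrative choices, not the CloPeMa system's.

```cuda
#include <cstdio>
#include <climits>
#include <cuda_runtime.h>

#define RADIUS   2   // (2*RADIUS+1)^2 support window
#define MAX_DISP 32  // disparity search range in pixels

// One thread per pixel: exhaustive SAD search over the disparity range.
__global__ void sadStereo(const unsigned char *left, const unsigned char *right,
                          unsigned char *disp, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < MAX_DISP + RADIUS || x >= width - RADIUS ||
        y < RADIUS || y >= height - RADIUS) return;

    int bestDisp = 0, bestCost = INT_MAX;
    for (int d = 0; d < MAX_DISP; ++d) {
        int cost = 0;
        for (int dy = -RADIUS; dy <= RADIUS; ++dy)
            for (int dx = -RADIUS; dx <= RADIUS; ++dx) {
                int diff = left[(y + dy) * width + (x + dx)]
                         - right[(y + dy) * width + (x + dx - d)];
                cost += diff < 0 ? -diff : diff;
            }
        if (cost < bestCost) { bestCost = cost; bestDisp = d; }
    }
    disp[y * width + x] = (unsigned char)bestDisp;
}

int main() {
    const int w = 128, h = 64;
    unsigned char *L, *R, *D;
    cudaMallocManaged(&L, w * h);
    cudaMallocManaged(&R, w * h);
    cudaMallocManaged(&D, w * h);
    for (int i = 0; i < w * h; ++i) {   // synthetic pair: 5-pixel shift
        int x = i % w, y = i / w;
        L[i] = (unsigned char)((x * 7 + y * 3) % 256);
        R[i] = (unsigned char)(((x + 5) * 7 + y * 3) % 256);
    }
    dim3 block(16, 16), blocks((w + 15) / 16, (h + 15) / 16);
    sadStereo<<<blocks, block>>>(L, R, D, w, h);
    cudaDeviceSynchronize();
    printf("disparity at (64,32): %d (expected 5)\n", D[32 * w + 64]);
    cudaFree(L); cudaFree(R); cudaFree(D);
    return 0;
}
```

Because every pixel's disparity search is independent, the kernel scales with core count, which is consistent with the large GPU-over-CPU gap the paper reports for this class of algorithm.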
GPUs as Storage System Accelerators
Massively multicore processors, such as Graphics Processing Units (GPUs), provide, at a comparable price, peak performance one order of magnitude higher than traditional CPUs. This drop in the cost of computation, like any order-of-magnitude drop in the cost per unit of performance for a class of system components, creates the opportunity to redesign systems and to explore new ways to engineer them to recalibrate the cost-to-performance relation. This project explores the feasibility of harnessing GPUs' computational power to improve the performance, reliability, or security of distributed storage systems. In this context, we present the design of a storage system prototype that uses GPU offloading to accelerate a number of computationally intensive primitives based on hashing, and introduce techniques to efficiently leverage the processing power of GPUs. We evaluate the performance of this prototype under two configurations: as a content addressable storage system that facilitates online similarity detection between successive versions of the same file, and as a traditional system that uses hashing to preserve data integrity. Further, we evaluate the impact of offloading to the GPU on the performance of competing applications. Our results show that this technique can bring tangible performance gains without negatively impacting the performance of concurrently running applications.
Comment: IEEE Transactions on Parallel and Distributed Systems, 201
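A minimal sketch of the offloading pattern described above: one GPU thread hashes each fixed-size chunk of a buffer, so fingerprints for similarity detection or integrity checks are computed in parallel. FNV-1a stands in for the cryptographic hashes a real content addressable store would use, and all sizes are illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// 64-bit FNV-1a over one chunk; a placeholder for a cryptographic hash.
__device__ unsigned long long fnv1a(const unsigned char *data, int len) {
    unsigned long long h = 1469598103934665603ULL;   // FNV offset basis
    for (int i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 1099511628211ULL;                       // FNV prime
    }
    return h;
}

// One thread per chunk: the whole buffer is fingerprinted in one launch.
__global__ void hashChunks(const unsigned char *buf, int chunkSize,
                           int numChunks, unsigned long long *digests) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c < numChunks)
        digests[c] = fnv1a(buf + c * chunkSize, chunkSize);
}

int main() {
    const int chunkSize = 4096, numChunks = 256;
    unsigned char *buf; unsigned long long *digests;
    cudaMallocManaged(&buf, chunkSize * numChunks);
    cudaMallocManaged(&digests, numChunks * sizeof(unsigned long long));
    for (int i = 0; i < chunkSize * numChunks; ++i)
        buf[i] = (unsigned char)(i % 251);

    hashChunks<<<(numChunks + 127) / 128, 128>>>(buf, chunkSize,
                                                 numChunks, digests);
    cudaDeviceSynchronize();
    // identical chunks hash identically, enabling similarity detection
    printf("digest[0] = %llx, digest[1] = %llx\n", digests[0], digests[1]);
    cudaFree(buf); cudaFree(digests);
    return 0;
}
```

In a storage pipeline like the one evaluated above, the interesting engineering is around this kernel rather than inside it: batching chunks to amortise transfer costs and overlapping copies with computation so the GPU does not starve.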
Developing serious games for cultural heritage: a state-of-the-art review
Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning or enhancing museum visits, has been less well considered. The state of the art in serious game technology is identical to the state of the art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality, and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we focus on the state of the art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of the existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods, and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.
Master slave en-face OCT/SLO
Master Slave optical coherence tomography (MS-OCT) is an OCT method that does not require resampling of data and can deliver en-face images from several depths simultaneously. Because the MS-OCT method demands significant computational resources, the number of multiple-depth en-face images that can be produced in real time is limited. Here, we demonstrate progress in exploiting the parallel processing feature of the MS-OCT technology. Harnessing the capabilities of graphics processing units (GPUs), information from 384 depth positions is acquired in one raster, with real-time display of up to 40 en-face OCT images. These exhibit resolution and sensitivity comparable to images produced using the conventional Fourier domain based method. The GPU enables versatile real-time selection of parameters, such as the depth positions of the 40 images out of the set of 384 depth locations, as well as their axial resolution. In each updated displayed frame, in parallel with the 40 en-face OCT images, a scanning laser ophthalmoscopy (SLO) lookalike image is presented together with two B-scan OCT images oriented along orthogonal directions. The thickness of the SLO lookalike image is dynamically determined by the number of en-face OCT images displayed in the frame and the differential axial distance between them.
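The parallelism described above maps naturally onto one GPU thread per (A-scan, depth) pair. The sketch below is a simplified, real-valued version of the Master Slave comparison step, correlating each channeled spectrum with a stored per-depth mask; actual MS-OCT uses complex-valued masks, and all dimensions and names here are illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per (A-scan, depth) pair: correlate the acquired channeled
// spectrum with the stored mask for that depth to get one en-face pixel.
__global__ void msCorrelate(const float *spectra,  // [numAscans][specLen]
                            const float *masks,    // [numDepths][specLen]
                            float *enface,         // [numDepths][numAscans]
                            int numAscans, int numDepths, int specLen) {
    int a = blockIdx.x * blockDim.x + threadIdx.x;  // A-scan index
    int d = blockIdx.y;                             // depth index
    if (a >= numAscans || d >= numDepths) return;

    float acc = 0.0f;
    for (int k = 0; k < specLen; ++k)
        acc += spectra[a * specLen + k] * masks[d * specLen + k];
    enface[d * numAscans + a] = fabsf(acc);
}

int main() {
    const int numAscans = 512, numDepths = 384, specLen = 1024;
    float *spectra, *masks, *enface;
    cudaMallocManaged(&spectra, numAscans * specLen * sizeof(float));
    cudaMallocManaged(&masks, numDepths * specLen * sizeof(float));
    cudaMallocManaged(&enface, numDepths * numAscans * sizeof(float));
    for (int i = 0; i < numAscans * specLen; ++i) spectra[i] = 1.0f;
    for (int i = 0; i < numDepths * specLen; ++i) masks[i] = 0.001f;

    dim3 block(128), grid((numAscans + 127) / 128, numDepths);
    msCorrelate<<<grid, block>>>(spectra, masks, enface,
                                 numAscans, numDepths, specLen);
    cudaDeviceSynchronize();
    printf("en-face pixel (depth 0, A-scan 0): %f\n", enface[0]);
    cudaFree(spectra); cudaFree(masks); cudaFree(enface);
    return 0;
}
```

Because every depth is an independent correlation, computing 384 depths and displaying a chosen subset of 40, as the abstract describes, is a matter of launch geometry rather than extra algorithmic work.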