Scalable Breadth-First Search on a GPU Cluster
On a GPU cluster, the ratio of high computing power to communication
bandwidth makes scaling breadth-first search (BFS) on a scale-free graph
extremely challenging. By separating high- and low-out-degree vertices, we
present an implementation with scalable computation and a model for scalable
communication for BFS and direction-optimized BFS. Our communication model uses
global reduction for high-degree vertices, and point-to-point transmission for
low-degree vertices. Leveraging the characteristics of degree separation, we
reduce the graph size to one-third of that of the conventional edge-list
representation. With several other optimizations, we observe linear weak
scaling as we increase the number of GPUs, and achieve 259.8 GTEPS on a
scale-33 Graph500 RMAT graph with 124 GPUs on the latest CORAL early access
system.
Comment: 12 pages, 13 figures. To appear at IPDPS 201
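
The two-path communication model lends itself to a short sketch. The single-process Python simulation below shows how one BFS level's frontier could be split so that high-out-degree vertices contribute to a single global reduction while low-out-degree vertices are forwarded point-to-point to the owning partition; NUM_GPUS, DEGREE_THRESHOLD, owner() and the set-based buffers are illustrative assumptions, not the paper's data structures.

    # Single-process sketch of the two-path communication model: neighbours of
    # high-out-degree frontier vertices are combined with one global reduction,
    # while low-out-degree vertices forward their neighbours point-to-point to
    # the owning partition only. All names here are illustrative assumptions.
    NUM_GPUS = 4
    DEGREE_THRESHOLD = 32  # out-degree above which a vertex counts as high-degree

    def owner(v):
        """Illustrative 1-D partition: vertex v is owned by GPU v % NUM_GPUS."""
        return v % NUM_GPUS

    def exchange_frontier(local_frontiers, adjacency):
        """Simulate one BFS level's communication across NUM_GPUS partitions.

        local_frontiers[g] is the set of frontier vertices discovered on GPU g;
        adjacency maps a vertex to its out-neighbour list. Returns, per GPU,
        the set of owned vertices to visit in the next level.
        """
        reduced = set()                         # filled via the "global reduction"
        p2p = [set() for _ in range(NUM_GPUS)]  # one send buffer per destination GPU

        for frontier in local_frontiers:
            for v in frontier:
                neighbours = adjacency.get(v, [])
                if len(neighbours) > DEGREE_THRESHOLD:
                    # High-degree vertex: contribute its neighbours to a single
                    # all-reduce instead of sending per-edge messages.
                    reduced.update(neighbours)
                else:
                    # Low-degree vertex: send each neighbour only to its owner.
                    for u in neighbours:
                        p2p[owner(u)].add(u)

        # Each GPU receives the reduced set plus its own point-to-point buffer
        # and keeps only the vertices it actually owns.
        return [{u for u in reduced | p2p[g] if owner(u) == g}
                for g in range(NUM_GPUS)]
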
An ontology enhanced parallel SVM for scalable spam filter training
Spam, in a variety of shapes and forms, continues to inflict increasing damage. Varying approaches, including Support Vector Machine (SVM) techniques, have been proposed for spam filter training and classification. However, SVM training is a computationally intensive process. This paper presents a MapReduce-based parallel SVM algorithm for scalable spam filter training. By distributing, processing and optimizing the subsets of the training data across multiple participating computer nodes, the parallel SVM reduces the training time significantly. Ontology semantics are employed to minimize the impact of accuracy degradation when distributing the training data among a number of SVM classifiers. Experimental results show that ontology-based augmentation improves the accuracy of the parallel SVM beyond that of the original sequential counterpart.
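
As a rough illustration of the distribute-and-combine idea, the sketch below splits the training set into partitions, trains a local SVM on each (the map step), and refits a global SVM on the union of the local support vectors (the reduce step). This is a generic cascade-style sketch using scikit-learn, not the paper's MapReduce implementation, and it omits the ontology-based semantic enrichment of the feature vectors.

    # Sketch of the distribute-and-combine idea: split the training data into
    # partitions, train a local SVM on each ("map"), then refit a global SVM on
    # the pooled support vectors ("reduce"). Illustrative only; not the paper's
    # MapReduce implementation, and the ontology enrichment is omitted.
    import numpy as np
    from sklearn.svm import SVC

    def map_train(X_part, y_part):
        """Map step: train a local SVM on one partition and keep its support vectors."""
        clf = SVC(kernel="linear").fit(X_part, y_part)
        return X_part[clf.support_], y_part[clf.support_]

    def reduce_train(partials):
        """Reduce step: refit a single SVM on the union of the local support vectors."""
        X_sv = np.vstack([X for X, _ in partials])
        y_sv = np.concatenate([y for _, y in partials])
        return SVC(kernel="linear").fit(X_sv, y_sv)

    def parallel_svm(X, y, n_partitions=4):
        parts = zip(np.array_split(X, n_partitions), np.array_split(y, n_partitions))
        return reduce_train([map_train(Xp, yp) for Xp, yp in parts])

In the paper's setting each map step would run on a separate node; here the partitions are processed in a simple loop, so the sketch only illustrates the data flow, not the speed-up.
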
Introducing Molly: Distributed Memory Parallelization with LLVM
Programming for distributed-memory machines has always been a tedious but
necessary task, because compilers have not been able to optimize sufficiently for
such machines themselves. Molly is an extension to the LLVM compiler toolchain
that is able to distribute and reorganize workload and data if the program is
organized in statically determined loop control flows. These are represented as
polyhedral integer-point sets to which program transformations can be applied.
Memory distribution and layout can be declared by the programmer as
needed, and the necessary asynchronous MPI communication is generated
automatically. The primary motivation is to run Lattice QCD simulations on IBM
Blue Gene/Q supercomputers, but since the implementation is not yet complete,
this paper demonstrates its capabilities on Conway's Game of Life.
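
To make the idea concrete, the sketch below shows the kind of code such a tool would effectively generate for a block-distributed Game of Life: each rank owns a slab of rows, pulls one halo row from each neighbour (the communication a polyhedral analysis would derive from the stencil footprint), and then updates its own cells locally. Ranks are simulated in a single Python process with NumPy; names, the 1-D block distribution, and the dead-cell boundary are illustrative assumptions, and in the real setting the halo exchange would be asynchronous MPI messages.

    # Sketch of block-distributed Game of Life with explicit halo exchange.
    # Each "rank" owns a horizontal slab of a 0/1 grid; ranks are simulated in
    # one process, and cells beyond the global edges are treated as dead.
    import numpy as np

    def life_step_distributed(slabs):
        """slabs[r] holds the rows owned by rank r; returns the next generation."""
        n = len(slabs)
        new_slabs = []
        for r, slab in enumerate(slabs):
            rows, cols = slab.shape
            # "Generated" halo exchange: one boundary row from each neighbour.
            top = slabs[r - 1][-1] if r > 0 else np.zeros(cols, dtype=slab.dtype)
            bottom = slabs[r + 1][0] if r < n - 1 else np.zeros(cols, dtype=slab.dtype)
            padded = np.pad(np.vstack([top, slab, bottom]), ((0, 0), (1, 1)))
            # Local stencil: count the live neighbours of every owned cell.
            neigh = sum(padded[1 + dr:1 + dr + rows, 1 + dc:1 + dc + cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
            # Standard Life rule: birth on 3 neighbours, survival on 2 or 3.
            new_slabs.append(((neigh == 3) | ((slab == 1) & (neigh == 2))).astype(slab.dtype))
        return new_slabs
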