2 research outputs found
Faster and Cheaper: Parallelizing Large-Scale Matrix Factorization on GPUs
Matrix factorization (MF) underlies many popular algorithms, e.g.,
collaborative filtering. Emerging GPU technology, with its massively parallel
cores and high intra-chip memory bandwidth but limited memory capacity,
presents an opportunity to accelerate MF substantially when the GPU's
architectural characteristics are exploited appropriately.
This paper presents cuMF, a CUDA-based matrix factorization library that
implements a memory-optimized alternating least squares (ALS) method to solve
very large-scale MF problems. CuMF uses a variety of techniques to maximize
performance on single or multiple GPUs. These techniques include smart access
to sparse data that leverages the GPU memory hierarchy, data parallelism in
conjunction with model parallelism, minimized communication overhead between
computing units, and a novel topology-aware parallel reduction scheme.
With only a single machine with four Nvidia GPU cards, cuMF is 6-10 times as
fast, and 33-100 times as cost-efficient, as state-of-the-art distributed CPU
solutions. Moreover, cuMF can solve the largest matrix factorization problem
yet reported in the literature while maintaining good performance.
Comment: 12 pages, 11 figures
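To illustrate the ALS method the abstract refers to, here is a minimal dense NumPy sketch of alternating least squares for factorizing a rating matrix R into P and Q with R ≈ P Qᵀ. This is not cuMF's implementation: cuMF operates on sparse observed entries across GPUs, whereas this sketch treats every entry of R as observed and runs on the CPU; the function name `als` and all parameters are illustrative.

```python
import numpy as np

def als(R, k=2, reg=0.1, iters=20, seed=0):
    """Alternating least squares: factor R (m x n) as P (m x k) @ Q.T (k x n).

    Simplified dense sketch: assumes every entry of R is observed,
    unlike real recommender-system settings with sparse ratings.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.standard_normal((m, k))
    Q = rng.standard_normal((n, k))
    I = reg * np.eye(k)  # ridge regularizer keeps each solve well-posed
    for _ in range(iters):
        # Fix Q: each row of P solves a regularized least-squares problem,
        # P = R Q (Q^T Q + reg I)^{-1}
        P = np.linalg.solve(Q.T @ Q + I, Q.T @ R.T).T
        # Fix P: symmetric update for Q,
        # Q = R^T P (P^T P + reg I)^{-1}
        Q = np.linalg.solve(P.T @ P + I, P.T @ R).T
    return P, Q
```

Each half-step is a closed-form ridge-regression solve, which is why ALS parallelizes well: with one factor fixed, every row of the other factor can be computed independently — the property cuMF exploits across GPU cores.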
Parallel and Distributed Collaborative Filtering: A Survey
Collaborative filtering is among the most widely used techniques for
implementing recommender systems. Recently, interest has grown in parallel and
distributed implementations of collaborative filtering algorithms. This work
surveys parallel and distributed collaborative filtering implementations,
aiming not only to provide a comprehensive presentation of the field's
development, but also to suggest directions for future research by
highlighting the issues that remain open.
Comment: 46 pages