
Accelerate Large-Scale Iterative Computation through Asynchronous Accumulative Updates

By Yanfeng Zhang, Qixin Gao, Lixin Gao and Cuirong Wang

Abstract

A myriad of data mining algorithms in scientific computing require parsing data sets iteratively. These iterative algorithms have to be implemented in a distributed environment to scale to massive data sets. To accelerate iterative computation in a large-scale distributed environment, we identify a broad class of iterative computations that can accumulate their iterative update results. Specifically, unlike a traditional iterative computation, which updates the result based on the result from the previous iteration, an accumulative iterative update accumulates the intermediate iterative update results. We prove that an accumulative update yields the same result as its corresponding traditional iterative update. Furthermore, accumulative iterative computation can be performed asynchronously and converges much faster. We present a general computation model to describe asynchronous accumulative iterative computation. Based on this computation model, we design and implement a distributed framework, Maiter. We evaluate Maiter on the Amazon EC2 Cloud with 100 EC2 instances. Our results show that Maiter achieves as much as a 60x speedup over Hadoop for implementing iterative algorithms.
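To make the distinction concrete, the following is a minimal sketch (not Maiter's actual API) contrasting a traditional iterative update with its accumulative form on a PageRank-style computation over a small hypothetical graph. In the traditional form, each round recomputes the full result from the previous round's result; in the accumulative form, the running result is kept alongside a delta, and each round propagates only the delta before folding it into the result.

```python
DAMPING = 0.8
# Hypothetical example graph: node -> list of out-neighbors.
graph = {0: [1, 2], 1: [2], 2: [0]}
out_deg = {u: len(vs) for u, vs in graph.items()}

def traditional(iters=50):
    # x_{k+1} = b + d * M * x_k, recomputed from the full previous result.
    x = {u: 1.0 - DAMPING for u in graph}
    for _ in range(iters):
        nxt = {u: 1.0 - DAMPING for u in graph}
        for u, vs in graph.items():
            for v in vs:
                nxt[v] += DAMPING * x[u] / out_deg[u]
        x = nxt
    return x

def accumulative(iters=50):
    # Keep an accumulated result x and a delta; each round folds the
    # delta into x and propagates only the delta to neighbors.
    x = {u: 0.0 for u in graph}
    delta = {u: 1.0 - DAMPING for u in graph}
    for _ in range(iters):
        for u in graph:
            x[u] += delta[u]
        nxt = {u: 0.0 for u in graph}
        for u, vs in graph.items():
            for v in vs:
                nxt[v] += DAMPING * delta[u] / out_deg[u]
        delta = nxt
    return x
```

Both versions converge to the same fixed point, which illustrates the equivalence the paper proves; the accumulative form is what permits asynchronous execution, since deltas can be applied in any order once updates are commutative and associative.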

Categories and Subject Descriptors: C.2.4 [Distributed Systems]: Distributed applications
General Terms: Algorithms, Design, Theory, Performance
Keywords: asynchronous accumulative update
Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.353.1879
Provided by: CiteSeerX