Matrix-Vector Multiplication Based on Pre-Partitioned Graphs for Large-Scale Graph Mining
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, February 2018. Advisor: U Kang.

How can we analyze enormous networks, including the Web and social networks, which have hundreds of billions of nodes and edges?
Network analyses have been conducted with various graph mining methods, including shortest path computation, PageRank, connected component computation, and random walk with restart.
These graph mining methods can be expressed as generalized matrix-vector multiplication, which consists of a few operations inspired by standard matrix-vector multiplication.
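The generalization mentioned here follows the GIM-V interface listed in Section 2.3, which replaces the multiply and sum of ordinary matrix-vector multiplication with user-defined combine2, combineAll, and assign operations. Below is a minimal Python sketch of one such iteration (the edge-list representation and function names are illustrative, not the thesis's implementation), instantiated for connected-component computation by propagating minimum component ids:

```python
# Sketch of one iteration of generalized matrix-vector multiplication
# (GIM-V style): v' = M x_G v, where x_G is defined by three operations.
def gim_v(edges, v, combine2, combine_all, assign):
    """edges: list of (i, j) pairs, i.e. m_ij = 1; v: dict node -> value."""
    partial = {}
    for i, j in edges:
        # combine2 plays the role of m_ij * v_j in ordinary multiplication
        partial.setdefault(i, []).append(combine2(1, v[j]))
    # combine_all aggregates the partial results (the role of summation);
    # assign merges the aggregate with the node's current value
    return {i: assign(v[i], combine_all(partial.get(i, [v[i]])))
            for i in v}

# Connected components: each node keeps the minimum id seen so far.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (3, 4), (4, 3)]
v = {i: i for i in range(5)}          # start: each node is its own id
for _ in range(5):                    # iterate until ids stop changing
    v = gim_v(edges, v,
              combine2=lambda m, vj: vj,
              combine_all=min,
              assign=min)
# v -> {0: 0, 1: 0, 2: 0, 3: 3, 4: 3}: two components, {0,1,2} and {3,4}
```

Swapping in other combine2/combineAll/assign triples yields PageRank, single-source shortest paths, and the other applications listed in Section 2.4.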
Recently, several graph processing systems based on matrix-vector multiplication or their own primitives have been proposed to deal with large graphs; however, they all fail on Web-scale graphs due to insufficient memory or a lack of consideration for I/O costs.
In this thesis, we propose PMV (Pre-partitioned generalized Matrix-Vector multiplication), a scalable method for graph mining on distributed systems based on generalized matrix-vector multiplication.
PMV significantly decreases the communication cost, which is the main bottleneck of distributed systems, by partitioning the input graph in advance and judiciously applying execution strategies based on the density of the pre-partitioned sub-matrices.
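As a toy illustration of the pre-partitioning idea (the block layout, partitioning rule, and threshold below are assumptions for the sketch, not the thesis's actual scheme), the input matrix can be split into p × p sub-matrix blocks whose densities then drive the choice of execution strategy:

```python
from collections import defaultdict

def pre_partition(edges, n, p):
    """Group edges (i, j) of an n-node graph into p x p sub-matrix blocks
    by a simple range partitioning of source and destination ids."""
    blocks = defaultdict(list)
    for i, j in edges:
        blocks[(i * p // n, j * p // n)].append((i, j))
    return blocks

def choose_strategy(block_edges, block_dim, theta):
    """Pick an execution path from block density; theta is a tunable
    threshold (cf. Section 4.5)."""
    density = len(block_edges) / (block_dim * block_dim)
    return "dense" if density >= theta else "sparse"

edges = [(0, 1), (0, 2), (1, 0), (1, 3), (5, 6)]
blocks = pre_partition(edges, n=8, p=2)
plan = {b: choose_strategy(e, block_dim=4, theta=0.2)
        for b, e in blocks.items()}
# block (0, 0) holds 4 of 16 possible edges -> density 0.25 -> "dense"
```

Because the partitioning is done once, in advance, each iteration of the multiplication only ships vector segments between machines rather than re-shuffling the matrix, which is what cuts the communication cost.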
Experiments show that PMV succeeds in processing graphs up to 16x larger than existing distributed memory-based graph mining methods, and requires 9x less time than previous disk-based graph mining methods by significantly reducing I/O costs.

I. Introduction
II. Background and Related Works
2.1 Large-scale Graph Processing Systems
2.2 MapReduce and Spark
2.3 Generalized Iterative Matrix-Vector Multiplication (GIM-V)
2.4 Applications of GIM-V
2.4.1 PageRank
2.4.2 Random Walk with Restart
2.4.3 Connected Component
2.4.4 Single Source Shortest Path
2.4.5 K-step Neighbors
2.4.6 Diameter Estimation
III. Proposed Method
3.1 PMV: Pre-partitioned Generalized Matrix-Vector Multiplication
3.1.1 Pre-partitioning
3.1.2 Iterative Multiplication
3.2 PMV-horizontal: Horizontal Matrix Placement
3.3 PMV-vertical: Vertical Matrix Placement
3.4 PMV-selective: Selecting the Best Method between PMV-horizontal and PMV-vertical
3.5 PMV-hybrid: Using PMV-horizontal and PMV-vertical Together
3.6 Implementation
3.6.1 PMV on Hadoop
3.6.2 PMV on Spark
IV. Experiments
4.1 Datasets
4.2 Environment
4.3 Performance of PMV
4.4 Effect of Matrix Density
4.5 Effect of Threshold θ
4.6 Machine Scalability
4.7 Underlying Engine
V. Conclusion
References
Abstract in Korean