Search Efficient Binary Network Embedding
Traditional network embedding primarily focuses on learning a dense vector
representation for each node, which encodes network structure and/or node
content information, such that off-the-shelf machine learning algorithms can be
easily applied to the vector-format node representations for network analysis.
However, the learned dense vector representations are inefficient for
large-scale similarity search, which requires finding the nearest neighbors
measured by Euclidean distance in a continuous vector space. In this paper, we
propose a search efficient binary network embedding algorithm called BinaryNE
to learn a sparse binary code for each node, by simultaneously modeling node
context relations and node attribute relations through a three-layer neural
network. BinaryNE learns binary node representations efficiently through a
stochastic gradient descent based online learning algorithm. The learned binary
encoding not only reduces memory usage to represent each node, but also allows
fast bit-wise comparisons to support much quicker network node search compared
to Euclidean distance or other distance measures. Our experiments and
comparisons show that BinaryNE not only delivers more than 23 times faster
search speed, but also provides comparable or better search quality than
traditional continuous-vector-based network embedding methods.
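The speed advantage of binary codes comes from replacing floating-point distance computations with XOR-and-popcount operations. The following is a minimal sketch of Hamming-distance nearest-neighbor search over packed binary codes, not BinaryNE itself; the vectorized NumPy formulation is an illustrative choice:

```python
import numpy as np

def hamming_nearest(query_code, codes):
    """Return (index, distance) of the stored code closest to query_code
    in Hamming distance. Codes are packed-bit uint8 arrays, so a single
    XOR compares 8 bits at a time and a popcount gives the distance."""
    xor = np.bitwise_xor(codes, query_code)          # differing bits per byte
    dists = np.unpackbits(xor, axis=1).sum(axis=1)   # popcount per row
    best = int(np.argmin(dists))
    return best, int(dists[best])

# Toy example: four 16-bit node codes packed into 2 bytes each.
codes = np.array([[0b10101010, 0b11110000],
                  [0b10101011, 0b11110000],
                  [0b00000000, 0b00001111],
                  [0b11111111, 0b11111111]], dtype=np.uint8)
query = np.array([0b10101010, 0b11110001], dtype=np.uint8)
idx, dist = hamming_nearest(query, codes)  # codes[0] differs in one bit
```

Because XOR and popcount are single hardware instructions per machine word, this comparison is far cheaper than the multiplications and additions a Euclidean distance requires.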
The Minimum Wiener Connector
The Wiener index of a graph is the sum of all pairwise shortest-path
distances between its vertices. In this paper we study the novel problem of
finding a minimum Wiener connector: given a connected graph G and a set Q
of query vertices, find a subgraph of G that connects all
query vertices and has minimum Wiener index.
We show that the Minimum Wiener Connector problem admits a polynomial-time
(albeit impractical) exact algorithm for the special case where the number of
query vertices is bounded. We show that in general the problem is NP-hard, and
has no PTAS unless P = NP. Our main contribution is a constant-factor
approximation algorithm running in time Õ(|Q||E|).
A thorough experimentation on a large variety of real-world graphs confirms
that our method returns smaller and denser solutions than other methods, and
does so by adding to the query set a small number of important vertices
(i.e., vertices with high centrality).
Comment: Published in Proceedings of the 2015 ACM SIGMOD International
Conference on Management of Data.
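The Wiener index defined above is straightforward to compute on small unweighted graphs by running a BFS from every vertex. The following sketch makes the definition concrete; it is a generic illustration, not the paper's approximation algorithm:

```python
from collections import deque

def wiener_index(adj):
    """Wiener index of a connected, unweighted graph given as an
    adjacency dict {node: iterable_of_neighbors}: run a BFS from every
    vertex and sum all pairwise shortest-path distances."""
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# Path graph a - b - c: d(a,b) + d(b,c) + d(a,c) = 1 + 1 + 2 = 4.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
w = wiener_index(path)
```

This exact computation costs one BFS per vertex, which is why the paper's contribution, finding a *subgraph* minimizing this quantity, needs an approximation algorithm rather than brute force.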
Fast Computation Techniques for Personalized PageRank on Large Graphs
Doctoral dissertation -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, College of Engineering, August 2020. Advisor: Sang-goo Lee.
Computation of Personalized PageRank (PPR) in graphs is an important function that is widely utilized in myriad application domains such as search, recommendation, and knowledge discovery. Because the computation of PPR is an expensive process, a good number of innovative and efficient algorithms for computing PPR have been developed. However, efficient computation of PPR within very large graphs with over millions of nodes is still an open problem. Moreover, previously proposed algorithms cannot handle updates efficiently, thus severely limiting their capability of handling dynamic graphs. In this paper, we present a fast converging algorithm that guarantees high and controlled precision. We improve the convergence rate of the traditional Power Iteration method by adopting successive over-relaxation and initial guess revision, a vector reuse strategy. The proposed method vastly improves on the traditional Power Iteration in terms of convergence rate and computation time, while retaining its simplicity and strictness. Since it can reuse previously computed vectors to refresh PPR vectors, its update performance is also greatly enhanced. Also, since the algorithm halts as soon as it reaches a given error threshold, we can flexibly control the trade-off between accuracy and time, a feature lacking in both sampling-based approximation methods and fully exact methods. Experiments show that the proposed algorithm is at least 20 times faster than Power Iteration and outperforms other state-of-the-art algorithms.
1 Introduction
2 Preliminaries: Personalized PageRank
2.1 Random Walk, PageRank, and Personalized PageRank
2.1.1 Basics on Random Walk
2.1.2 PageRank
2.1.3 Personalized PageRank
2.2 Characteristics of Personalized PageRank
2.3 Applications of Personalized PageRank
2.4 Previous Work on Personalized PageRank Computation
2.4.1 Basic Algorithms
2.4.2 Enhanced Power Iteration
2.4.3 Bookmark Coloring Algorithm
2.4.4 Dynamic Programming
2.4.5 Monte-Carlo Sampling
2.4.6 Enhanced Direct Solving
2.5 Summary
3 Personalized PageRank Computation with Initial Guess Revision
3.1 Initial Guess Revision and Relaxation
3.2 Finding Optimal Weight of Successive Over Relaxation for PPR
3.3 Initial Guess Construction Algorithm for Personalized PageRank
4 Fully Personalized PageRank Algorithm with Initial Guess Revision
4.1 FPPR with IGR
4.2 Optimization
4.3 Experiments
5 Personalized PageRank Query Processing with Initial Guess Revision
5.1 PPR Query Processing with IGR
5.2 Optimization
5.3 Experiments
6 Conclusion
Bibliography
Appendix
Abstract (In Korean)
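The iterative scheme described in the thesis abstract above can be illustrated with a generic sketch: power iteration for PPR extended with a relaxation weight and an optional initial guess for warm-starting. The thesis's exact SOR formulation and initial-guess-revision strategy differ in detail; the parameter names and the dense-matrix setup here are illustrative assumptions.

```python
import numpy as np

def ppr_power_iteration(P, seed, alpha=0.15, omega=1.0,
                        x0=None, tol=1e-10, max_iter=1000):
    """Personalized PageRank by relaxed power iteration.

    P     : column-stochastic transition matrix (n x n)
    seed  : index of the personalization node
    alpha : teleport (restart) probability
    omega : relaxation weight; omega = 1 is plain power iteration,
            omega > 1 over-relaxes the update in the spirit of SOR
    x0    : optional initial guess, e.g. a previously computed PPR
            vector reused to warm-start after a small graph update
    """
    n = P.shape[0]
    e = np.zeros(n)
    e[seed] = 1.0
    x = np.full(n, 1.0 / n) if x0 is None else x0.copy()
    for _ in range(max_iter):
        x_new = (1 - omega) * x + omega * ((1 - alpha) * (P @ x) + alpha * e)
        if np.abs(x_new - x).sum() < tol:  # halt at the error threshold
            return x_new
        x = x_new
    return x

# Directed 3-cycle 0 -> 1 -> 2 -> 0 (columns are out-distributions).
P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
ppr = ppr_power_iteration(P, seed=0)  # mass decays with distance from seed
```

The early-halt condition is what gives the accuracy/time trade-off the abstract mentions, and passing a previously computed vector as `x0` is the vector-reuse idea: a converged vector restarted on a slightly changed graph needs only a few corrective iterations.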
Monte Carlo Methods for Top-k Personalized PageRank Lists and Name Disambiguation
We study a problem of quick detection of top-k Personalized PageRank lists.
This problem has a number of important applications such as finding local cuts
in large graphs, estimation of similarity distance and name disambiguation. In
particular, we apply our results to construct efficient algorithms for the
person name disambiguation problem. We argue that two observations are
important when finding top-k Personalized PageRank lists. Firstly, it is
crucial that we quickly detect the top-k most important neighbours of a node,
while the exact order within the top-k list, as well as the exact values of
PageRank, is far less crucial. Secondly, a small number of wrong elements in a
top-k list does not really degrade its quality, but tolerating them can lead
to significant computational savings. Based on these two key observations we
propose Monte Carlo methods for fast detection of top-k Personalized PageRank
lists. We provide performance evaluation of the proposed methods and supply
stopping criteria. Then, we apply the methods to the person name disambiguation
problem. The developed algorithm for the person name disambiguation problem has
achieved second place in the WePS 2010 competition.
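The Monte Carlo idea underlying such methods can be sketched as follows: simulate random walks with restart from the seed node and use the frequencies of termination nodes as PPR estimates, keeping only the top-k. This is a generic illustration with assumed parameters, not the paper's exact estimator or stopping criterion:

```python
import random
from collections import Counter

def topk_ppr_monte_carlo(adj, seed, k, alpha=0.15, num_walks=10000, rng=None):
    """Estimate the top-k Personalized PageRank list of `seed` by
    simulating random walks with restart: each walk starts at the seed,
    terminates with probability alpha at every step, and the frequency
    of termination nodes approximates the PPR vector."""
    rng = rng or random.Random(0)
    hits = Counter()
    for _ in range(num_walks):
        node = seed
        while rng.random() >= alpha:
            nbrs = adj[node]
            if not nbrs:
                break  # dangling node: end the walk here
            node = rng.choice(nbrs)
        hits[node] += 1
    return [(n, c / num_walks) for n, c in hits.most_common(k)]

# Star graph centered at node 0: the seed keeps most of the PPR mass.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
top = topk_ppr_monte_carlo(adj, seed=0, k=2)
```

This illustrates the paper's two observations: the set of top-k nodes stabilizes after far fewer walks than would be needed to pin down their exact PPR values or ordering.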
Efficient Estimation of Heat Kernel PageRank for Local Clustering
Given an undirected graph G and a seed node s, the local clustering problem
aims to identify a high-quality cluster containing s in time roughly
proportional to the size of the cluster, regardless of the size of G. This
problem finds numerous applications on large-scale graphs. Recently, heat
kernel PageRank (HKPR), a measure of the proximity of nodes in a graph, has
been applied to this problem and found to be more efficient than prior
methods. However, existing solutions for computing HKPR either are
prohibitively expensive or provide unsatisfactory error approximation on HKPR
values, rendering them impractical especially on billion-edge graphs.
In this paper, we present TEA and TEA+, two novel local graph clustering
algorithms based on HKPR, to address the aforementioned limitations.
Specifically, these algorithms provide non-trivial theoretical guarantees on
the relative error of HKPR values and on time complexity. The basic idea is to
utilize deterministic graph traversal to produce a rough estimate of the exact
HKPR vector, and then exploit Monte-Carlo random walks to refine the results
in an optimized and non-trivial way. In particular, TEA+ offers practical
efficiency and effectiveness due to non-trivial optimizations. Extensive
experiments on real-world datasets demonstrate that TEA+ outperforms the
state-of-the-art algorithm by more than four times on most benchmark datasets
in terms of computational time when achieving the same clustering quality, and
in particular, is an order of magnitude faster on large graphs including the
widely studied Twitter and Friendster datasets.
Comment: The technical report for the full research paper accepted in SIGMOD
2019.
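The Monte-Carlo component of such HKPR estimators exploits the fact that the heat kernel weights length-k walks by Poisson probabilities, so recording the endpoints of walks whose lengths are sampled from Poisson(t) estimates the HKPR vector. A generic sketch of this idea follows; it is not TEA or TEA+ themselves, which combine such sampling with deterministic traversal and further optimizations:

```python
import math
import random
from collections import Counter

def hkpr_monte_carlo(adj, seed, t=5.0, num_walks=20000, rng=None):
    """Estimate the heat kernel PageRank (HKPR) vector of `seed`.
    HKPR weights length-k random walks by the Poisson probability
    e^{-t} t^k / k!, so sampling walk lengths from Poisson(t) and
    recording walk endpoints estimates each node's HKPR value."""
    rng = rng or random.Random(0)
    hits = Counter()
    for _ in range(num_walks):
        # Sample walk length k ~ Poisson(t) by CDF inversion.
        u, k, p = rng.random(), 0, math.exp(-t)
        cdf = p
        while u > cdf:
            k += 1
            p *= t / k
            cdf += p
        node = seed
        for _ in range(k):
            nbrs = adj[node]
            if not nbrs:
                break  # dangling node: walk stops early
            node = rng.choice(nbrs)
        hits[node] += 1
    return {node: c / num_walks for node, c in hits.items()}

# Triangle graph: with t = 5 the walk mixes, so values approach 1/3 each.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
hkpr = hkpr_monte_carlo(triangle, seed=0)
```

Pure sampling like this needs many walks for tight relative-error guarantees on small HKPR values, which is exactly the inefficiency the paper's deterministic-traversal-plus-refinement design addresses.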