Understanding Coarsening for Embedding Large-Scale Graphs
A significant portion of today's data, e.g., social networks and web
connections, can be modeled by graphs. A proper analysis of graphs with
Machine Learning (ML) algorithms has the potential to yield far-reaching
insights into many areas of research and industry. However, the irregular
structure of graph data constitutes an obstacle to running ML tasks on graphs,
such as link prediction, node classification, and anomaly detection. Graph
embedding is a compute-intensive process of representing a graph as a set of
vectors in a d-dimensional space, which in turn makes it amenable to ML tasks.
Many approaches have been proposed in the literature to improve the performance
of graph embedding, e.g., using distributed algorithms, accelerators, and
pre-processing techniques. Graph coarsening, which can be considered a
pre-processing step, is a structural approximation of a given large graph with
a smaller one. As the literature suggests, the cost of embedding decreases
significantly when coarsening is employed. In this work, we thoroughly analyze
the impact of coarsening quality on embedding performance, in terms of both
speed and accuracy. Our experiments with a state-of-the-art, fast graph
embedding tool show that there is an interplay between the coarsening decisions
taken and the embedding quality.

Comment: 10 pages, 6 figures, submitted to the 2020 IEEE International Conference on Big Data
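
To make the coarsening idea concrete, below is a minimal, illustrative sketch of one level of graph coarsening via random edge matching: matched endpoints are merged into a single super-node, yielding a smaller graph that structurally approximates the original. This is a generic sketch under simple assumptions, not the paper's method or the tool it evaluates; the function name coarsen_once and the adjacency-dict representation are hypothetical.

import random
from collections import defaultdict

def coarsen_once(adj, seed=0):
    """One coarsening level via random edge matching.

    adj: dict mapping node -> set of neighbors (undirected graph).
    Returns (coarse_adj, mapping), where mapping sends each fine node
    to its coarse super-node id.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    rng.shuffle(nodes)

    mapping = {}
    next_id = 0
    for u in nodes:
        if u in mapping:
            continue
        # Try to match u with an unmatched neighbor (random matching).
        partner = next((v for v in adj[u] if v not in mapping and v != u), None)
        mapping[u] = next_id
        if partner is not None:
            mapping[partner] = next_id
        next_id += 1

    # Build the coarse graph: two super-nodes are adjacent iff some fine
    # edge crosses between their members (self-loops are dropped).
    coarse_adj = defaultdict(set)
    for u, neighbors in adj.items():
        for v in neighbors:
            cu, cv = mapping[u], mapping[v]
            if cu != cv:
                coarse_adj[cu].add(cv)
                coarse_adj[cv].add(cu)
    return dict(coarse_adj), mapping

# Example: a 6-node path graph coarsens to roughly half the size.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 6} for i in range(6)}
coarse, mapping = coarsen_once(path)
print(len(path), "->", len(coarse), "nodes;", mapping)

Applying coarsen_once repeatedly would produce a multilevel hierarchy; when most nodes find a match, each level roughly halves the node count, which is where the embedding-cost savings discussed above come from.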