GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding
Learning continuous representations of nodes has recently attracted growing interest in both academia and industry, due to its simplicity and effectiveness in a variety of applications. Most existing node embedding algorithms and systems can process networks with hundreds of thousands or a few million nodes. However, scaling them to networks with tens or even hundreds of millions of nodes remains a challenging problem. In this paper, we propose GraphVite, a high-performance CPU-GPU hybrid system for training node embeddings, which co-optimizes the algorithm and the system. On the CPU end, augmented edge samples are generated in parallel by online random walks on the network and serve as the training data. On the GPU end, a novel parallel negative sampling scheme leverages multiple GPUs to train node embeddings simultaneously with little data transfer and synchronization. Moreover, an efficient collaboration strategy further reduces the synchronization cost between CPUs and GPUs. Experiments on multiple real-world networks show that GraphVite is highly efficient: it takes only about one minute for a network with 1 million nodes and 5 million edges on a single machine with 4 GPUs, and around 20 hours for a network with 66 million nodes and 1.8 billion edges. Compared to the current fastest system, GraphVite is about 50 times faster without any sacrifice in performance.
Comment: accepted at WWW 201
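The CPU-side step the abstract describes, generating augmented edge samples from online random walks, can be illustrated with a minimal sketch. This is not GraphVite's implementation; the function name, adjacency-list format, and parameters are hypothetical. Pairs of nodes that co-occur within a window on a walk are emitted as positive training edges:

```python
import random

def augmented_edge_samples(adj, num_walks, walk_length, window):
    """Generate augmented edge samples via random walks (illustrative sketch).

    adj: dict mapping each node to a list of its neighbours.
    Node pairs co-occurring within `window` steps on a walk are emitted
    as positive training edges for the embedding model.
    """
    samples = []
    nodes = list(adj)
    for _ in range(num_walks):
        walk = [random.choice(nodes)]
        for _ in range(walk_length - 1):
            nbrs = adj[walk[-1]]
            if not nbrs:
                break  # dead end: stop this walk early
            walk.append(random.choice(nbrs))
        # every pair within the window becomes an augmented edge sample
        for i, u in enumerate(walk):
            for v in walk[i + 1 : i + 1 + window]:
                samples.append((u, v))
    return samples
```

In a streaming system these samples would be produced online and consumed by the GPU workers rather than materialized in a list as done here.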
A Relational Hyperlink Analysis of an Online Social Movement
In this paper we propose relational hyperlink analysis (RHA) as a distinct approach for empirical social science research into hyperlink networks on the World Wide Web. We demonstrate this approach, which employs the ideas and techniques of social network analysis (in particular, exponential random graph modeling), in a study of the hyperlinking behaviors of Australian asylum advocacy groups. We show that, compared with the commonly used hyperlink counts regression approach, relational hyperlink analysis can lead to fundamentally different conclusions about the social processes underpinning hyperlinking behavior. In particular, in trying to understand why social ties are formed, counts regressions may overestimate the role of actor attributes in the formation of hyperlinks when endogenous, purely structural network effects are not taken into account. Our analysis involves an innovative joint use of two software programs: VOSON, for the automated retrieval and processing of considerable quantities of hyperlink data, and LPNet, for the statistical modeling of social network data. Together, VOSON and LPNet enable new and unique research into social networks in the online world, and our paper highlights the importance of complementary research tools for social science research into the web.
Mining Missing Hyperlinks from Human Navigation Traces: A Case Study of Wikipedia
Hyperlinks are an essential feature of the World Wide Web. They are
especially important for online encyclopedias such as Wikipedia: an article can
often only be understood in the context of related articles, and hyperlinks
make it easy to explore this context. But important links are often missing,
and several methods have been proposed to alleviate this problem by learning a
linking model based on the structure of the existing links. Here we propose a
novel approach to identifying missing links in Wikipedia. We build on the fact
that the ultimate purpose of Wikipedia links is to aid navigation. Rather than
merely suggesting new links that are in tune with the structure of existing
links, our method finds missing links that would immediately enhance
Wikipedia's navigability. We leverage data sets of navigation paths collected
through a Wikipedia-based human-computation game in which users must find a
short path from a start to a target article by only clicking links encountered
along the way. We harness human navigational traces to identify a set of
candidates for missing links and then rank these candidates. Experiments show
that our procedure identifies missing links of high quality.
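The candidate-mining step described above can be sketched in a few lines. This is a simplified illustration, not the paper's method: the function name and interface are hypothetical. The idea is that when navigators repeatedly reach an article v a few clicks after passing through u without a direct u→v link existing, (u, v) is a promising candidate, ranked here simply by frequency:

```python
from collections import Counter

def mine_candidate_links(paths, existing_links, max_gap=3):
    """Rank candidate missing links from navigation traces (sketch).

    paths: lists of article titles, in click order.
    existing_links: set of (source, target) pairs that already exist.
    Counts how often v follows u within `max_gap` clicks; consecutive
    articles are skipped, since a click implies the link already exists.
    """
    counts = Counter()
    for path in paths:
        for i, u in enumerate(path):
            for v in path[i + 2 : i + 1 + max_gap]:
                if u != v and (u, v) not in existing_links:
                    counts[(u, v)] += 1
    return counts.most_common()
```

A real ranking would likely also normalize by how often u was visited, but raw co-occurrence counts suffice to show the shape of the computation.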
Uncovering missing links with cold ends
To evaluate the performance of missing-link prediction, the known data are randomly divided into two parts, a training set and a probe set. We argue that this straightforward and standard method can introduce a severe bias, since in real biological and information networks missing links are more likely to connect low-degree nodes. We therefore study how to uncover missing links with low-degree nodes, namely probe sets whose links have lower degree products than those of a random sample. Experimental analysis of ten local similarity indices on four disparate real networks reveals the surprising result that the Leicht-Holme-Newman index [E. A. Leicht, P. Holme, and M. E. J. Newman, Phys. Rev. E 73, 026120 (2006)] performs best, although it is known to be one of the worst indices when the probe set is a random sample of all links. We further propose a parameter-dependent index, which considerably improves the prediction accuracy. Finally, we show the relevance of the proposed index under three real sampling methods.
Comment: 16 pages, 5 figures, 6 table
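The Leicht-Holme-Newman index cited above is, up to a constant factor, the number of common neighbours of two nodes divided by its expected value under the configuration model, i.e. (A²)ₓᵧ / (kₓ·kᵧ). A minimal sketch of computing it for a whole adjacency matrix (the function name is ours, not from the paper):

```python
import numpy as np

def lhn_index(A):
    """Leicht-Holme-Newman similarity matrix (up to a constant factor).

    S_xy = (A^2)_xy / (k_x * k_y): common neighbours of x and y,
    normalized by the product of their degrees. Entries involving
    isolated nodes (degree zero) are set to 0.
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)              # node degrees
    common = A @ A                 # (A^2)_xy = number of common neighbours
    with np.errstate(divide="ignore", invalid="ignore"):
        S = common / np.outer(k, k)
    S[~np.isfinite(S)] = 0.0       # guard against division by zero
    return S
```

For the three-node path 0-1-2, nodes 0 and 2 share one common neighbour and each has degree 1, so S[0, 2] = 1, the maximum possible score.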
Discriminative Nonparametric Latent Feature Relational Models with Data Augmentation
We present a discriminative nonparametric latent feature relational model (LFRM) for link prediction that automatically infers the dimensionality of the latent features. Under the generic RegBayes (regularized Bayesian inference) framework, we incorporate the prediction loss into the probabilistic inference of a Bayesian model; set distinct regularization parameters for different types of links to handle the imbalance issue in real networks; and unify the analysis of both the smooth logistic log-loss and the piecewise linear hinge loss. For the nonconjugate posterior inference, we present a simple Gibbs sampler via data augmentation, without the restrictive assumptions made in variational methods. We further develop an approximate sampler using stochastic gradient Langevin dynamics to handle large networks with hundreds of thousands of entities and millions of links, orders of magnitude larger than what existing LFRM models can process. Extensive studies on various real networks show promising performance.
Comment: Accepted by AAAI 201
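The stochastic gradient Langevin dynamics sampler mentioned above combines a stochastic gradient step with injected Gaussian noise, so the iterates approximately sample the posterior rather than merely optimizing it. A generic single-step sketch, with an assumed interface (`grad_log_post` would be a noisy minibatch estimate of the log-posterior gradient over links; the names are ours):

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One stochastic gradient Langevin dynamics update (sketch).

    theta_{t+1} = theta_t + (eps / 2) * grad_log_post(theta_t)
                  + N(0, eps) noise,
    where grad_log_post is a stochastic estimate of the gradient of
    the log posterior computed on a minibatch of observed links.
    """
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise
```

With the noise term removed this reduces to plain stochastic gradient ascent; the injected noise, scaled to the step size, is what turns the optimizer into an approximate posterior sampler.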