Fast matrix computations for pair-wise and column-wise commute times and Katz scores
We first explore methods for approximating the commute time and Katz score
between a pair of nodes. These methods are based on the approach of matrices,
moments, and quadrature developed in the numerical linear algebra community.
They rely on the Lanczos process and provide upper and lower bounds on an
estimate of the pair-wise scores. We also explore methods to approximate the
commute times and Katz scores from a node to all other nodes in the graph.
Here, our approach for the commute times is based on a variation of the
conjugate gradient algorithm, and it provides an estimate of all the diagonals
of the inverse of a matrix. Our technique for the Katz scores is based on
exploiting an empirical localization property of the Katz matrix. We adapt
algorithms used to compute personalized PageRank to these Katz scores and
theoretically show that this approach is convergent. We evaluate these methods
on 17 real-world graphs ranging in size from 1,000 to 1,000,000 nodes. Our
results show that our pair-wise commute time method and column-wise Katz
algorithm both have attractive theoretical properties and empirical
performance.

Comment: 35 pages; journal version of
http://dx.doi.org/10.1007/978-3-642-18009-5_13, which has been submitted for
publication. Please see
http://www.cs.purdue.edu/homes/dgleich/publications/2011/codes/fast-katz/ for
supplemental code
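The column-wise Katz computation the abstract describes can be illustrated with a minimal sketch: one column of the Katz matrix K = sum_{k>=1} alpha^k A^k, accumulated term by term. This is a plain truncated power series, not the paper's personalized-PageRank-style algorithm; the function name and parameters are illustrative.

```python
import numpy as np

def katz_column(A, i, alpha=0.1, tol=1e-10, max_iter=1000):
    """Approximate column i of the Katz matrix K = sum_{k>=1} alpha^k A^k
    by accumulating the power series term by term.  Converges when
    alpha < 1 / lambda_max(A)."""
    n = A.shape[0]
    v = np.zeros(n)
    v[i] = 1.0                       # e_i
    x = np.zeros(n)
    for _ in range(max_iter):
        v = alpha * (A @ v)          # next term: alpha^k A^k e_i
        x += v
        if np.abs(v).sum() < tol:    # remaining tail is negligible
            break
    return x

# 4-node path graph 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = katz_column(A, 0, alpha=0.1)
```

For small graphs the result can be checked against the closed form (I - alpha*A)^{-1} - I; the iterative form matters when A is large and sparse.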
Discriminative Distance-Based Network Indices with Application to Link Prediction
In large networks, using the length of shortest paths as the distance measure
has shortcomings. A well-studied shortcoming is that extending it to
disconnected graphs and directed graphs is controversial. The second
shortcoming is that a huge number of vertices may have exactly the same score.
The third shortcoming is that in many applications, the distance between two
vertices not only depends on the length of shortest paths, but also on the
number of shortest paths. In this paper, first we develop a new distance
measure between vertices of a graph that yields discriminative distance-based
centrality indices. This measure is proportional to the length of shortest
paths and inversely proportional to the number of shortest paths. We present
algorithms for exact computation of the proposed discriminative indices.
Second, we develop randomized algorithms that precisely estimate average
discriminative path length and average discriminative eccentricity, and we show
that they yield (ε, δ)-approximations of these indices. Third, we
perform extensive experiments over several real-world networks from different
domains. In our experiments, we first show that, compared to the traditional
indices, discriminative indices usually have much greater discriminability.
Then we show that our randomized algorithms can estimate average
discriminative path length and average discriminative eccentricity very
precisely, using only a few samples. We also show that real-world networks
usually have a tiny average discriminative path length, bounded by a small
constant (e.g., 2). Fourth, to better motivate the usefulness of the proposed
distance measure, we present a novel link prediction method that uses
discriminative distance to decide which vertices are more likely to form a
link in the future, and we show its superior performance compared to
well-known existing measures
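The proposed measure is proportional to shortest-path length and inversely proportional to the number of shortest paths. A minimal sketch of one plausible instantiation, d(s,t) = dist(s,t) / sigma(s,t), computed with a BFS that also counts shortest paths (the paper's exact functional form may differ; the function name is illustrative):

```python
from collections import deque

def discriminative_distance(adj, s, t):
    """One plausible discriminative distance: shortest-path length divided
    by the number of shortest paths.  `adj` maps each vertex to its
    neighbour list (unweighted, undirected graph assumed)."""
    dist = {s: 0}
    sigma = {s: 1}                      # number of shortest s-v paths
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:           # first visit: record BFS level
                dist[w] = dist[v] + 1
                sigma[w] = 0
                q.append(w)
            if dist[w] == dist[v] + 1:  # another shortest path into w
                sigma[w] += sigma[v]
    if t not in dist:
        return float("inf")             # disconnected pair
    return dist[t] / sigma[t]

# square 0-1-3, 0-2-3: two shortest paths of length 2 from 0 to 3
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(discriminative_distance(adj, 0, 3))   # 2 / 2 = 1.0
```

Note how the pair (0, 3) gets the same score as a directly adjacent pair with a single path: more parallel shortest paths pull vertices "closer", which is exactly the discriminating effect the abstract motivates.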
Towards Fast Computation of Certified Robustness for ReLU Networks
Verifying the robustness property of a general Rectified Linear Unit (ReLU)
network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer
CAV17]. Although finding the exact minimum adversarial distortion is hard,
giving a certified lower bound of the minimum distortion is possible.
Currently available methods of computing such a bound are either
time-consuming or deliver low-quality bounds that are too loose to be useful.
In this paper, we exploit the special structure of ReLU networks and provide
two computationally efficient algorithms, Fast-Lin and Fast-Lip, that are able
to certify non-trivial lower bounds on minimum distortions, by bounding the
ReLU units with appropriate linear functions (Fast-Lin) or by bounding the
local Lipschitz constant (Fast-Lip). Experiments show that (1) our proposed
methods
deliver bounds close to (within a 2-3X gap of) the exact minimum distortion
found by Reluplex in small MNIST networks, while our algorithms are more than
10,000
times faster; (2) our methods deliver similar quality of bounds (the gap is
within 35% and usually around 10%; sometimes our bounds are even better) for
larger networks compared to the methods based on solving linear programming
problems but our algorithms are 33-14,000 times faster; (3) our method is
capable of solving large MNIST and CIFAR networks up to 7 layers with more than
10,000 neurons within tens of seconds on a single CPU core.
In addition, we show that, in fact, there is no polynomial-time algorithm
that can approximately find the minimum adversarial distortion of a ReLU
network with a 0.99 ln n approximation ratio unless NP = P, where n is the
number of neurons in the network.

Comment: Tsui-Wei Weng and Huan Zhang contributed equally
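The per-neuron relaxation behind "bounding the ReLU units with appropriate linear functions" can be sketched for a single unstable neuron whose pre-activation lies in [l, u] with l < 0 < u. Fast-Lin propagates such bounds through all layers; this fragment covers only the one-neuron bound and uses illustrative names.

```python
def relu_linear_bounds(l, u):
    """Linear bounds sandwiching relu(z) = max(z, 0) over [l, u], l < 0 < u:
        lower:  s * z        <= relu(z)   (line through the origin)
        upper:  s * (z - l)  >= relu(z)   (chord from (l, 0) to (u, u))
    with shared slope s = u / (u - l)."""
    s = u / (u - l)
    lower = lambda z: s * z
    upper = lambda z: s * (z - l)
    return lower, upper

relu = lambda z: max(z, 0.0)
lo, up = relu_linear_bounds(-1.0, 3.0)
# the two lines sandwich ReLU everywhere on [-1, 3]
for z in [-1.0, -0.5, 0.0, 1.5, 3.0]:
    assert lo(z) <= relu(z) <= up(z)
```

Replacing each unstable ReLU by a pair of linear functions keeps the whole network's output bounded by compositions of affine maps, which is what makes the certified bound computable in closed form rather than by solving an LP per neuron.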
Structure of Heterogeneous Networks
Heterogeneous networks play a key role in the evolution of communities and
the decisions individuals make. These networks link different types of
entities, for example, people and the events they attend. Network analysis
algorithms usually project such networks onto simple graphs composed of
entities of a single type. In the process, they conflate relations between
entities of different types and lose important structural information. We
develop a mathematical framework that can be used to compactly represent and
analyze heterogeneous networks that combine multiple entity and link types. We
generalize Bonacich centrality, which measures connectivity between nodes by
the number of paths between them, to heterogeneous networks and use this
measure to study network structure. Specifically, we extend the popular
modularity-maximization method for community detection to use this centrality
metric. We also rank nodes based on their connectivity to other nodes. One
advantage of this centrality metric is that it has a tunable parameter we can
use to set the length scale of interactions. Studying how rankings change
with this parameter allows us to identify important nodes in the network. We
apply the proposed method to analyze the structure of several heterogeneous
networks. We show that exploiting additional sources of evidence corresponding
to links between, as well as among, different entity types yields new insights
into network structure.
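For a plain (single-type) graph, the Bonacich centrality the abstract generalizes counts paths of every length, down-weighting a length-k path by alpha^k; the tunable alpha is the "length scale of interactions" mentioned above. A minimal dense-matrix sketch (the paper's heterogeneous-network extension is not shown):

```python
import numpy as np

def bonacich_centrality(A, alpha):
    """Bonacich centrality c = sum_{k>=1} alpha^k A^k 1
                             = ((I - alpha*A)^{-1} - I) 1.
    Small alpha emphasises short-range structure; larger alpha (still below
    1 / lambda_max(A)) lets long paths contribute."""
    n = A.shape[0]
    ones = np.ones(n)
    return np.linalg.solve(np.eye(n) - alpha * A, ones) - ones

# star graph: hub 0 connected to leaves 1, 2, 3
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1
c = bonacich_centrality(A, 0.2)
assert c[0] > c[1]    # the hub outranks the leaves
```

Sweeping alpha and watching which nodes swap ranks is exactly the diagnostic the abstract describes for identifying nodes that matter at different interaction scales.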
Multi-Scale Link Prediction
The automated analysis of social networks has become an important problem due
to the proliferation of social networks, such as LiveJournal, Flickr and
Facebook. The scale of these social networks is massive and continues to grow
rapidly. An important problem in social network analysis is proximity
estimation that infers the closeness of different users. Link prediction, in
turn, is an important application of proximity estimation. However, many
methods for computing proximity measures have high computational complexity and
are thus prohibitive for large-scale link prediction problems. One way to
address this problem is to estimate proximity measures via low-rank
approximation. However, a single low-rank approximation may not be sufficient
to represent the behavior of the entire network. In this paper, we propose
Multi-Scale Link Prediction (MSLP), a framework for link prediction, which can
handle massive networks. The basic idea of MSLP is to construct low-rank
approximations of the network at multiple scales in an efficient manner. Based
on this approach, MSLP combines predictions at multiple scales to make robust
and accurate predictions. Experimental results on real-life datasets with more
than a million nodes show the superior performance and scalability of our
method.

Comment: 20 pages, 10 figures
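The single-scale building block, proximity estimation via low-rank approximation, can be sketched with a truncated SVD of the adjacency matrix; MSLP builds such approximations at several scales of a hierarchy and combines them, which this fragment does not attempt. The function name is illustrative.

```python
import numpy as np

def low_rank_scores(A, r):
    """Rank-r approximation of the adjacency matrix via truncated SVD.
    Entry (i, j) of the result serves as a proximity score for link
    prediction: large values suggest a likely future link."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# score the pairs of a toy 0/1 adjacency matrix
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = low_rank_scores(A, 2)   # S[i, j]: propensity of i and j to link
```

The rank r trades fidelity for smoothing: a very low rank captures only coarse community structure, which is why a single approximation "may not be sufficient to represent the behavior of the entire network".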
Network-based ranking in social systems: three challenges
Ranking algorithms are pervasive in our increasingly digitized societies,
with important real-world applications including recommender systems, search
engines, and influencer marketing practices. From a network science
perspective, network-based ranking algorithms solve fundamental problems
related to the identification of vital nodes for the stability and dynamics of
a complex system. Despite the ubiquitous and successful applications of these
algorithms, we argue that our understanding of their performance and their
applications to real-world problems face three fundamental challenges: (1)
rankings might be biased by various factors; (2) their effectiveness might be
limited to specific problems; and (3) agents' decisions driven by rankings
might result in potentially vicious feedback mechanisms and unhealthy systemic
consequences. Methods rooted in network science and agent-based modeling can
help us to understand and overcome these challenges.

Comment: Perspective article; 9 pages, 3 figures
Maiter: An Asynchronous Graph Processing Framework for Delta-based Accumulative Iterative Computation
Myriad graph-based algorithms in machine learning and data mining require
parsing relational data iteratively. These algorithms are implemented in a
large-scale distributed environment in order to scale to massive data sets. To
accelerate these large-scale graph-based iterative computations, we propose
delta-based accumulative iterative computation (DAIC). Different from
traditional iterative computations, which iteratively update the result based
on the result from the previous iteration, DAIC updates the result by
accumulating the "changes" between iterations. By DAIC, we can process only the
"changes" to avoid the negligible updates. Furthermore, we can perform DAIC
asynchronously to bypass the high-cost synchronous barriers in heterogeneous
distributed environments. Based on the DAIC model, we design and implement an
asynchronous graph processing framework, Maiter. We evaluate Maiter on a local
cluster as well as on the Amazon EC2 cloud. The results show that Maiter
achieves
as much as 60x speedup over Hadoop and outperforms other state-of-the-art
frameworks.

Comment: ScienceCloud 2012, TKDE 201
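The delta-based accumulative idea can be illustrated with PageRank, a classic DAIC-style computation: instead of recomputing every rank from the previous iteration's full vector, each node accumulates its "change" (delta) into its result and pushes the delta to its out-neighbours. A simplified single-machine sketch, without Maiter's asynchrony or distribution; names are illustrative.

```python
def delta_pagerank(adj, beta=0.85, tol=1e-10):
    """Delta-based accumulative PageRank in the spirit of DAIC.
    `adj` maps each node to its out-neighbour list; every node is assumed
    to have at least one out-neighbour (no dangling nodes).  Nodes whose
    delta falls below `tol` do no work, mirroring how DAIC skips
    negligible updates."""
    n = len(adj)
    rank = {v: 0.0 for v in adj}
    delta = {v: (1 - beta) / n for v in adj}   # initial change everywhere
    while any(d > tol for d in delta.values()):
        nxt = {v: 0.0 for v in adj}
        for v, d in delta.items():
            if d <= tol:
                continue                        # negligible: skip the node
            rank[v] += d                        # accumulate the change
            share = beta * d / len(adj[v])
            for w in adj[v]:                    # push it downstream
                nxt[w] += share
        delta = nxt
    return rank
```

Because each push only adds mass, updates commute: they can be applied in any order, which is what lets Maiter drop the synchronous barriers between iterations.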