Network Essence: PageRank Completion and Centrality-Conforming Markov Chains
Jiří Matoušek (1963-2015) made many breakthrough contributions in
mathematics and algorithm design. His milestone results are not only profound
but also elegant. By going beyond the original objects --- such as Euclidean
spaces or linear programs --- Jirka found the essence of the challenging
mathematical/algorithmic problems as well as beautiful solutions that were
natural to him, but were surprising discoveries to the field.
In this short exploration article, I will first share with readers my initial
encounter with Jirka and discuss one of his fundamental geometric results from
the early 1990s. In the age of social and information networks, I will then
turn the discussion from geometric structures to network structures, attempting
to take a humble step toward the holy grail of network science, that is, to
understand the network essence that underlies the observed
sparse-and-multifaceted network data. I will discuss a simple result which
summarizes some basic algebraic properties of personalized PageRank matrices.
Unlike the traditional transitive closure of binary relations, the personalized
PageRank matrices take "accumulated Markovian closure" of network data. Some of
these algebraic properties are known in various contexts. But I hope featuring
them together in a broader context will help to illustrate the desirable
properties of this Markovian completion of networks, and motivate systematic
developments of a network theory for understanding vast and ubiquitous
multifaceted network data.
Comment: In "A Journey Through Discrete Mathematics, A Tribute to Jiří
Matoušek", Editors Martin Loebl, Jaroslav Nešetřil and Robin Thomas, Springer
International Publishing, 201
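The personalized PageRank matrix discussed in the abstract collects, row by row, the personalized PageRank vector of each node. As a minimal pure-Python sketch (the toy graph and the teleportation probability alpha are assumptions for illustration, not from the paper), one row can be computed by power iteration:

```python
# Illustrative sketch: one row of a personalized PageRank matrix,
# computed by power iteration in pure Python. The graph and the
# value of alpha below are assumptions, not the paper's examples.

def personalized_pagerank(adj, source, alpha=0.15, iters=200):
    """Iterate p <- alpha * e_source + (1 - alpha) * p W, where W is
    the row-stochastic random-walk matrix of the adjacency lists."""
    nodes = list(adj)
    p = {v: 0.0 for v in nodes}
    p[source] = 1.0
    for _ in range(iters):
        nxt = {v: 0.0 for v in nodes}
        for u in nodes:
            share = (1.0 - alpha) * p[u] / len(adj[u])
            for v in adj[u]:
                nxt[v] += share
        nxt[source] += alpha
        p = nxt
    return p

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
ppr = personalized_pagerank(graph, source=0)
# One basic algebraic property of the PageRank completion: each row of
# the personalized PageRank matrix is a probability distribution.
assert abs(sum(ppr.values()) - 1.0) < 1e-9
```

The assertion checks one of the basic algebraic properties alluded to in the abstract: each row of this Markovian completion is a probability distribution over the nodes.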
Games on the Sperner Triangle
We create a new two-player game on the Sperner Triangle based on Sperner's
lemma. Our game has simple rules and several desirable properties. First, the
game is always certain to have a winner. Second, as with many other
interesting games such as Hex and Geography, we prove that deciding whether one
can win our game is PSPACE-complete. Third, there is an elegant balance in the
game such that neither the first nor the second player always has a decisive
advantage. We provide a web-based version of the game, playable at:
http://cs-people.bu.edu/paithan/spernerGame/ . In addition we propose other
games, also based on fixed-point theorems.
Comment: 18 pages, 19 figures. Uses paithan.st
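Sperner's lemma, on which the game is built, can be checked directly on a small instance: under a proper Sperner coloring of a triangulated triangle, some small triangle receives all three colors. The grid size and the particular coloring rule below are assumptions for the demo, not the game's rules:

```python
# Illustrative sketch (independent of the paper's game): finding a
# trichromatic cell guaranteed by Sperner's lemma. N and the coloring
# rule are assumptions for this demo.

N = 8  # triangulation granularity

def color(i, j):
    # Vertex (i, j) has barycentric-style coordinates (i, j, N - i - j).
    # Sperner rule: the color must index a non-zero coordinate.
    if i > 0:
        return 0
    if j > 0:
        return 1
    return 2

def small_triangles(n):
    for i in range(n):
        for j in range(n - i):
            yield [(i, j), (i + 1, j), (i, j + 1)]              # "up" cell
            if i + j < n - 1:
                yield [(i + 1, j), (i, j + 1), (i + 1, j + 1)]  # "down" cell

trichromatic = [t for t in small_triangles(N)
                if {color(i, j) for (i, j) in t} == {0, 1, 2}]
assert trichromatic  # non-empty by Sperner's lemma
```

The lemma guarantees the scan always succeeds, which is the structural fact that gives the game its certain-winner property.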
Interplay between Social Influence and Network Centrality: A Comparative Study on Shapley Centrality and Single-Node-Influence Centrality
We study network centrality based on dynamic influence propagation models in
social networks. To illustrate our integrated mathematical-algorithmic approach
for understanding the fundamental interplay between dynamic influence processes
and static network structures, we focus on two basic centrality measures: (a)
Single Node Influence (SNI) centrality, which measures each node's significance
by its influence spread; and (b) Shapley Centrality, which uses the Shapley
value of the influence spread function --- formulated based on a fundamental
cooperative-game-theoretical concept --- to measure the significance of nodes.
We present a comprehensive comparative study of these two centrality measures.
Mathematically, we present axiomatic characterizations, which precisely capture
the essence of these two centrality measures and their fundamental differences.
Algorithmically, we provide scalable algorithms for approximating them for a
large family of social-influence instances. Empirically, we demonstrate their
similarity and differences in a number of real-world social networks, as well
as the efficiency of our scalable algorithms. Our results shed light on their
applicability: SNI centrality is suitable for assessing individual influence in
isolation while Shapley centrality assesses individuals' performance in group
influence settings.
Comment: The 10-page extended abstract version appears in WWW'201
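The Shapley value of an influence-spread function can be estimated by sampling random permutations and averaging marginal contributions; this permutation-sampling idea underlies many scalable approximations, though it is not necessarily the paper's exact algorithm. The toy graph and the deterministic coverage function standing in for an expected influence spread are assumptions:

```python
# Illustrative sketch: Monte Carlo Shapley values of a spread function.
# The graph and the coverage-style spread sigma(S) = |union of closed
# neighborhoods of S| are assumptions, a stand-in for influence spread.
import random

graph = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: []}

def spread(seed_set):
    covered = set(seed_set)
    for v in seed_set:
        covered.update(graph[v])
    return len(covered)

def shapley(nodes, f, samples=2000, rng=random.Random(7)):
    est = {v: 0.0 for v in nodes}
    for _ in range(samples):
        order = list(nodes)
        rng.shuffle(order)
        prefix, prev = [], 0
        for v in order:
            prefix.append(v)
            cur = f(prefix)
            est[v] += cur - prev  # marginal contribution of v
            prev = cur
    return {v: s / samples for v, s in est.items()}

sv = shapley(list(graph), spread)
# Efficiency axiom: Shapley values sum to the grand-coalition spread.
assert abs(sum(sv.values()) - spread(list(graph))) < 1e-9
```

The assertion checks the efficiency axiom exactly (it holds per permutation by telescoping); SNI centrality would instead score each node v by spread([v]) alone, in isolation.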
Paths Beyond Local Search: A Nearly Tight Bound for Randomized Fixed-Point Computation
In 1983, Aldous proved that randomization can speed up local search. For
example, it reduces the query complexity of local search over [1:n]^d from
Theta(n^{d-1}) to O(d^{1/2} n^{d/2}). It remains open whether randomization
helps fixed-point computation. Inspired by this open problem and recent
advances on equilibrium computation, we have been fascinated by the following
question:
Is a fixed-point or an equilibrium fundamentally harder to find than a local
optimum?
In this paper, we give a nearly tight bound of Omega(n^{d-1}) on the
randomized query complexity for computing a fixed point of a discrete Brouwer
function over [1:n]^d. Since the randomized query complexity of global
optimization over [1:n]^d is Theta (n^{d}), the randomized query model over
[1:n]^d strictly separates these three important search problems: Global
optimization is harder than fixed-point computation, and fixed-point
computation is harder than local search. Our result indeed demonstrates that
randomization does not help much in fixed-point computation in the query model;
the deterministic complexity of this problem is Theta(n^{d-1}).
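The query model behind these bounds can be illustrated by brute force, which makes up to n^d queries over [1:n]^d; the paper's point is that even randomization cannot do much better than n^{d-1}. The function below, with an assumed target point, is a simple direction-preserving example, not the paper's lower-bound construction:

```python
# Illustrative sketch: brute-force fixed-point search over [1:n]^2 in
# the query model, counting queries. The target point and displacement
# rule are assumptions for the demo.
n = 16
target = (11, 5)

def f(x):
    """Move one unit toward `target` in the first coordinate that
    differs; the unique fixed point is `target` itself."""
    for k in range(2):
        if x[k] != target[k]:
            step = 1 if x[k] < target[k] else -1
            return tuple(x[i] + (step if i == k else 0) for i in range(2))
    return x

queries = 0
fixed = None
for i in range(1, n + 1):
    for j in range(1, n + 1):
        queries += 1          # one query to the black-box function f
        if f((i, j)) == (i, j):
            fixed = (i, j)
            break
    if fixed:
        break

assert fixed == target
assert queries <= n * n  # exhaustive search: Theta(n^d) queries
```

Exhaustive search matches the trivial Theta(n^d) bound for global optimization; the paper shows fixed points sit strictly between this and the n^{d/2}-type bounds for randomized local search.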
A Complexity View of Markets with Social Influence
In this paper, inspired by the work of Megiddo on the formation of
preferences and strategic analysis, we consider an early market model studied
in the field of economic theory, in which each trader's utility may be
influenced by the bundles of goods obtained by her social neighbors. The goal
of this paper is to understand and characterize the impact of social influence
on the complexity of computing and approximating market equilibria.
We present complexity-theoretic and algorithmic results for approximating
market equilibria in this model with focus on two concrete influence models
based on the traditional linear utility functions. Recall that an Arrow-Debreu
market equilibrium in a conventional exchange market with linear utility
functions can be computed in polynomial time by convex programming. Our
complexity results show that even a bounded-degree, planar influence network
can significantly increase the difficulty of equilibrium computation even in
markets with only a constant number of goods. Our algorithmic results suggest
that finding an approximate equilibrium in markets with hierarchical influence
networks might be easier than that in markets with arbitrary neighborhood
structures. By demonstrating a simple market with a constant number of goods
and a bounded-degree, planar influence graph whose equilibrium is PPAD-hard to
approximate, we also provide a counterexample to a common belief, which we
refer to as the myth of a constant number of goods, that equilibria in markets
with a constant number of goods are easy to compute or easy to approximate.
Optimal Margin Distribution Machine
Support vector machine (SVM) has been one of the most popular learning
algorithms, with the central idea of maximizing the minimum margin, i.e., the
smallest distance from the instances to the classification boundary. Recent
theoretical results, however, disclosed that maximizing the minimum margin does
not necessarily lead to better generalization performances, and instead, the
margin distribution has been proven to be more crucial. Based on this idea, we
propose a new method, named Optimal Margin Distribution Machine (ODM), which
tries to achieve a better generalization performance by optimizing the margin
distribution. We characterize the margin distribution by the first- and
second-order statistics, i.e., the margin mean and variance. The proposed
method is a general learning approach which can be used anywhere SVM can be
applied, and its superiority is verified both theoretically and empirically in
this paper.
Comment: arXiv admin note: substantial text overlap with arXiv:1311.098
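The margin statistics that ODM controls are simple to compute for a fixed linear scorer. A minimal sketch, with an assumed toy dataset and weight vector (not the paper's data or optimizer):

```python
# Illustrative sketch: the margin statistics behind ODM. For a linear
# scorer w, the margin of (x, y) is y * <w, x>; ODM optimizes the mean
# and variance of these margins rather than only the minimum margin.
# The dataset and weight vector are assumptions for the demo.

def margins(w, data):
    return [y * sum(wi * xi for wi, xi in zip(w, x)) for x, y in data]

data = [((2.0, 0.0), +1), ((-1.0, 0.5), -1), ((3.0, -1.0), +1)]
w = (1.0, 0.0)

m = margins(w, data)                              # [2.0, 1.0, 3.0]
mean = sum(m) / len(m)                            # first-order statistic
var = sum((mi - mean) ** 2 for mi in m) / len(m)  # second-order statistic
min_margin = min(m)                               # what a plain SVM maximizes
```

A plain SVM would look only at min_margin; ODM's objective trades off mean (large) against variance (small) over the whole margin distribution.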
Solving Sparse, Symmetric, Diagonally-Dominant Linear Systems in Time O(m^{1.31})
We present a linear-system solver that, given an n-by-n symmetric positive
semi-definite, diagonally dominant matrix A with m non-zero entries and an
n-vector b, produces a vector x' within relative distance epsilon of the
solution to A x = b in time O(m^{1.31}), up to factors polynomial in kappa and
log(1/epsilon), where kappa is the log of the ratio of the largest to smallest
non-zero eigenvalue of A. In particular, kappa is bounded in terms of the
logarithm of the ratio of the largest to smallest non-zero entry of A. If the
graph of A has bounded genus or excludes a fixed minor, then the exponent of m
can be improved.
The key contribution of our work is an extension of Vaidya's techniques for
constructing and analyzing combinatorial preconditioners.
Comment: fixed a typo on page
Smoothed analysis of algorithms
Spielman and Teng introduced the smoothed analysis of algorithms to provide a
framework in which one could explain the success in practice of algorithms and
heuristics that could not be understood through the traditional worst-case and
average-case analyses. In this talk, we survey some of the smoothed analyses
that have been performed.
Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems
We present a randomized algorithm that, on input a symmetric, weakly
diagonally dominant n-by-n matrix A with m nonzero entries and an n-vector b,
produces a y such that \norm{y - \pinv{A} b}_{A} \leq \epsilon \norm{\pinv{A}
b}_{A} in expected time O(m \log^{c} n \log(1/\epsilon)) for some constant c.
By applying this algorithm inside the inverse power method, we compute
approximate Fiedler vectors in a similar amount of time. The algorithm applies
subgraph preconditioners in a recursive fashion. These preconditioners improve
upon the subgraph preconditioners first introduced by Vaidya (1990).
For any symmetric, weakly diagonally-dominant matrix A with non-positive
off-diagonal entries and any k >= 1, we construct in time O(m \log^{c} n) a
preconditioner B of A with a nearly-linear number of nonzero off-diagonal
entries such that the finite generalized condition number \kappa_f(A, B) is at
most k, up to a factor of \log^{c} n for some other constant c.
In the special case when the nonzero structure of the matrix is planar, the
corresponding linear system solver runs in expected time nearly linear in n.
We hope that our introduction of algorithms of low asymptotic complexity will
lead to the development of algorithms that are also fast in practice.
Comment: This revised version contains a new section in which we prove that it
suffices to carry out the computations with limited precision
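The recursive subgraph preconditioners are the heart of this line of work. As a much simpler stand-in, the sketch below runs preconditioned conjugate gradients with a diagonal (Jacobi) preconditioner on a small SDD system; the matrix, a path-graph Laplacian plus the identity, is an assumption for the demo and far easier than the regimes the paper targets:

```python
# Illustrative sketch: preconditioned conjugate gradients on a small
# SDD system, with a Jacobi preconditioner standing in for the paper's
# subgraph preconditioners. The test matrix is an assumption.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                        # residual b - A x
    z = [ri / A[i][i] for i, ri in enumerate(r)]    # Jacobi preconditioning
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [ri / A[i][i] for i, ri in enumerate(r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

n = 6  # path-graph Laplacian + I: symmetric, strictly diagonally dominant
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 1.0 + (2.0 if 0 < i < n - 1 else 1.0)
    if i > 0:
        A[i][i - 1] = -1.0
    if i < n - 1:
        A[i][i + 1] = -1.0
b = [1.0] * n
x = pcg(A, b)
residual = max(abs(ri - bi) for ri, bi in zip(matvec(A, x), b))
assert residual < 1e-8
```

The better the preconditioner bounds the generalized condition number, the fewer iterations PCG needs; subgraph preconditioners achieve this with nearly-linear construction cost, which a diagonal preconditioner cannot.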
Smoothed Analysis of Interior-Point Algorithms: Termination
We perform a smoothed analysis of the termination phase of an interior-point
method. By combining this analysis with the smoothed analysis of Renegar's
interior-point algorithm by Dunagan, Spielman and Teng, we show that the
smoothed complexity of an interior-point algorithm for linear programming is
O(m^3 log(m/sigma)). In contrast, the best known bound on the worst-case
complexity of linear programming is O(m^3 L), where L could be as large as m.
We include an introduction to smoothed analysis and a tutorial on proof
techniques that have been useful in smoothed analyses.
Comment: to be presented at the 2003 International Symposium on Mathematical
Programming