An FPTAS for Bargaining Networks with Unequal Bargaining Powers
Bargaining networks model social or economic situations in which agents seek
to form the most lucrative partnership with another agent from among several
alternatives. There has been a flurry of recent research studying Nash
bargaining solutions (also called 'balanced outcomes') in bargaining networks,
so that we now know when such solutions exist, and also that they can be
computed efficiently, even by market agents behaving in a natural manner. In
this work we study a generalization of Nash bargaining that models the
possibility of unequal 'bargaining powers'. This generalization was introduced
in [KB+10], where it was shown that the corresponding 'unequal division' (UD)
solutions exist if and only if Nash bargaining solutions exist, and also that a
certain local dynamics converges to UD solutions when they exist. However, the
bound on convergence time obtained for that dynamics was exponential in network
size for the unequal division case. This bound is tight, in the sense that
there exist instances on which the dynamics of [KB+10] converges only after
exponential time. Other approaches, such as the one of Kleinberg and Tardos, do
not generalize to the asymmetric case. Thus, the question of the computational
tractability of UD solutions has remained open. In this paper, we provide an
FPTAS for the computation of UD solutions, when such solutions exist. On a
graph G=(V,E) with weights (i.e. pairwise profit opportunities) uniformly
bounded above by 1, our FPTAS finds an \eps-UD solution in time
poly(|V|,1/\eps). We also provide a fast local algorithm for finding an
\eps-UD solution, giving further evidence that a market can find such a
solution.
Comment: 18 pages; Amin Saberi (Ed.): Internet and Network Economics - 6th
International Workshop, WINE 2010, Stanford, CA, USA, December 13-17, 2010.
Proceedings
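The solution concept above can be checked mechanically. The following is a minimal sketch of a verifier for an \eps-UD outcome, assuming the standard formulation (stability plus an r : (1-r) split of the surplus net of outside options); the function names are illustrative and, for simplicity, a single bargaining-power parameter r replaces per-edge powers:

```python
def check_eps_ud(n, edges, w, r, match, x, eps=1e-9):
    """Check whether (match, x) is an eps-UD outcome on a weighted graph.

    n     : number of agents, labelled 0..n-1
    edges : list of pairs (i, j) with i < j
    w     : dict mapping each edge to its weight (pairwise profit opportunity)
    r     : bargaining power of the lower-indexed endpoint, in (0, 1)
    match : dict mapping each agent to its partner, or None if unmatched
    x     : dict mapping each agent to its payoff
    """
    def neighbors(i):
        return [b if a == i else a for (a, b) in edges if i in (a, b)]

    def outside(i):
        # best outside option: the most i could extract from a non-partner neighbor
        opts = [w[(min(i, k), max(i, k))] - x[k]
                for k in neighbors(i) if k != match[i]]
        return max([0.0] + opts)

    # stability: no pair of agents can jointly profit by deviating
    if any(x[i] + x[j] < w[(i, j)] - eps for (i, j) in edges):
        return False
    # unmatched agents earn nothing
    if any(abs(x[i]) > eps for i in range(n) if match[i] is None):
        return False
    # unequal division: matched pairs split the net surplus r : (1 - r)
    for (i, j) in edges:
        if match[i] == j:
            s = w[(i, j)] - outside(i) - outside(j)
            if abs(x[i] - (outside(i) + r * s)) > eps:
                return False
            if abs(x[j] - (outside(j) + (1 - r) * s)) > eps:
                return False
    return True
```

On a single edge of weight 1 with r = 0.7, the unique UD outcome pays 0.7 to the stronger agent and 0.3 to the weaker one; the equal split fails the check.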
Majority dynamics on trees and the dynamic cavity method
A voter sits on each vertex of an infinite tree of degree k, and has to
decide between two alternative opinions. At each time step, each voter switches
to the opinion of the majority of her neighbors. We analyze this majority
process when opinions are initialized to independent and identically
distributed random variables. In particular, we bound the threshold value of
the initial bias such that the process converges to consensus. In order to
prove an upper bound, we characterize the process of a single node in the
large-k limit. This approach is inspired by the theory of mean-field spin
glasses and can potentially be generalized to a wider class of models. We also
derive a lower bound that is nontrivial for small, odd values of k.
Comment: Published at http://dx.doi.org/10.1214/10-AAP729 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
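The update rule is easy to simulate on a finite truncation of the tree. A minimal sketch (the truncation depth and the tie-breaking rule, keeping one's own opinion on a tie, are modelling assumptions for the finite setting; for odd k no ties arise at interior vertices):

```python
import random

def build_tree(k, depth):
    # rooted truncation of the infinite k-regular tree: the root gets k
    # children, every other internal vertex k - 1, so interior degrees are k
    adj = {0: []}
    frontier, nxt = [0], 1
    for _ in range(depth):
        new_frontier = []
        for v in frontier:
            for _ in range(k if v == 0 else k - 1):
                adj[v].append(nxt)
                adj[nxt] = [v]
                new_frontier.append(nxt)
                nxt += 1
        frontier = new_frontier
    return adj

def majority_step(adj, opinion):
    # every voter simultaneously adopts the majority opinion of its
    # neighbors, keeping its own opinion on a tie
    new = {}
    for v, nbrs in adj.items():
        s = sum(opinion[u] for u in nbrs)
        new[v] = 1 if s > 0 else (-1 if s < 0 else opinion[v])
    return new

def run(k=5, depth=4, p_plus=0.6, steps=10, seed=0):
    # i.i.d. initialization with bias p_plus towards +1;
    # returns the final fraction of +1 opinions
    rng = random.Random(seed)
    adj = build_tree(k, depth)
    opinion = {v: (1 if rng.random() < p_plus else -1) for v in adj}
    for _ in range(steps):
        opinion = majority_step(adj, opinion)
    return sum(1 for o in opinion.values() if o == 1) / len(opinion)
```

With a strong initial bias (p_plus well above one half) the simulated process typically reaches the all-plus consensus, which is the threshold phenomenon the abstract quantifies.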
On Distributed Computation in Noisy Random Planar Networks
We consider distributed computation of functions of distributed data in
random planar networks with noisy wireless links. We present a new algorithm
for computation of the maximum value which is order optimal in the number of
transmissions and computation time. We also adapt the histogram computation
algorithm of Ying et al. to make the histogram computation time optimal.
Comment: 5 pages, 2 figures
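Why noisy links cost only logarithmic overhead per bit can be seen with a toy repetition-coding scheme. This is a generic sketch, not the paper's algorithm nor the Ying et al. construction; the channel model (independent bit flips) and all names are assumptions:

```python
import random

def noisy_send(bit, flip_p, reps, rng):
    # repetition code over a binary symmetric channel: transmit `reps`
    # copies, each flipped independently with probability flip_p, and
    # let the receiver take a majority vote
    votes = sum(bit ^ (rng.random() < flip_p) for _ in range(reps))
    return 1 if 2 * votes > reps else 0

def noisy_max(values, bits=8, flip_p=0.1, reps=15, seed=0):
    # nodes arranged in a line pass the running maximum hop by hop;
    # each of the `bits` bits is protected by the repetition code
    rng = random.Random(seed)
    est = 0
    for v in values:
        received = 0
        for b in range(bits):
            received |= noisy_send((est >> b) & 1, flip_p, reps, rng) << b
        est = max(received, v)
    return est
```

With flip_p = 0 the scheme recovers the exact maximum; for flip_p < 1/2 the per-bit decoding error decays exponentially in reps, so logarithmically many repetitions per bit suffice for correctness with high probability, which is the flavor of overhead behind order-optimality statements.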
Efficient Bayesian Social Learning on Trees
We consider a set of agents who are attempting to iteratively learn the
'state of the world' from their neighbors in a social network. Each agent
initially receives a noisy observation of the true state of the world. The
agents then repeatedly 'vote' and observe the votes of some of their peers,
from which they gain more information. The agents' calculations are Bayesian
and aim to myopically maximize the expected utility at each iteration.
This model, introduced by Gale and Kariv (2003), is a natural approach to
learning on networks. However, it has been criticized, chiefly because the
agents' decision rule appears to become computationally intractable as the
number of iterations advances. For instance, a dynamic programming approach
(part of this work) has running time that is exponentially large in \min(n,
(d-1)^t), where n is the number of agents.
We provide a new algorithm to perform the agents' computations on locally
tree-like graphs. Our algorithm uses the dynamic cavity method to drastically
reduce computational effort. Let d be the maximum degree and t be the iteration
number. The computational effort needed per agent is exponential only in O(td)
(note that the number of possible information sets of a neighbor at time t is
itself exponential in td).
Under appropriate assumptions on the rate of convergence, we deduce that each
agent is only required to spend polylogarithmic (in 1/\eps) computational
effort to approximately learn the true state of the world with error
probability \eps, on regular trees of degree at least five. We provide
numerical and other evidence to justify our assumption on convergence rate.
We extend our results in various directions, including loopy graphs. Our
results indicate efficiency of iterative Bayesian social learning in a wide
range of situations, contrary to widely held beliefs.
Comment: 11 pages, 1 figure, submitted
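The complexity gap the abstract describes can be made concrete with stylized operation counts; the 2^x form below is an illustrative stand-in for "exponential in x", not the paper's exact expressions:

```python
def naive_effort(d, t):
    # naive dynamic programming: effort exponential in min(n, (d - 1)^t);
    # here we show the (d - 1)^t regime
    return 2 ** ((d - 1) ** t)

def cavity_effort(d, t):
    # dynamic cavity method: effort exponential only in O(t * d)
    return 2 ** (t * d)

d = 5  # maximum degree
for t in range(1, 5):
    print(f"t={t}: naive ~ 2^{(d - 1) ** t}, cavity ~ 2^{t * d}")
```

Already at t = 3 the naive exponent (d-1)^t = 64 dwarfs the cavity exponent td = 15, which is what makes the cavity computation tractable for moderately many iterations.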
The size of the core in assignment markets
Assignment markets involve matching with transfers, as in labor markets and
housing markets. We consider a two-sided assignment market with agent types and
stochastic structure similar to models used in empirical studies, and
characterize the size of the core in such markets. Each agent has a randomly
drawn productivity with respect to each type of agent on the other side. The
value generated from a match between a pair of agents is the sum of the two
productivity terms, each of which depends only on the type but not the identity
of one of the agents, and a third deterministic term driven by the pair of
types. We allow the number of agents to grow, keeping the number of agent types
fixed. Let n be the number of agents and K be the number of types on the
side of the market with more types. We find, under reasonable assumptions, that
the relative variation in utility per agent over core outcomes is bounded, up
to polylogarithmic factors, by a quantity that vanishes as n grows. Further,
we show that this bound is tight in the worst case. We also provide a tighter
bound under more restrictive assumptions. Our results provide partial
justification for the typical assumption of a unique core outcome in empirical
studies.
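For intuition, core membership in a small assignment market can be checked straight from the definition: payoffs must be nonnegative, sum to the optimal matching value, and leave no buyer-seller pair able to profitably deviate. A brute-force sketch for a square two-sided market (all names illustrative):

```python
from itertools import permutations

def optimal_value(w):
    # brute-force maximum-weight perfect matching of a small square market,
    # where w[i][j] is the value generated by matching i with j
    n = len(w)
    return max(sum(w[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def in_core(u, v, w, tol=1e-9):
    # (u, v) is a core outcome iff payoffs are nonnegative, sum to the
    # optimal matching value, and no pair can profitably deviate
    n = len(w)
    efficient = abs(sum(u) + sum(v) - optimal_value(w)) <= tol
    stable = all(u[i] + v[j] >= w[i][j] - tol for i in range(n) for j in range(n))
    nonneg = all(p >= -tol for p in list(u) + list(v))
    return efficient and stable and nonneg
```

For w = [[2, 1], [1, 2]], both ((1, 1), (1, 1)) and ((2, 2), (0, 0)) pass the check, so per-agent utility genuinely varies across core outcomes; the result above bounds how large this variation can be in large random markets.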