Smoothed Analysis of Dynamic Networks
We generalize the technique of smoothed analysis to distributed algorithms in
dynamic network models. Whereas standard smoothed analysis studies the impact
of small random perturbations of input values on algorithm performance metrics,
dynamic graph smoothed analysis studies the impact of random perturbations of
the underlying changing network graph topologies. Similar to the original
application of smoothed analysis, our goal is to study whether known strong
lower bounds in dynamic network models are robust or fragile: do they withstand
small (random) perturbations, or do such deviations push the graphs far enough
from a precise pathological instance to enable much better performance? Fragile
lower bounds are likely not relevant for real-world deployment, while robust
lower bounds represent a true difficulty caused by dynamic behavior. We apply
this technique to three standard dynamic network problems with known strong
worst-case lower bounds: random walks, flooding, and aggregation. We prove that
these bounds provide a spectrum of robustness when subjected to
smoothing---some are extremely fragile (random walks), some are moderately
fragile / robust (flooding), and some are extremely robust (aggregation).
Comment: 20 pages
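To make the smoothing idea concrete, here is a toy simulation (my own simplified sketch, not the paper's formal model; `smooth`, `flood_rounds`, and the parameter `k` are illustrative names): each round the adversarial topology is perturbed by adding `k` uniformly random edges, and we count how many rounds flooding needs. On a static line graph the worst case for flooding is linear in `n`, and perturbation can only help, since every smoothed round's edge set contains the adversarial one.

```python
import random

def smooth(edges, n, k, rng):
    """One smoothed round: the adversarial edge set plus up to k
    uniformly random extra edges (a simplified perturbation model)."""
    smoothed = set(edges)
    for _ in range(k):
        u, v = rng.sample(range(n), 2)   # two distinct endpoints
        smoothed.add((min(u, v), max(u, v)))
    return smoothed

def flood_rounds(dynamic_graph, n, source=0):
    """Rounds until a token flooded from `source` reaches all n nodes;
    `dynamic_graph` yields one edge set per synchronous round."""
    informed = {source}
    for rounds, edges in enumerate(dynamic_graph, start=1):
        informed |= {w for (u, v) in edges for w in (u, v)
                     if u in informed or v in informed}
        if len(informed) == n:
            return rounds
    return None  # token never reached everyone

n, k = 64, 4
line = [(i, i + 1) for i in range(n - 1)]   # pathological static topology
rng = random.Random(1)

adversarial = flood_rounds((line for _ in range(10 * n)), n)
smoothed = flood_rounds((smooth(line, n, k, rng) for _ in range(10 * n)), n)
```

Because each smoothed edge set is a superset of the line's edges, the informed set under smoothing dominates the unsmoothed one round by round, so the smoothed round count can never exceed the adversarial one.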
Storage and Search in Dynamic Peer-to-Peer Networks
We study robust and efficient distributed algorithms for searching, storing,
and maintaining data in dynamic Peer-to-Peer (P2P) networks. P2P networks are
highly dynamic networks that experience heavy node churn (i.e., nodes join and
leave the network continuously over time). Our goal is to guarantee, despite
high node churn rate, that a large number of nodes in the network can store,
retrieve, and maintain a large number of data items. Our main contributions are
fast randomized distributed algorithms that guarantee the above with high
probability (whp) even under high adversarial churn:
1. A randomized distributed search algorithm that (whp) guarantees that
searches from as many as $n - o(n)$ nodes ($n$ is the stable network size)
succeed in $O(\log n)$ rounds despite $O(n/\log^{1+\delta} n)$ churn per
round, for any small constant $\delta > 0$. We assume that the churn is
controlled by an oblivious adversary (that has complete knowledge and control
of what nodes join and leave and at what time, but is oblivious to the random
choices made by the algorithm).
2. A storage and maintenance algorithm that guarantees (whp) that data items
can be efficiently stored (with only $\Theta(\log n)$ copies of each data
item) and maintained in a dynamic P2P network with churn rate up to
$O(n/\log^{1+\delta} n)$ per round. Our search algorithm together with our
storage and maintenance algorithm guarantees that as many as $n - o(n)$ nodes
can efficiently store, maintain, and search even under $O(n/\log^{1+\delta} n)$
churn per round. Our algorithms require only polylogarithmic in $n$ bits to
be processed and sent (per round) by each node.
To the best of our knowledge, our algorithms are the first-known,
fully-distributed storage and search algorithms that provably work under highly
dynamic settings (i.e., high churn rates per step).
Comment: to appear at SPAA 201
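A back-of-the-envelope calculation (my own illustration, not the paper's analysis; treating copies as independent is a crude simplification of the adversarial model) shows why logarithmically many copies per item is the natural replication level under heavy churn: doubling the number of copies squares the loss probability, so $\Theta(\log n)$ copies drive it below any fixed inverse polynomial in $n$.

```python
def loss_probability(copies, churn_frac, rounds):
    """Crude estimate of the chance a data item disappears before it is
    re-replicated: each copy independently survives one round with
    probability (1 - churn_frac), and the item is lost only when every
    copy has been churned out.  Independence is an illustrative
    simplification, not the paper's adversarial churn model."""
    p_copy_dead = 1 - (1 - churn_frac) ** rounds
    return p_copy_dead ** copies

# With ~log2(n) copies for n ~ 2**20, 1% churn per round, and 100 rounds
# between maintenance passes, the loss probability is already tiny.
p20 = loss_probability(20, 0.01, 100)
p40 = loss_probability(40, 0.01, 100)   # doubling the copies squares the bound
```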
Tiny Groups Tackle Byzantine Adversaries
A popular technique for tolerating malicious faults in open distributed
systems is to establish small groups of participants, each of which has a
non-faulty majority. These groups are used as building blocks to design
attack-resistant algorithms.
Despite over a decade of active research, current constructions require group
sizes of $O(\log n)$, where $n$ is the number of participants in the system.
This group size is important since communication and state costs scale
polynomially with this parameter. Given the stubbornness of this logarithmic
barrier, a natural question is whether better bounds are possible.
Here, we consider an attacker that controls a constant fraction of the total
computational resources in the system. By leveraging proof-of-work (PoW), we
demonstrate how to reduce the group size exponentially to $O(\log\log n)$ while
maintaining strong security guarantees. This reduction in group size yields a
significant improvement in communication and state costs.
Comment: This work is supported by the National Science Foundation grant CCF
1613772 and a C Spire Research Gift
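The PoW ingredient can be sketched with a generic Bitcoin-style hash puzzle (this is the standard form of proof-of-work, not necessarily the paper's exact construction; `solve_pow` and `verify_pow` are my names): finding a solution costs about $2^{d}$ hash evaluations for difficulty $d$, while checking one costs a single hash.

```python
import hashlib

def solve_pow(node_id, difficulty_bits, max_tries=1_000_000):
    """Search for a nonce whose SHA-256 digest of (node_id, nonce) is
    below a target with `difficulty_bits` leading zero bits.  Expected
    work is roughly 2**difficulty_bits hash evaluations."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_tries):
        digest = hashlib.sha256(f"{node_id}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None

def verify_pow(node_id, nonce, difficulty_bits):
    """Verification costs one hash evaluation."""
    digest = hashlib.sha256(f"{node_id}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

This solve/verify asymmetry is what lets honest nodes bound the rate at which an attacker holding a constant fraction of the computational resources can flood groups with fake identities.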
On-demand Bandwidth and Stability Based Unicast Routing in Mobile Adhoc Networks
Characteristics of mobile ad hoc networks (MANETs) such as the lack of central coordination, dynamic topology, and limited resources pose a challenging problem for quality of service (QoS) routing. Providing an efficient, robust, and low-overhead QoS unicast route from source to destination is a critical issue. Bandwidth and route stability are the most important QoS parameters for applications that require long-duration connections with stringent bandwidth demands, such as multimedia applications. This paper proposes an On-demand Bandwidth and Stability based Unicast Routing scheme (OBSUR) in MANETs that adds QoS features to the existing Dynamic Source Routing (DSR) protocol. The objective of OBSUR is to provide a QoS-satisfying, reliable, and robust route for communicating nodes. The scheme works in the following steps. (1) Each node in the network periodically (at small regular intervals) estimates bandwidth availability, node and link stability, buffer availability, and the stability factor between nodes. (2) Every node constructs a neighbor stability and QoS database, which is used in the route establishment process. (3) The unicast path is constructed using route request and route reply packets with the help of the route information cache. (4) Routes are maintained in case of node mobility and route failures. Simulation results show an improvement in traffic admission ratio, control overhead, packet delivery ratio, end-to-end delay, and throughput compared to Route Stability Based QoS Routing (RSQR) in MANETs.
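To illustrate the kind of admission and selection decision a bandwidth-and-stability route construction step makes, here is a hypothetical sketch (the scoring rule and every name in it are mine for illustration, not OBSUR's actual formulas): a route's usable bandwidth is its bottleneck link, and a route-level stability score multiplies per-link stability factors in [0, 1], so one weak link makes the whole route fragile.

```python
def route_admissible(links, bw_required):
    """Admit a flow only if the bottleneck (minimum) link bandwidth on
    the route meets the requirement.  `links` is a list of
    (bandwidth, stability) pairs, one per hop."""
    return min(bw for bw, _ in links) >= bw_required

def route_stability(links):
    """Route stability as the product of per-link stability factors in
    [0, 1]: a single unstable link dominates the score."""
    score = 1.0
    for _, stability in links:
        score *= stability
    return score

def best_route(candidates, bw_required):
    """Among bandwidth-admissible candidate routes, prefer the most
    stable one (an illustrative selection rule)."""
    admissible = [r for r in candidates if route_admissible(r, bw_required)]
    return max(admissible, key=route_stability) if admissible else None
```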
Distributed Approximation Algorithms for Weighted Shortest Paths
A distributed network is modeled by a graph having $n$ nodes (processors) and
diameter $D$. We study the time complexity of approximating {\em weighted}
(undirected) shortest paths on distributed networks with a {\em
bandwidth restriction} on edges (the standard synchronous \congest model). The
question of whether approximation algorithms help speed up computing shortest
paths (more precisely, distance computation) was raised as early as 2004 by
Elkin (SIGACT News 2004). The unweighted case of this problem is well
understood, while its weighted counterpart is a fundamental problem in the area of
distributed approximation algorithms and remains widely open. We present new
algorithms for computing both single-source shortest paths (\sssp) and
all-pairs shortest paths (\apsp) in the weighted case.
Our main result is an algorithm for \sssp. Previous results are the classic
$O(n)$-time Bellman-Ford algorithm and an $\tilde O(n^{1/2+1/2k}+D)$-time
$(8k\lceil \log(k+1) \rceil - 1)$-approximation algorithm, for any integer
$k \geq 1$, which follows from the result of Lenzen and Patt-Shamir (STOC 2013).
(Note that Lenzen and Patt-Shamir in fact solve a harder problem, and we use
$\tilde O$ to hide the $O(\poly\log n)$ term.) We present an
$\tilde O(n^{1/2}D^{1/4}+D)$-time $(1+o(1))$-approximation algorithm for \sssp. This
algorithm is {\em sublinear-time} as long as $D$ is sublinear, thus yielding a
sublinear-time algorithm with an almost optimal solution. When $D$ is small, our
running time matches the lower bound of $\tilde\Omega(\sqrt{n}+D)$ by Das Sarma
et al. (SICOMP 2012), which holds even when $D = \Theta(\log n)$, up to a
$\poly\log n$ factor.
Comment: Full version of STOC 201
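For reference, the classic $O(n)$-time Bellman-Ford baseline mentioned above is easy to simulate: in each synchronous round every node sends its current distance estimate to its neighbours, and $n-1$ rounds suffice for exact distances. A minimal centralized simulation of that round structure (this is the textbook baseline, not the paper's sublinear-time algorithm):

```python
import math

def bellman_ford_rounds(n, edges, source=0):
    """Simulate synchronous distributed Bellman-Ford on an undirected
    weighted graph: one estimate exchange per round, n-1 rounds total.
    `edges` is a list of (u, v, weight) triples."""
    dist = [math.inf] * n
    dist[source] = 0
    for _ in range(n - 1):
        new = dist[:]                       # all nodes update simultaneously
        for u, v, w in edges:
            new[v] = min(new[v], dist[u] + w)
            new[u] = min(new[u], dist[v] + w)
        dist = new                          # end of one synchronous round
    return dist
```

Distance estimates propagate one hop per round, which is exactly why this baseline needs $\Theta(n)$ rounds on a long path and why sublinear-time algorithms require a fundamentally different approach.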