Exact and Approximate Digraph Bandwidth
In this paper, we introduce a directed variant of the classical Bandwidth problem and study it from the viewpoint of moderately exponential time algorithms, both exactly and approximately. Motivated by the definitions of the directed variants of the classical Cutwidth and Pathwidth problems, we define Digraph Bandwidth as follows. Given a digraph D and an ordering sigma of its vertices, the digraph bandwidth of sigma with respect to D is the maximum value of sigma(v) - sigma(u) over all arcs (u,v) of D going forward along sigma (that is, with sigma(u) < sigma(v)). The Digraph Bandwidth problem takes as input a digraph D and asks for an ordering of minimum digraph bandwidth. The undirected Bandwidth problem easily reduces to Digraph Bandwidth, which immediately implies that Digraph Bandwidth is NP-hard. While an O^*(n!) time algorithm for the problem is trivial, the goal of this paper is to design algorithms for Digraph Bandwidth with running times of the form 2^O(n). In particular, we obtain the following results. Here, n and m denote the number of vertices and arcs of the input digraph D, respectively.
- Digraph Bandwidth can be solved in O^*(3^n * 2^m) time. This result implies a 2^O(n) time algorithm on sparse graphs, such as graphs of bounded average degree.
- Let G be the underlying undirected graph of the input digraph. If the treewidth of G is at most t, then Digraph Bandwidth can be solved in time O^*(2^(n + (t+2) log n)). This result implies a 2^(n+O(sqrt(n) log n)) algorithm for directed planar graphs and, in general, for the class of digraphs whose underlying undirected graph excludes some fixed graph H as a minor.
- Digraph Bandwidth can be solved in min{O^*(4^n * b^n), O^*(4^n * 2^(b log b log n))} time, where b denotes the optimal digraph bandwidth of D. This allows us to deduce a 2^O(n) algorithm in many cases, for example when b <= n/(log^2 n).
- Finally, we give a (Single) Exponential Time Approximation Scheme for Digraph Bandwidth. In particular, we show that for any fixed real epsilon > 0, we can find an ordering whose digraph bandwidth is at most (1+epsilon) times the optimal digraph bandwidth, in time O^*(4^n * (ceil(4/epsilon))^n).
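The objective defined above is straightforward to state in code. The following is a minimal illustration of the definition only (the function and variable names are ours, not the paper's):

```python
def digraph_bandwidth(arcs, sigma):
    """Digraph bandwidth of the ordering sigma: the maximum of
    sigma[v] - sigma[u] over arcs (u, v) that go forward along sigma
    (i.e., sigma[u] < sigma[v]); backward arcs do not contribute."""
    forward = [sigma[v] - sigma[u] for (u, v) in arcs if sigma[u] < sigma[v]]
    return max(forward, default=0)

# Directed cycle 0 -> 1 -> 2 -> 0 under the identity ordering:
arcs = [(0, 1), (1, 2), (2, 0)]
sigma = {0: 0, 1: 1, 2: 2}
print(digraph_bandwidth(arcs, sigma))  # -> 1 (the back-arc (2, 0) is ignored)
```

Note that only forward arcs count, so reversing an ordering can change its value; this asymmetry is what distinguishes the digraph variant from the undirected Bandwidth problem.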
Embedding large subgraphs into dense graphs
What conditions ensure that a graph G contains some given spanning subgraph
H? The most famous examples of results of this kind are probably Dirac's
theorem on Hamilton cycles and Tutte's theorem on perfect matchings. Perfect
matchings are generalized by perfect F-packings, where instead of covering all
the vertices of G by disjoint edges, we want to cover G by disjoint copies of a
(small) graph F. It is unlikely that there is a characterization of all graphs
G which contain a perfect F-packing, so as in the case of Dirac's theorem it
makes sense to study conditions on the minimum degree of G which guarantee a
perfect F-packing.
The Regularity Lemma of Szemerédi and the Blow-up Lemma of Komlós, Sárközy
and Szemerédi have proved to be powerful tools in attacking such problems, and
quite recently several long-standing problems and conjectures in the area have
been solved using them. In this survey, we give an outline of recent progress
(with our main emphasis on F-packings, Hamiltonicity problems and tree
embeddings) and describe some of the methods involved.
Hamilton cycles in sparse robustly expanding digraphs
The notion of robust expansion has played a central role in the solution of
several conjectures involving the packing of Hamilton cycles in graphs and
directed graphs. These and other results usually rely on the fact that every
robustly expanding (di)graph with suitably large minimum degree contains a
Hamilton cycle. Previous proofs of this require Szemerédi's Regularity Lemma
and so this fact can only be applied to dense, sufficiently large robust
expanders. We give a proof that does not use the Regularity Lemma and, indeed,
we can apply our result to suitable sparse robustly expanding digraphs.
Comment: Accepted for publication in The Electronic Journal of Combinatorics.
Hamilton decompositions of regular expanders: applications
In a recent paper, we showed that every sufficiently large regular digraph G
on n vertices whose degree is linear in n and which is a robust outexpander has
a decomposition into edge-disjoint Hamilton cycles. The main consequence of
this theorem is that every regular tournament on n vertices can be decomposed
into (n-1)/2 edge-disjoint Hamilton cycles, whenever n is sufficiently large.
This verified a conjecture of Kelly from 1968. In this paper, we derive a
number of further consequences of our result on robust outexpanders; the main
ones are the following: (i) an undirected analogue of our result on robust
outexpanders; (ii) best possible bounds on the size of an optimal packing of
edge-disjoint Hamilton cycles in a graph of minimum degree d, for a large range
of values of d; (iii) a similar result for digraphs of given minimum
semidegree; (iv) an approximate version of a conjecture of Nash-Williams on
Hamilton decompositions of dense regular graphs; (v) the observation that dense
quasi-random graphs are robust outexpanders; (vi) a verification of the 'very
dense' case of a conjecture of Frieze and Krivelevich on packing edge-disjoint
Hamilton cycles in random graphs; (vii) a proof of a conjecture of Erdős on the
size of an optimal packing of edge-disjoint Hamilton cycles in a random
tournament.
Comment: final version, to appear in J. Combinatorial Theory
Distributed Optimization via Gradient Descent with Event-Triggered Zooming over Quantized Communication
In this paper, we study unconstrained strongly convex distributed optimization
problems, in which the exchange of information in the network is
captured by a directed graph topology over digital channels that have limited
capacity (and hence information should be quantized). Distributed methods in
which nodes use quantized communication yield a solution in the proximity of
the optimal solution, hence reaching an error floor that depends on the
quantization level used; the finer the quantization, the lower the error floor.
However, it is not possible to determine in advance the optimal quantization
level that ensures specific performance guarantees (such as achieving an error
floor below a predefined threshold). Choosing a very small quantization level
that would guarantee the desired performance requires information packets of
very large size, which is not desirable (it could increase the probability of
packet losses, increase delays, etc.) and is often not feasible due to the
limited capacity of the available channels. In order to obtain a
communication-efficient distributed solution and a sufficiently close proximity
to the optimal solution, we propose a quantized distributed optimization
algorithm that converges in a finite number of steps and is able to adjust the
quantization level accordingly. The proposed solution uses a finite-time
distributed optimization protocol to find a solution to the problem for a given
quantization level in a finite number of steps, and keeps refining the
quantization level until the difference between two successive solutions
(obtained with different quantization levels) falls below a pre-specified
threshold.
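The outer refinement loop described above can be sketched as follows. This is a toy, centralized simulation under our own assumptions: a scalar objective and a simple uniform quantizer stand in for the paper's finite-time distributed protocol, and all names are illustrative.

```python
import numpy as np

def quantize(x, level):
    """Uniform quantizer with step size `level`."""
    return np.round(x / level) * level

def refine_until_close(solve_at_level, level0, threshold, shrink=0.5):
    """Outer loop from the abstract: solve at a given quantization level,
    then keep refining the level until two successive solutions differ
    by less than `threshold`."""
    level = level0
    x_prev = solve_at_level(level)
    while True:
        level *= shrink
        x = solve_at_level(level)
        if np.linalg.norm(x - x_prev) < threshold:
            return x, level
        x_prev = x

def solve_at_level(level, steps=200, lr=0.1):
    """Toy stand-in for the inner finite-time protocol: minimize
    (x - 1)^2 by gradient descent with a quantized gradient; coarser
    levels stall at a higher error floor."""
    x = np.zeros(1)
    for _ in range(steps):
        g = quantize(2 * (x - 1.0), level)
        x = x - lr * g
    return x

x, level = refine_until_close(solve_at_level, level0=1.0, threshold=1e-3)
print(x, level)  # x ends close to the optimum 1 at a refined level
```

The toy makes the error-floor phenomenon visible: at coarse levels the quantized gradient rounds to zero well before the optimum, and each refinement halves the floor until successive solutions agree to within the threshold.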
Finite-Time Distributed Optimization with Quantized Gradient Descent
In this paper, we consider the unconstrained distributed optimization
problem, in which the exchange of information in the network is captured by a
directed graph topology, and thus nodes can send information to their
out-neighbors only. Additionally, the communication channels among the nodes
have limited bandwidth; to alleviate this limitation, quantized messages should
be exchanged among the nodes. For solving the distributed optimization problem,
we combine a distributed quantized consensus algorithm (which requires the
nodes to exchange quantized messages and converges in a finite number of steps)
with a gradient descent method. Specifically, at every optimization step, each
node performs a gradient descent step (i.e., subtracts the scaled gradient from
its current estimate), and then performs a finite-time calculation of the
quantized average of every node's estimate in the network. As a consequence,
this algorithm approximately mimics the centralized gradient descent algorithm.
The performance of the proposed algorithm is demonstrated via simple
illustrative examples.
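The alternation described above (a local gradient step, then network-wide agreement on the quantized average) can be sketched as follows. An exactly computed quantized average stands in for the paper's finite-time consensus protocol, and all names are our own:

```python
import numpy as np

def quantized_average(values, level=0.01):
    """Stand-in for the finite-time quantized consensus step: every node
    ends up holding the same quantized network average (the paper's
    protocol reaches this in finitely many message exchanges; here we
    simply compute it directly)."""
    avg = np.mean(values, axis=0)
    return np.round(avg / level) * level

def distributed_gd(grads, x0, lr=0.1, iters=100):
    """Each node takes a local gradient step, then all nodes agree on the
    quantized average of their estimates -- mimicking centralized GD."""
    n = len(grads)
    xs = [np.array(x0, float) for _ in range(n)]
    for _ in range(iters):
        xs = [x - lr * g(x) for x, g in zip(xs, grads)]   # local gradient steps
        avg = quantized_average(np.stack(xs))             # finite-time agreement
        xs = [avg.copy() for _ in range(n)]
    return xs[0]

# Toy problem: node i minimizes (x - t_i)^2, so the network-wide optimum
# of the sum is the mean of the targets (1.5 here).
targets = [0.0, 1.0, 2.0, 3.0]
grads = [lambda x, t=t: 2 * (x - t) for t in targets]
x = distributed_gd(grads, x0=[0.0])
```

As in the abstract, the iterates track centralized gradient descent up to a quantization-induced error floor: with level 0.01 the final estimate lands within a few hundredths of the optimum 1.5.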
Algorithms for Constructing Overlay Networks For Live Streaming
We present a polynomial time approximation algorithm for constructing an
overlay multicast network for streaming live media events over the Internet.
The class of overlay networks constructed by our algorithm includes networks
used by Akamai Technologies to deliver live media events to a global audience
with high fidelity. We construct networks consisting of three stages of nodes.
The nodes in the first stage are the entry points that act as sources for the
live streams. Each source forwards each of its streams to one or more nodes in
the second stage that are called reflectors. A reflector can split an incoming
stream into multiple identical outgoing streams, which are then sent on to
nodes in the third and final stage that act as sinks and are located in edge
networks near end-users. As the packets in a stream travel from one stage to
the next, some of them may be lost. A sink combines the packets from multiple
instances of the same stream (by reordering packets and discarding duplicates)
to form a single instance of the stream with minimal loss. Our primary
contribution is an algorithm that constructs an overlay network that provably
satisfies capacity and reliability constraints to within a constant factor of
optimal, and minimizes cost to within a logarithmic factor of optimal. Further,
in the common case where only the transmission costs are minimized, we show
that our algorithm produces a solution that has cost within a factor of 2 of
optimal. We also implement our algorithm and evaluate it on realistic traces
derived from Akamai's live streaming network. Our empirical results show that
our algorithm can be used to efficiently construct large-scale overlay networks
in practice with near-optimal cost.
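The sink-side merging step described above can be sketched in a few lines. This is a simplification that assumes packets carry sequence numbers; the function and variable names are illustrative, not from the paper:

```python
def merge_stream_copies(copies):
    """Sketch of the sink's merge: given several lossy copies of the same
    packet stream (lists of (sequence_number, payload) pairs), reorder
    and de-duplicate to recover one instance with minimal loss."""
    seen = {}
    for copy in copies:
        for seq, payload in copy:
            seen.setdefault(seq, payload)   # keep the first copy of each packet
    return [(seq, seen[seq]) for seq in sorted(seen)]

# Two reflector paths, each losing different packets of the same stream:
path_a = [(1, "p1"), (3, "p3"), (4, "p4")]
path_b = [(2, "p2"), (3, "p3"), (5, "p5")]
print(merge_stream_copies([path_a, path_b]))
# -> [(1, 'p1'), (2, 'p2'), (3, 'p3'), (4, 'p4'), (5, 'p5')]
```

The example shows why routing a stream through multiple reflectors improves reliability: a packet is lost at the sink only if every path drops it.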
Online Distributed Learning with Quantized Finite-Time Coordination
In this paper we consider online distributed learning problems. Online
distributed learning refers to the process of training learning models on
distributed data sources. In our setting a set of agents need to cooperatively
train a learning model from streaming data. Unlike federated
learning, the proposed approach does not rely on a central server but only on
peer-to-peer communications among the agents. This approach is often used in
scenarios where data cannot be moved to a centralized location due to privacy,
security, or cost reasons. In order to overcome the absence of a central
server, we propose a distributed algorithm that relies on a quantized,
finite-time coordination protocol to aggregate the locally trained models.
Furthermore, our algorithm allows for the use of stochastic gradients during
local training. Stochastic gradients are computed using a randomly sampled
subset of the local training data, which makes the proposed algorithm more
efficient and scalable than traditional gradient descent. We analyze
the performance of the proposed algorithm in terms of the mean distance
from the online solution. Finally, we present numerical results for a logistic
regression task.
Comment: To be presented at IEEE CDC'2
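A toy version of one round of the scheme described above might look as follows. A plain average stands in for the quantized finite-time coordination protocol, the data and model are a minimal logistic-regression setup, and every name here is illustrative rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(w, X, y, batch=8):
    """Logistic-regression stochastic gradient on a random minibatch of a
    node's local data (a sketch of the local training step on a stream)."""
    idx = rng.choice(len(y), size=batch)
    Xb, yb = X[idx], y[idx]
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return Xb.T @ (p - yb) / batch

def online_round(models, data, lr=0.5):
    """One round: each agent takes a local SGD step, then the models are
    aggregated peer-to-peer (here, a plain average stands in for the
    quantized finite-time coordination protocol)."""
    models = [w - lr * stochastic_grad(w, X, y) for w, (X, y) in zip(models, data)]
    avg = np.mean(models, axis=0)
    return [avg.copy() for _ in models]

# Toy setup: 3 agents, each with 1-D data labeled by the sign of the feature.
data = []
for _ in range(3):
    X = rng.normal(size=(100, 1))
    y = (X[:, 0] > 0).astype(float)
    data.append((X, y))
models = [np.zeros(1) for _ in range(3)]
for _ in range(200):
    models = online_round(models, data)
```

After the aggregation step every agent holds the same model, so the network behaves like a single learner trained on all streams, which is the sense in which the algorithm tracks the online solution.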