Algorithms for Constructing Overlay Networks for Live Streaming
We present a polynomial time approximation algorithm for constructing an
overlay multicast network for streaming live media events over the Internet.
The class of overlay networks constructed by our algorithm include networks
used by Akamai Technologies to deliver live media events to a global audience
with high fidelity. We construct networks consisting of three stages of nodes.
The nodes in the first stage are the entry points that act as sources for the
live streams. Each source forwards each of its streams to one or more nodes in
the second stage that are called reflectors. A reflector can split an incoming
stream into multiple identical outgoing streams, which are then sent on to
nodes in the third and final stage that act as sinks and are located in edge
networks near end-users. As the packets in a stream travel from one stage to
the next, some of them may be lost. A sink combines the packets from multiple
instances of the same stream (by reordering packets and discarding duplicates)
to form a single instance of the stream with minimal loss. Our primary
contribution is an algorithm that constructs an overlay network that provably
satisfies capacity and reliability constraints to within a constant factor of
optimal, and minimizes cost to within a logarithmic factor of optimal. Further,
in the common case where only transmission costs are minimized, we show
that our algorithm produces a solution that has cost within a factor of 2 of
optimal. We also implement our algorithm and evaluate it on realistic traces
derived from Akamai's live streaming network. Our empirical results show that
our algorithm can be used to efficiently construct large-scale overlay networks
in practice with near-optimal cost.
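The sink's merge step described above, reordering packets and discarding duplicates across redundant copies of the same stream, can be sketched as follows. This is a minimal illustration, not Akamai's implementation; the function name and the (seq_no, payload) packet representation are assumptions made for the example.

```python
import heapq

def merge_streams(instances):
    """Merge several lossy copies of the same packet stream.

    Each instance is a list of (seq_no, payload) pairs in sequence
    order. A packet survives if it is present in any copy; duplicates
    are discarded, and the output is ordered by sequence number.
    """
    seen = set()
    merged = []
    # heapq.merge interleaves the pre-sorted copies by sequence number
    for seq_no, payload in heapq.merge(*instances):
        if seq_no not in seen:
            seen.add(seq_no)
            merged.append((seq_no, payload))
    return merged

# Two copies of a 5-packet stream, each with a different packet lost:
a = [(0, "p0"), (1, "p1"), (3, "p3"), (4, "p4")]  # lost packet 2
b = [(0, "p0"), (2, "p2"), (3, "p3")]             # lost packets 1 and 4
recovered = merge_streams([a, b])  # all five packets recovered
```

The combined stream suffers a loss only when a packet is missing from every incoming copy, which is why fanning a stream through multiple reflectors improves fidelity.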
Exact two-terminal reliability of some directed networks
The calculation of network reliability in a probabilistic context has long
been an issue of practical and academic importance. Conventional approaches
(determination of bounds, sums of disjoint products algorithms, Monte Carlo
evaluations, studies of the reliability polynomials, etc.) only provide
approximations when the network's size increases, even when nodes do not fail
and all edges have the same reliability p. We consider here a directed, generic
graph of arbitrary size mimicking real-life long-haul communication networks,
and give the exact, analytical solution for the two-terminal reliability. This
solution involves a product of transfer matrices, in which individual
reliabilities of edges and nodes are taken into account. The special case of
identical edge and node reliabilities (p and rho, respectively) is addressed.
We consider a case study based on a commonly-used configuration, and assess the
influence of the edges being directed (or not) on various measures of network
performance. While the two-terminal reliability, the failure frequency and the
failure rate of the connection are quite similar, the locations of complex
zeros of the two-terminal reliability polynomials exhibit strong differences,
and various structure transitions at specific values of rho. The present work
could be extended to provide a catalog of exactly solvable networks in terms of
reliability, which could be useful as building blocks for new and improved
bounds, as well as benchmarks, in the general case.
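The transfer-matrix solution above is tailored to the paper's recursive graph family. For intuition about the quantity being computed, two-terminal reliability with perfect nodes and i.i.d. edge reliability p can be obtained for any small directed graph by brute-force enumeration of edge states. This sketch is exponential in the edge count, so it is a sanity check rather than the paper's method:

```python
from itertools import product

def reaches(edges, s, t):
    """Depth-first search: is t reachable from s over the given edges?"""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def two_terminal_reliability(edges, s, t, p):
    """Exact s-t reliability: sum the probability of every edge-state
    configuration in which t remains reachable from s.  Nodes are
    assumed perfect and each edge is up independently with prob. p."""
    m = len(edges)
    total = 0.0
    for states in product([0, 1], repeat=m):
        up = [e for e, st in zip(edges, states) if st]
        k = sum(states)
        if reaches(up, s, t):
            total += p**k * (1 - p)**(m - k)
    return total

# Diamond network: two disjoint directed s-t paths of length 2.
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
r = two_terminal_reliability(edges, "s", "t", 0.9)  # 1 - (1 - 0.9**2)**2
```

Because the result is a polynomial in p, evaluating it at symbolic p (e.g. with a computer-algebra system) recovers the reliability polynomial whose complex zeros the abstract discusses.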
Computational complexity of impact size estimation for spreading processes on networks
Spreading processes on networks are often analyzed to understand how the outcome of the process (e.g. the number of affected nodes) depends on structural properties of the underlying network. Most available results are ensemble averages over certain interesting graph classes such as random graphs or graphs with particular degree distributions. In this paper, we focus instead on determining the expected spreading size and the probability of large spreadings for a single (but arbitrary) given network, and study the computational complexity of these problems using reductions from well-known network reliability problems. We show that computing both quantities exactly is intractable, but that the expected spreading size can be efficiently approximated with Monte Carlo sampling. When nodes are weighted to reflect their importance, the problem becomes as hard as the s-t reliability problem, for which no efficient randomized approximation scheme is currently known. Finally, we give a formal complexity-theoretic argument why there is most likely no randomized constant-factor approximation for the probability of large spreadings, even in the unweighted case. A hybrid Monte Carlo sampling algorithm is proposed that resorts to specialized s-t reliability algorithms for accurately estimating the infection probability of those nodes that are rarely affected by the spreading process.
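The Monte Carlo approximation of the expected spreading size mentioned above can be sketched as a bond-percolation simulation: in each sample, every directed edge transmits independently with probability p, and we count the nodes reachable from the seed. The function name, the transmission model, and the fixed RNG seed are assumptions made for this sketch, not the paper's exact algorithm.

```python
import random

def expected_spread_mc(edges, seed_node, p, samples=20000, rng=None):
    """Monte Carlo estimate of the expected spreading size.

    Each sample realizes a bond percolation: a directed edge (u, v)
    transmits with probability p, and the spreading size is the number
    of nodes reachable from seed_node over transmitting edges.
    """
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    total = 0
    for _ in range(samples):
        stack, infected = [seed_node], {seed_node}
        while stack:
            u = stack.pop()
            # Each edge is examined at most once per sample, so drawing
            # the coin flip lazily here is equivalent to percolation.
            for v in adj.get(u, []):
                if v not in infected and rng.random() < p:
                    infected.add(v)
                    stack.append(v)
        total += len(infected)
    return total / samples

# Directed chain 0 -> 1 -> 2 with p = 0.5:
# exact expected size is 1 + p + p^2 = 1.75.
est = expected_spread_mc([(0, 1), (1, 2)], 0, 0.5)
```

The estimator converges at the usual O(1/sqrt(samples)) Monte Carlo rate for the expected size; as the abstract notes, the hard part is the tail, i.e. rarely infected nodes and the probability of large spreadings, which plain sampling does not capture efficiently.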