Algorithms for Constructing Overlay Networks For Live Streaming
We present a polynomial time approximation algorithm for constructing an
overlay multicast network for streaming live media events over the Internet.
The class of overlay networks constructed by our algorithm includes networks
used by Akamai Technologies to deliver live media events to a global audience
with high fidelity. We construct networks consisting of three stages of nodes.
The nodes in the first stage are the entry points that act as sources for the
live streams. Each source forwards each of its streams to one or more nodes in
the second stage that are called reflectors. A reflector can split an incoming
stream into multiple identical outgoing streams, which are then sent on to
nodes in the third and final stage that act as sinks and are located in edge
networks near end-users. As the packets in a stream travel from one stage to
the next, some of them may be lost. A sink combines the packets from multiple
instances of the same stream (by reordering packets and discarding duplicates)
to form a single instance of the stream with minimal loss. Our primary
contribution is an algorithm that constructs an overlay network that provably
satisfies capacity and reliability constraints to within a constant factor of
optimal, and minimizes cost to within a logarithmic factor of optimal. Further,
in the common case where only the transmission costs are minimized, we show
that our algorithm produces a solution that has cost within a factor of 2 of
optimal. We also implement our algorithm and evaluate it on realistic traces
derived from Akamai's live streaming network. Our empirical results show that
our algorithm can be used to efficiently construct large-scale overlay networks
in practice with near-optimal cost.
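The reliability gain from merging stream instances at a sink can be sketched with a small calculation: if a sink receives several copies of a stream, each over a path that drops a given packet independently with some probability, a packet is missing from the merged stream only if every copy drops it. A minimal sketch (the loss probabilities are illustrative, not figures from the paper):

```python
from functools import reduce

def combined_loss(path_losses):
    """Probability a packet is missing from the merged stream:
    it must be dropped on every path independently."""
    return reduce(lambda acc, p: acc * p, path_losses, 1.0)

# A sink merging two instances with 2% and 5% packet loss
# sees roughly 0.1% loss instead of 5%:
merged = combined_loss([0.02, 0.05])  # ≈ 0.001
```

This is why the algorithm's reliability constraint can be met by fanning a stream out through more than one reflector.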
WiLiTV: A Low-Cost Wireless Framework for Live TV Services
With the evolution of HDTV and Ultra HDTV, the bandwidth requirement for
IP-based TV content is rapidly increasing. Consumers demand uninterrupted
service with a high Quality of Experience (QoE). Service providers are
constantly trying to differentiate themselves by innovating new ways of
distributing content more efficiently with lower cost and higher penetration.
In this work, we propose a cost-efficient wireless framework (WiLiTV) for
delivering live TV services, consisting of a mix of wireless access
technologies (e.g. Satellite, WiFi and LTE overlay links). In the proposed
architecture, live TV content is injected into the network at a few residential
locations using satellite dishes. The content is then further distributed to
other homes using a house-to-house WiFi network or via an overlay LTE network.
Our problem is to construct an optimal TV distribution network with the minimum
number of satellite injection points, while preserving the highest QoE, for
different neighborhood densities. We evaluate the framework using realistic
time-varying demand patterns and a diverse set of home location data. Our study
demonstrates that the architecture requires 75-90% fewer satellite injection
points, compared to traditional architectures. Furthermore, we show that most
cost savings can be obtained using simple and practical relay routing
solutions.
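The injection-point minimization can be viewed as a covering problem: choose the fewest homes to equip with satellite dishes so that every home is within relay reach. A greedy set-cover sketch of that view (the home coordinates and reachability predicate below are illustrative assumptions, not the paper's model):

```python
def min_injection_points(homes, can_relay):
    """Greedy set cover: repeatedly pick the candidate injection
    point whose relay range covers the most uncovered homes.
    can_relay(a, b) says whether content injected at home a can
    reach home b over the WiFi/LTE relay network."""
    uncovered = set(homes)
    chosen = []
    while uncovered:
        best = max(homes,
                   key=lambda h: len({u for u in uncovered if can_relay(h, u)}))
        covered = {u for u in uncovered if can_relay(best, u)}
        if not covered:
            break  # remaining homes unreachable from any candidate
        chosen.append(best)
        uncovered -= covered
    return chosen

# Toy example: homes on a line, relay reach of 1 unit.
homes = [0, 1, 2, 5, 6]
reach = lambda a, b: abs(a - b) <= 1
picked = min_injection_points(homes, reach)  # [1, 5]
```

Greedy set cover carries a logarithmic approximation guarantee, which is consistent with the paper's finding that simple relay-routing heuristics capture most of the cost savings.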
A Novel Network Coded Parallel Transmission Framework for High-Speed Ethernet
Parallel transmission, as defined in high-speed Ethernet standards, enables
the use of less expensive optoelectronics and offers backwards compatibility with
legacy Optical Transport Network (OTN) infrastructure. However, optimal
parallel transmission does not scale to large networks, as it requires
computationally expensive multipath routing algorithms to minimize differential
delay, and thus the required buffer size, optimize traffic splitting ratio, and
ensure frame synchronization. In this paper, we propose a novel framework for
high-speed Ethernet, which we refer to as network coded parallel transmission,
capable of effective buffer management and frame synchronization without the
need for complex multipath algorithms in the OTN layer. We show that using
network coding can reduce the delay caused by packet reordering at the
receiver, thus requiring a smaller overall buffer size, while improving the
network throughput. We design the framework in full compliance with high-speed
Ethernet standards specified in IEEE802.3ba and present solutions for network
encoding, data structure of coded parallel transmission, buffer management and
decoding at the receiver side. The proposed network coded parallel transmission
framework is simple to implement and represents a potential major breakthrough
in the system design of future high-speed Ethernet.
Comment: 6 pages, 8 figures, Submitted to Globecom201
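The core trade the abstract describes, coding across lanes instead of buffering for strict reordering, can be illustrated with the simplest possible code: one XOR parity lane over n data lanes, so the receiver reconstructs a late or lost lane's frame instead of waiting for it. This is a toy sketch of the principle only; the paper's framework uses a full network-coding design, not this specific parity scheme:

```python
def encode_parity(lane_frames):
    """Given one frame (bytes) per data lane, append a parity
    frame that is the bytewise XOR of all data lanes."""
    parity = bytes(len(lane_frames[0]))
    for f in lane_frames:
        parity = bytes(a ^ b for a, b in zip(parity, f))
    return lane_frames + [parity]

def recover(received, missing_idx):
    """Reconstruct the single missing lane by XOR-ing every
    frame that did arrive (data lanes plus the parity lane)."""
    out = bytes(len(next(f for f in received if f is not None)))
    for i, f in enumerate(received):
        if i != missing_idx:
            out = bytes(a ^ b for a, b in zip(out, f))
    return out

lanes = [b"AAAA", b"BBBB", b"CCCC"]
tx = encode_parity(lanes)
rx = tx[:1] + [None] + tx[2:]     # lane 1 is late or lost
assert recover(rx, 1) == b"BBBB"  # reconstructed without waiting
```

Because the receiver no longer has to wait for the slowest lane, the buffer sized for worst-case differential delay can shrink, which is the effect the paper quantifies.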
Particle swarm optimization for the Steiner tree in graph and delay-constrained multicast routing problems
This paper presents the first investigation into applying a particle swarm optimization (PSO) algorithm to both the Steiner tree problem and the delay-constrained multicast routing problem. Steiner tree problems, being the underlying models of many applications, have received significant research attention within the meta-heuristics community. The literature on the application of meta-heuristics to multicast routing problems is less extensive but includes several promising approaches. Many interesting research issues still remain to be investigated, for example, the inclusion of different constraints, such as delay bounds, when finding multicast trees with minimum cost. In this paper, we develop a novel PSO algorithm based on the jumping PSO (JPSO) algorithm recently developed by Moreno-Perez et al. (Proc. of the 7th Metaheuristics International Conference, 2007), and also propose two novel local search heuristics within our JPSO framework. A path-replacement operator is used in particle moves to improve the position of a particle with regard to the structure of the tree. We test the performance of our JPSO algorithm, and the effect of the integrated local search heuristics, with an extensive set of experiments on multicast routing benchmark problems and Steiner tree problems from the OR-Library. The experimental results show the superior performance of the proposed JPSO algorithm over a number of other state-of-the-art approaches.
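In a discrete PSO of this kind, a particle can be encoded as the set of optional Steiner vertices to include; its fitness is the cost of a minimum spanning tree over the terminals plus the selected vertices, and a "jump" copies part of an attractor's selection. A minimal sketch of that encoding (the toy graph and the simplified jump move are illustrative assumptions, not the authors' exact JPSO):

```python
import random

def mst_cost(nodes, w):
    """Prim's algorithm on the complete graph induced on `nodes`,
    with edge weights w[frozenset((a, b))]; returns total cost."""
    nodes = list(nodes)
    in_tree, rest = {nodes[0]}, set(nodes[1:])
    cost = 0.0
    while rest:
        a, b = min(((a, b) for a in in_tree for b in rest),
                   key=lambda e: w[frozenset(e)])
        cost += w[frozenset((a, b))]
        in_tree.add(b)
        rest.remove(b)
    return cost

def fitness(selection, terminals, w):
    """Steiner-tree surrogate: MST over terminals + selected nodes."""
    return mst_cost(set(terminals) | selection, w)

def jump(selection, attractor, candidates, beta=0.5, rng=random):
    """Move a particle toward an attractor: for each optional vertex,
    copy the attractor's choice with probability beta."""
    return {v for v in candidates
            if (v in attractor if rng.random() < beta else v in selection)}

# Toy instance: terminals {0, 1, 2}, optional Steiner vertex 3 in the centre.
w = {frozenset(e): c for e, c in
     [((0, 1), 10), ((0, 2), 10), ((1, 2), 10),
      ((0, 3), 6), ((1, 3), 6), ((2, 3), 6)]}
fitness(set(), [0, 1, 2], w)   # 20: MST on terminals only
fitness({3}, [0, 1, 2], w)     # 18: star through the Steiner node
```

Selecting the central vertex lowers the tree cost, which is exactly the signal the swarm search exploits.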
Performance Modelling and Optimisation of Multi-hop Networks
A major challenge in the design of large-scale networks is to predict and optimise the
total time and energy consumption required to deliver a packet from a source node to a
destination node. Examples of such complex networks include wireless ad hoc and sensor
networks which need to deal with the effects of node mobility, routing inaccuracies, higher
packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the
computational limitations of the nodes. They also include more reliable communication
environments, such as wired networks, that are susceptible to random failures, security
threats and malicious behaviours which compromise their quality of service (QoS) guarantees.
In such networks, packets traverse a number of hops that cannot be determined
in advance and encounter non-homogeneous network conditions that have been largely
ignored in the literature. This thesis examines analytical properties of packet travel in
large networks and investigates the implications of some packet coding techniques on both
QoS and resource utilisation.
Specifically, we use a mixed jump and diffusion model to represent packet traversal
through large networks. The model accounts for network non-homogeneity regarding
routing and the loss rate that a packet experiences as it passes successive segments of a
source to destination route. A mixed analytical-numerical method is developed to compute
the average packet travel time and the energy it consumes. The model is able to capture
the effects of increased loss rate in areas remote from the source and destination, variable
rate of advancement towards destination over the route, as well as of defending against
malicious packets within a certain distance from the destination. We then consider sending
multiple coded packets that follow independent paths to the destination node so as to
mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium
and obtain the time-dependent properties of the packet’s travel process, allowing us to
compare the merits and limitations of coding, both in terms of delivery times and energy
efficiency. Finally, we propose models that can assist in the analysis and optimisation
of the performance of inter-flow network coding (NC). We analyse two queueing models
for a router that carries out NC, in addition to its standard packet routing function. The
approach is extended to the study of multiple hops, which leads to an optimisation problem
that characterises the optimal time that packets should be held back in a router, waiting
for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
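The hold-back trade-off in the final part can be sketched numerically: holding a packet for time T adds waiting delay, but if a codable partner packet arrives within T (modelled here as a Poisson arrival with rate lam), two packets leave in one coded transmission. The model below is a deliberately crude simplification for illustration, not the thesis's queueing analysis:

```python
import math

def expected_delay(T, lam, tx_saving):
    """Toy relative cost of holding for up to T: with probability
    1 - exp(-lam*T) a partner arrives (truncated-exponential mean
    wait) and coding saves `tx_saving` of transmission delay;
    otherwise we waited T for nothing. Negative values mean the
    coding saving outweighed the hold cost."""
    p = 1.0 - math.exp(-lam * T)
    mean_wait = (1.0 / lam) - (T * math.exp(-lam * T)) / p if p > 0 else 0.0
    return p * (mean_wait - tx_saving) + (1.0 - p) * T

def best_hold_time(lam, tx_saving, grid=None):
    """Grid search for the hold time minimising the toy cost."""
    grid = grid or [i / 100.0 for i in range(0, 501)]
    return min(grid, key=lambda T: expected_delay(T, lam, tx_saving))
```

In this toy model, holding pays off only when lam * tx_saving > 1, i.e. when coding partners arrive often enough relative to the saving per coded transmission; the optimisation problem in the thesis characterises the analogous threshold rigorously.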
DecVi: Adaptive Video Conferencing on Open Peer-to-Peer Networks
Video conferencing has become the preferred way of interacting virtually.
Current video conferencing applications, like Zoom, Teams or WebEx, are
centralized, cloud-based platforms whose performance crucially depends on the
proximity of clients to their data centers. Clients from low-income countries
are particularly affected as most data centers from major cloud providers are
located in economically advanced nations. Centralized conferencing applications
also suffer from occasional outages and are embattled by serious privacy
violation allegations. In recent years, decentralized video conferencing
applications built over p2p networks and incentivized through blockchain are
becoming popular. A key characteristic of these networks is their openness:
anyone can host a media server on the network and gain reward for providing
service. Strong economic incentives, combined with a low barrier to entry,
extend server coverage even to remote regions of the world. This openness,
however, also leads to a security problem: a server may
obfuscate its true location in order to gain an unfair business advantage. In
this paper, we consider the problem of multicast tree construction for video
conferencing sessions in open p2p conferencing applications. We propose DecVi,
a decentralized multicast tree construction protocol that adaptively discovers
efficient tree structures based on an exploration-exploitation framework. DecVi
is motivated by the combinatorial multi-armed bandit problem and uses a
succinct learning model to compute effective actions. Despite operating in a
multi-agent setting with each server having only limited knowledge of the
global network and without cooperation among servers, experimentally we show
DecVi achieves similar quality-of-experience compared to a centralized globally
optimal algorithm while achieving higher reliability and flexibility.
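The exploration-exploitation framing can be illustrated with a plain UCB1 bandit in which each arm is a candidate parent (upstream media server) for a node in the multicast tree. DecVi's combinatorial, multi-agent protocol is considerably richer, so treat this as a sketch of the underlying idea only:

```python
import math

class UCB1:
    """Each arm is a candidate parent server; the reward is the
    observed session quality in [0, 1] (e.g. a normalised QoE score)."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.totals = [0.0] * n_arms

    def select(self):
        for a, c in enumerate(self.counts):
            if c == 0:
                return a  # explore: try every parent once first
        t = sum(self.counts)
        return max(range(len(self.counts)),
                   key=lambda a: self.totals[a] / self.counts[a]
                   + math.sqrt(2 * math.log(t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.totals[arm] += reward

# Two candidate parents with (hypothetical) qualities 0.9 and 0.1:
bandit = UCB1(2)
quality = [0.9, 0.1]
for _ in range(200):
    a = bandit.select()
    bandit.update(a, quality[a])
# The better parent is selected far more often, yet the weaker one
# is still probed occasionally, guarding against stale estimates.
```

The combinatorial aspect in DecVi comes from the fact that each server's choice reshapes the whole tree, so the arms are not independent as they are in this sketch.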
Variation principle and the universal metric of dynamic routing
In this paper, variational principles from theoretical physics are considered
as a way to describe the process of routing in computer networks. The total
traffic currently served on all hops of the route is chosen as the quantity to
minimize. A universal metric function is found for dynamic routing that takes
the packet-loss effect into account. An attempt is made to derive the metrics
of the most popular dynamic routing protocols, such as RIP, OSPF and EIGRP,
from this universal metric.
Comment: 4 pages, 3 figures, 14 equations
Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Replicating or caching popular content in memories distributed across the
network is a technique to reduce peak network loads. Conventionally, the main
performance gain of this caching was thought to result from making part of the
requested data available closer to end users. Instead, we recently showed that
a much more significant gain can be achieved by using caches to create
coded-multicasting opportunities, even for users with different demands,
through coding across data streams. These coded-multicasting opportunities are
enabled by careful content overlap at the various caches in the network,
created by a central coordinating server.
In many scenarios, such a central coordinating server may not be available,
raising the question of whether this multicasting gain can still be achieved in a more
decentralized setting. In this paper, we propose an efficient caching scheme,
in which the content placement is performed in a decentralized manner. In other
words, no coordination is required for the content placement. Despite this lack
of coordination, the proposed scheme is nevertheless able to create
coded-multicasting opportunities and achieves a rate close to the optimal
centralized scheme.
Comment: To appear in IEEE/ACM Transactions on Networking
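The gap between decentralized coded caching and conventional uncoded delivery can be sketched with the standard rate expressions for this setting: with K users, N files, and a cache of M files per user, uncoded delivery must send K(1 - M/N) files' worth of data, while random placement plus coded multicast delivery needs roughly (N/M)(1 - M/N)(1 - (1 - M/N)^K). These are the widely used formulas for this model; the paper's exact statement may differ in constants:

```python
def uncoded_rate(K, N, M):
    """Load with conventional caching: each of the K users still
    needs the uncached 1 - M/N fraction of its requested file."""
    return K * (1 - M / N)

def decentralized_rate(K, N, M):
    """Decentralized coded-caching rate: random placement,
    coded multicast delivery."""
    f = 1 - M / N                 # uncached fraction per user
    return (N / M) * f * (1 - f ** K)

K, N, M = 20, 20, 5               # 20 users, 20 files, cache of 5 files
uncoded_rate(K, N, M)             # 15.0 file transmissions
decentralized_rate(K, N, M)       # ≈ 2.99 file transmissions
```

Note that the coded rate is bounded by N/M regardless of the number of users, which is the source of the order-optimal scaling in the title.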