Multicommodity Multicast, Wireless and Fast
We study rumor spreading in graphs, specifically the multicommodity multicast problem under the wireless model: given source-destination pairs in a graph, one must find the fastest schedule that transfers information from each source to its corresponding destination. Under the wireless model, nodes can transmit to any subset of their neighbors in synchronous time steps, as long as each node either transmits or receives from at most one transmitter during the same time step. We improve the approximation ratio for this problem from O~(n^(2/3)) to O~(n^((1/2) + epsilon)) on n-node graphs. We also design an algorithm that satisfies p given demand pairs in O(OPT + p) steps, where OPT is the length of an optimal schedule, by reducing the problem to the well-studied packet routing problem. In the case where the underlying graph is an n-node tree, we improve the previously best-known approximation ratio of O((log n)/(log log n)) to 3. One consequence of our proof is a simple constructive rule for optimal broadcasting in a tree under the widely studied telephone model.
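The tree-broadcast remark above has a classical counterpart: under the telephone model (each informed node may call one neighbor per round), the optimal broadcast time of a rooted tree follows a simple greedy rule: each node forwards to its uninformed children in non-increasing order of their subtrees' broadcast times. A minimal sketch of that rule (the function name and adjacency-dict representation are my own, not from the paper):

```python
def broadcast_time(tree, root):
    """Minimum number of rounds to broadcast from `root` in a tree under
    the telephone model.  Greedy rule: call children in non-increasing
    order of their own subtree broadcast times."""
    def bt(v, parent):
        times = sorted((bt(c, v) for c in tree[v] if c != parent),
                       reverse=True)
        # the child ranked i (1-based) is called in round i and finishes
        # its subtree i + times[i-1] rounds after v was informed
        return max((i + t for i, t in enumerate(times, 1)), default=0)
    return bt(root, None)
```

For example, broadcasting from the center of a 3-leaf star takes 3 rounds, since the leaves must be called one per round.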
Maximum Production Of Transmission Messages Rate For Service Discovery Protocols
Minimizing the number of dropped User Datagram Protocol (UDP) messages in a network is regarded as a challenge by researchers. This issue poses serious problems for many protocols, particularly those that depend on sending messages as part of their strategy, such as service discovery protocols. This paper proposes and evaluates an algorithm to predict the minimum period of time required between two or more consecutive messages and to suggest minimum queue sizes for the routers, in order to manage the traffic and minimise the number of dropped messages caused by congestion, queue overflow, or both. The algorithm was applied to the Universal Plug and Play (UPnP) protocol using the ns2 simulator and tested with the routers connected in two configurations: centralized and decentralized. The message length and the bandwidth of the links among the routers were taken into consideration. The results show a clear reduction in the number of dropped messages among the routers.
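The quantities the abstract mentions can be illustrated with a back-of-envelope model (this is my own sketch, not the paper's algorithm): a message of L bytes takes L*8/bandwidth seconds to serialize onto a link, so spacing consecutive messages by at least that much keeps the router queue from growing, and a synchronized burst needs roughly one queue slot per waiting message.

```python
def min_message_period(msg_len_bytes, link_bw_bps, n_flows=1):
    """Minimum spacing (seconds) between consecutive messages so a
    router's output queue does not grow: each message needs
    msg_len*8/bandwidth seconds of link time, and with n_flows
    sources sharing the link the gap must cover all of them."""
    service_time = msg_len_bytes * 8 / link_bw_bps
    return n_flows * service_time

def min_queue_size(burst_len, n_flows=1):
    """Queue slots needed to absorb a synchronized burst of
    `burst_len` back-to-back messages from each of n_flows sources
    (one message is in service while the rest wait)."""
    return max(n_flows * burst_len - 1, 0)
```

For instance, a 1500-byte UPnP message on a 1 Mbit/s link gives a minimum period of 12 ms; real predictions would also need to account for cross traffic and propagation delay.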
QuickCast: Fast and Efficient Inter-Datacenter Transfers using Forwarding Tree Cohorts
Large inter-datacenter transfers are crucial for cloud service efficiency and
are increasingly used by organizations that have dedicated wide area networks
between datacenters. A recent work uses multicast forwarding trees to reduce
the bandwidth needs and improve completion times of point-to-multipoint
transfers. Using a single forwarding tree per transfer, however, leads to poor
performance because the slowest receiver dictates the completion time for all
receivers. Using multiple forwarding trees per transfer alleviates this
concern--the average receiver could finish early; however, if done naively,
bandwidth usage would also increase and it is apriori unclear how best to
partition receivers, how to construct the multiple trees and how to determine
the rate and schedule of flows on these trees. This paper presents QuickCast, a
first solution to these problems. Using simulations on real-world network
topologies, we see that QuickCast can speed up the average receiver's
completion time substantially while using only marginally more bandwidth; further, the completion time for all receivers also improves at high loads. Comment: [Extended Version] Accepted for presentation at IEEE INFOCOM 2018, Honolulu, HI.
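One way to picture the partitioning question the abstract raises is the following toy sketch (my own illustration, not QuickCast's actual algorithm): split receivers into a "fast" and a "slow" cohort, here crudely by BFS depth as a stand-in for achievable rate, and build one shortest-path forwarding tree per cohort so slow receivers no longer drag down fast ones.

```python
from collections import deque

def bfs_tree(adj, src, targets):
    """Shortest-path (BFS) forwarding tree from src covering targets:
    returns the set of edges on the src->target shortest paths."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    edges = set()
    for t in targets:
        while parent[t] is not None:
            edges.add((parent[t], t))
            t = parent[t]
    return edges

def partition_receivers(adj, src, receivers):
    """Split receivers into two cohorts by BFS depth (a crude proxy
    for receiver speed) and build one forwarding tree per cohort."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    receivers = sorted(receivers, key=lambda r: dist[r])
    mid = len(receivers) // 2 or 1
    fast, slow = receivers[:mid], receivers[mid:]
    return [bfs_tree(adj, src, c) for c in (fast, slow) if c]
```

The real system must additionally decide flow rates and schedules on the trees and keep the total bandwidth of the cohort trees close to that of a single tree.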
Enforcing efficient equilibria in network design games via subsidies
The efficient design of networks has been an important engineering task that
involves challenging combinatorial optimization problems. Typically, a network
designer has to select among several alternatives which links to establish so
that the resulting network satisfies a given set of connectivity requirements
and the cost of establishing the network links is as low as possible. The
Minimum Spanning Tree problem, which is well-understood, is a nice example.
In this paper, we consider the natural scenario in which the connectivity
requirements are posed by selfish users who have agreed to share the cost of
the network to be established according to a well-defined rule. The design
proposed by the network designer should now be consistent not only with the
connectivity requirements but also with the selfishness of the users.
Essentially, the users are players in a so-called network design game and the
network designer has to propose a design that is an equilibrium for this game.
As it is usually the case when selfishness comes into play, such equilibria may
be suboptimal. In this paper, we consider the following question: can the
network designer enforce particular designs as equilibria or guarantee that
efficient designs are consistent with users' selfishness by appropriately
subsidizing some of the network links? In an attempt to understand this
question, we formulate corresponding optimization problems and present positive
and negative results. Comment: 30 pages, 7 figures.
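The subsidy question can be made concrete on the smallest possible instance (a two-link toy of my own, under the standard fair cost-sharing rule, not an example from the paper): with n players splitting each chosen link's cost equally, "everyone on the expensive link" can itself be an equilibrium, and a subsidy on the cheap link destroys it.

```python
def min_subsidy_to_break_bad_eq(c_cheap, c_exp, n):
    """Fair cost sharing: n players each pick link A (cost c_cheap) or
    link B (cost c_exp); a link's cost is split equally among its users.
    'All on B' is a Nash equilibrium whenever c_exp/n <= c_cheap,
    because a lone deviator would pay all of c_cheap.  Subsidizing A
    by s makes that deviation profitable once c_cheap - s < c_exp/n;
    return the threshold subsidy (0 if the bad profile is already
    unstable)."""
    if c_exp / n > c_cheap:
        return 0.0
    return c_cheap - c_exp / n
```

For example, with c_cheap = 2, c_exp = 4, and n = 4, each player pays 1 on the expensive link and any subsidy above 1.0 on the cheap link breaks the inefficient equilibrium. The general problems studied in the paper ask the same question on arbitrary networks, where choosing which links to subsidize becomes a nontrivial optimization.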
Necessary and sufficient conditions for optimal flow control in multirate multicast networks
The authors consider the optimal flow control problem in multirate multicast networks, where receivers of the same multicast group can receive service at different rates with different QoS. The objective is to find fair transmission rates that maximise the total receiver utility under the capacity constraints of the links. They first establish necessary and sufficient conditions for the optimal solution to the problem, and then derive a new optimal flow control strategy using the Lagrange multiplier method. As in the unicast case, the basic algorithm consists of a link algorithm that updates the link price and a receiver algorithm that adapts the transmission rate according to the link prices along its path. In particular, if a group contains only one receiver and thus becomes unicast, the algorithm reduces to the authors' previously proposed unicast algorithm.
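The link-price/receiver-rate decomposition described above is the classic dual-decomposition pattern, which can be sketched as follows (my own simplified sketch with log utilities and unicast-style link loads; the multirate multicast case differs in how a link aggregates the rates of downstream receivers of the same group):

```python
def price_based_flow_control(paths, capacity, weights,
                             gamma=0.01, iters=5000):
    """Dual decomposition sketch: each link l keeps a price p[l]; each
    receiver r with utility w_r*log(x_r) sets its rate to
    x_r = w_r / (sum of prices on its path); each link raises its
    price when oversubscribed and lowers it otherwise (projected to
    stay nonnegative)."""
    prices = {l: 1.0 for l in capacity}
    for _ in range(iters):
        rates = {r: weights[r] / max(sum(prices[l] for l in path), 1e-9)
                 for r, path in paths.items()}
        for l in capacity:
            load = sum(rates[r] for r, path in paths.items() if l in path)
            prices[l] = max(0.0, prices[l] + gamma * (load - capacity[l]))
    return rates, prices
```

On a single unit-capacity link shared by two equal-weight receivers, the iteration converges to the proportionally fair split of 0.5 each, with the link price settling at 2.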
Approximate Pure Nash Equilibria in Weighted Congestion Games: Existence, Efficient Computation, and Structure
We consider structural and algorithmic questions related to the Nash dynamics
of weighted congestion games. In weighted congestion games with linear latency
functions, the existence of (pure Nash) equilibria is guaranteed by potential
function arguments. Unfortunately, this proof of existence is inefficient and
computing equilibria in such games is a {\sf PLS}-hard problem. The situation
gets worse when superlinear latency functions come into play; in this case, the
Nash dynamics of the game may contain cycles and equilibria may not even exist.
Given these obstacles, we consider approximate equilibria as alternative
solution concepts. Do such equilibria exist? And if so, can we compute them
efficiently?
We provide positive answers to both questions for weighted congestion games with polynomial latency functions by exploiting an "approximation" of such games by a new class of potential games that we call Ψ-games. This allows us to show that these games have d!-approximate equilibria, where d is the maximum degree of the latency functions. Our main technical contribution is an efficient algorithm for computing O(1)-approximate equilibria when d is a constant. For games with linear latency functions, the approximation guarantee is 3+ε for arbitrarily small ε > 0; for latency functions with maximum degree d, it is d^(2d+o(d)). The running time is polynomial in the number of bits in the representation of the game and 1/ε. As a byproduct of our techniques, we also show the following structural statement for weighted congestion games with polynomial latency functions of maximum degree d: polynomially long sequences of best-response moves from any initial state to a d!-approximate equilibrium exist and can be efficiently identified in such games as long as d is constant. Comment: 31 pages.
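The notion of an approximate equilibrium reached by best-response moves can be illustrated with a small sketch (my own, not the paper's algorithm, which additionally bounds the length of such sequences): repeatedly let any player improve its cost by more than a (1+eps) factor; when no such move remains, the state is a (1+eps)-approximate pure Nash equilibrium by definition.

```python
def approx_equilibrium(latency, strategies, players, eps=0.1):
    """Best-response dynamics in a weighted congestion game.
    latency[r] maps a resource's total load to its per-unit cost;
    strategies[p] lists the resource tuples available to player p;
    players[p] is p's weight.  Stops when no player can reduce its
    cost by more than a factor of 1+eps, i.e. at a (1+eps)-approximate
    pure Nash equilibrium.  (Termination on large superlinear
    instances is exactly what needs proof; this toy just iterates.)"""
    def cost(player, choice, state):
        w = players[player]
        total = 0.0
        for r in choice:
            load = w + sum(players[q] for q, s in state.items()
                           if q != player and r in s)
            total += w * latency[r](load)
        return total
    state = {p: strategies[p][0] for p in players}
    improved = True
    while improved:
        improved = False
        for p in players:
            cur = cost(p, state[p], state)
            best = min(strategies[p], key=lambda s: cost(p, s, state))
            if cost(p, best, state) * (1 + eps) < cur:
                state[p] = best
                improved = True
    return state
```

With two unit-weight players, two identical linear resources, and both players starting on the same resource, one improving move spreads them out and the dynamics stop.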
Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Replicating or caching popular content in memories distributed across the
network is a technique to reduce peak network loads. Conventionally, the main
performance gain of this caching was thought to result from making part of the
requested data available closer to end users. Instead, we recently showed that
a much more significant gain can be achieved by using caches to create
coded-multicasting opportunities, even for users with different demands,
through coding across data streams. These coded-multicasting opportunities are
enabled by careful content overlap at the various caches in the network,
created by a central coordinating server.
In many scenarios, such a central coordinating server may not be available,
raising the question if this multicasting gain can still be achieved in a more
decentralized setting. In this paper, we propose an efficient caching scheme,
in which the content placement is performed in a decentralized manner. In other
words, no coordination is required for the content placement. Despite this lack
of coordination, the proposed scheme is nevertheless able to create
coded-multicasting opportunities and achieves a rate close to the optimal
centralized scheme. Comment: To appear in IEEE/ACM Transactions on Networking.
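A toy two-user version shows where the uncoordinated coded-multicast gain comes from (my own illustration with bits abstracted as positions; the paper's scheme handles arbitrary numbers of users and files): each user independently caches a random fraction of every file, and the server can XOR a bit user 0 needs but user 1 has cached with a bit user 1 needs but user 0 has cached, serving both with one transmission.

```python
import random

def decentralized_caching_demo(files, cache_frac=0.5, seed=0):
    """Toy 2-user decentralized coded caching: each user independently
    caches a random cache_frac fraction of every file's bit positions
    (no coordination).  User 0 requests file 'A', user 1 requests 'B'.
    An XOR of (an A-bit user 0 needs that user 1 cached) with (a B-bit
    user 1 needs that user 0 cached) is decodable by both; remaining
    bits go uncoded.  Returns transmission counts."""
    rng = random.Random(seed)
    n = len(files['A'])  # toy assumes equal-length files
    cache = {u: {(f, i) for f in files for i in range(n)
                 if rng.random() < cache_frac} for u in (0, 1)}
    need0 = [i for i in range(n) if ('A', i) not in cache[0]]
    need1 = [i for i in range(n) if ('B', i) not in cache[1]]
    # bits one user needs that happen to sit in the *other* user's cache
    coded0 = [i for i in need0 if ('A', i) in cache[1]]
    coded1 = [i for i in need1 if ('B', i) in cache[0]]
    n_coded = min(len(coded0), len(coded1))  # each XOR serves both users
    n_uncoded = (len(need0) - n_coded) + (len(need1) - n_coded)
    return {'coded': n_coded, 'uncoded': n_uncoded,
            'naive': len(need0) + len(need1)}
```

Every coded transmission replaces two uncoded ones, so the total transmission count drops below the naive baseline whenever the randomly placed caches overlap with the other user's demand, which happens with high probability for large files.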