Redundancy Scheduling with Locally Stable Compatibility Graphs
Redundancy scheduling is a popular concept to improve performance in
parallel-server systems. In the baseline scenario any job can be handled
equally well by any server, and is replicated to a fixed number of servers
selected uniformly at random. Quite often, however, there may be heterogeneity
in job characteristics or server capabilities, and jobs can only be replicated
to specific servers because of affinity relations or compatibility constraints.
In order to capture such situations, we consider a scenario where jobs of
various types are replicated to different subsets of servers as prescribed by a
general compatibility graph. We exploit a product-form stationary distribution
and weak local stability conditions to establish a state space collapse in
heavy traffic. In this limiting regime, the parallel-server system with
graph-based redundancy scheduling operates as a multi-class single-server
system, achieving full resource pooling and exhibiting strong insensitivity to
the underlying compatibility constraints.
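To make the model concrete, the following is a minimal discrete-event sketch of redundancy scheduling under a compatibility graph: each job is replicated to every server compatible with its type, servers work FCFS with exponential services, and the first replica to finish cancels the others (cancel-on-completion). The graph, rates, and all numeric parameters below are illustrative assumptions, not values from the paper.

```python
import heapq
import random
from collections import deque

random.seed(1)

# Toy instance (our assumption, not from the paper): 3 job types,
# 4 servers, compatibility graph given as type -> compatible servers.
COMPAT = {0: (0, 1), 1: (1, 2), 2: (2, 3)}
LAM = {0: 0.5, 1: 0.5, 2: 0.5}   # per-type Poisson arrival rates
MU = 1.0                          # exponential service rate at each server
HORIZON = 50_000.0

queues = {s: deque() for s in range(4)}   # FCFS queue of job ids per server
serving = {}                              # server -> (job, token) in service
done = set()                              # jobs whose first replica finished
events, seq, next_job = [], 0, 0

def push(t, kind, payload):
    global seq
    heapq.heappush(events, (t, seq, kind, payload))
    seq += 1

def start_next(s, t):
    """Serve the first queued replica at s whose job is still unfinished."""
    while queues[s]:
        job = queues[s].popleft()
        if job not in done:
            token = object()   # lets us ignore completions of cancelled work
            serving[s] = (job, token)
            push(t + random.expovariate(MU), "dep", (s, job, token))
            return
    serving.pop(s, None)

for typ, rate in LAM.items():
    push(random.expovariate(rate), "arr", typ)

n_done, t = 0, 0.0
while events and t < HORIZON:
    t, _, kind, payload = heapq.heappop(events)
    if kind == "arr":
        typ = payload
        job, next_job = next_job, next_job + 1
        for s in COMPAT[typ]:             # replicate to all compatible servers
            queues[s].append(job)
            if s not in serving:
                start_next(s, t)
        push(t + random.expovariate(LAM[typ]), "arr", typ)
    else:
        s, job, token = payload
        if serving.get(s) != (job, token):
            continue                      # stale completion: replica cancelled
        done.add(job)
        n_done += 1
        start_next(s, t)
        # cancel-on-completion: any server still working on another
        # replica of this job drops it and moves to its next replica
        for s2, cur in list(serving.items()):
            if cur[0] == job:
                start_next(s2, t)

print(f"jobs completed: {n_done}, throughput ~ {n_done / t:.3f}")
```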
Adaptive Replication in Distributed Content Delivery Networks
We address the problem of content replication in large distributed content
delivery networks, composed of a data center assisted by many small servers
with limited capabilities and located at the edge of the network. The objective
is to optimize the placement of contents on the servers so as to offload the
data center as much as possible. We model the system formed by the small servers
as a loss network, each loss corresponding to a request to the data center.
Based on large system / storage behavior, we obtain an asymptotic formula for
the optimal replication of contents, and we propose adaptive schemes related to
those encountered in cache networks but reacting here to loss events, together
with faster algorithms that generate virtual events at a higher rate while
keeping the same target replication. We show through simulations that our
adaptive schemes significantly outperform standard replication strategies, both in terms of loss
rates and adaptation speed.
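As a rough illustration of a loss-driven adaptive scheme, the sketch below simulates small unit-capacity caches and, on each loss, adds a replica of the lost content while evicting an idle copy. The eviction rule (least recently requested content), the Zipf popularity, and all numeric parameters are our assumptions for illustration, not the paper's algorithm.

```python
import heapq
import random

random.seed(0)

# Minimal sketch (our reading, not the paper's exact scheme): N edge
# servers each hold C unit-capacity copies; a request for content c is
# *lost* to the data center when every copy of c is busy, and each loss
# triggers the rule "add a replica of the lost content, evicting the
# idle copy whose content was requested longest ago".
N, C, M = 20, 5, 50                          # servers, copies/server, contents
LAM, MU, HORIZON = 25.0, 1.0, 5_000.0        # request rate, service rate
weights = [1.0 / (r + 1) for r in range(M)]  # Zipf(1) popularity (assumed)

copies = [[random.randrange(M) for _ in range(C)] for _ in range(N)]
busy = [[False] * C for _ in range(N)]
last_req = [0.0] * M
events, losses, requests, t = [], 0, 0, 0.0

def free_copy(c):
    slots = [(s, i) for s in range(N) for i in range(C)
             if copies[s][i] == c and not busy[s][i]]
    return random.choice(slots) if slots else None

while t < HORIZON:
    t += random.expovariate(LAM)             # next request arrival
    while events and events[0][0] <= t:      # release copies that finished
        _, s, i = heapq.heappop(events)
        busy[s][i] = False
    c = random.choices(range(M), weights)[0]
    requests += 1
    last_req[c] = t
    slot = free_copy(c)
    if slot is None:
        losses += 1                          # request goes to the data center
        # adaptation on loss: overwrite the idle copy whose content was
        # requested longest ago with a fresh copy of c
        victims = [(last_req[copies[s][i]], s, i)
                   for s in range(N) for i in range(C) if not busy[s][i]]
        if victims:
            _, s, i = min(victims)
            copies[s][i] = c
    else:
        s, i = slot
        busy[s][i] = True
        heapq.heappush(events, (t + random.expovariate(MU), s, i))

print(f"loss rate ~ {losses / requests:.3f}")
```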
Prioritized Random MAC Optimization via Graph-based Analysis
Motivated by the analogy between successive interference cancellation and
iterative belief-propagation on erasure channels, irregular repetition slotted
ALOHA (IRSA) strategies have received a lot of attention in the design of
medium access control protocols. IRSA schemes have mostly been analyzed in
theoretical scenarios with homogeneous sources, where they are shown to
substantially improve the system performance compared to classical slotted
ALOHA protocols. In this work, we consider generic systems where sources in
different importance classes compete for a common channel. We propose a new
prioritized IRSA algorithm and derive the probability to correctly resolve
collisions for data from each source class. We then make use of our theoretical
analysis to formulate a new optimization problem for selecting the transmission
strategies of heterogeneous sources. We optimize both the replication
probability per class and the source rate per class, in such a way that the
overall system utility is maximized. We then propose a heuristic-based
algorithm for the selection of the transmission strategy, which is built on
intrinsic characteristics of the iterative decoding methods adopted for
recovering from collisions. Experimental results validate the accuracy of the
theoretical study and show the gain of well-chosen prioritized transmission
strategies for the transmission of data from heterogeneous classes over shared
wireless channels.
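The collision-resolution analysis alluded to above can be sketched with a standard IRSA density-evolution recursion extended to two priority classes: a slot resolves a replica once all its other replicas, pooled over both classes, are resolved. The degree distributions and per-class loads below are illustrative assumptions rather than the paper's optimized values.

```python
import math

# Density-evolution sketch for IRSA with two priority classes.  The
# degree distributions and loads below are illustrative assumptions,
# not the optimized values from the paper.
LAMBDA = {                        # node-perspective degree distributions
    "high": {3: 1.0},             # high-priority users: always 3 replicas
    "low":  {2: 0.8, 3: 0.2},     # low-priority users: mostly 2 replicas
}
G = {"high": 0.3, "low": 0.4}     # offered load (users per slot) per class

def dpoly(dist, x):               # Lambda'(x) = sum_d d * Lambda_d * x^(d-1)
    return sum(d * w * x ** (d - 1) for d, w in dist.items())

def edge_poly(dist, x):           # lambda(x) = Lambda'(x) / Lambda'(1)
    return dpoly(dist, x) / dpoly(dist, 1.0)

# p[k]: probability that a class-k replica is still unresolved.
p = {k: 1.0 for k in LAMBDA}
for _ in range(1000):
    # a slot resolves a replica iff all its other replicas -- Poisson
    # thinned over BOTH classes -- are already resolved
    q = 1.0 - math.exp(-sum(G[k] * dpoly(LAMBDA[k], 1.0) * p[k] for k in p))
    p = {k: edge_poly(LAMBDA[k], q) for k in p}

for k, dist in LAMBDA.items():    # a packet is lost iff all replicas are
    loss = sum(w * q ** d for d, w in dist.items())
    print(f"{k:>4}: unresolved-packet probability ~ {loss:.4f}")
```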
Optimal Content Replication and Request Matching in Large Caching Systems
We consider models of content delivery networks in which the servers are
constrained by two main resources: memory and bandwidth. In such systems, the
throughput crucially depends on how contents are replicated across servers and
how the requests of specific contents are matched to servers storing those
contents. In this paper, we first formulate the problem of computing the
optimal replication policy which, when combined with the optimal matching policy,
maximizes the throughput of the caching system in the stationary regime. It is
shown that computing the optimal replication policy for a given system is an
NP-hard problem. A greedy replication scheme is proposed and shown to provide
a constant-factor approximation guarantee. We then propose
a simple randomized matching scheme which avoids interrupting the service of
ongoing requests through the re-assignment or repacking of existing requests
required under the optimal matching policy. The dynamics of the caching
system are analyzed under the combination of the proposed replication and matching
schemes. We study a limiting regime, where the number of servers and the
arrival rates of the contents are scaled proportionally, and show that the
proposed policies achieve asymptotic optimality. Extensive simulation results
are presented to evaluate the performance of different policies and study the
behavior of the caching system under different service time distributions of
the requests.
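To convey the flavor of a greedy replication rule, here is a toy sketch in which each stored copy contributes one unit of service bandwidth, so content c is served at rate min(demand[c], copies[c]); the scheme repeatedly adds the copy with the largest marginal gain until each server's memory is full. This objective and service model are simplifying assumptions on our part, not the formulation from the paper.

```python
# Toy greedy replication sketch.  The service model -- each stored copy
# contributes one unit of bandwidth, so content c is served at rate
# min(demand[c], copies[c]) -- is our simplifying assumption, not the
# paper's formulation.
M, SERVERS, MEM = 6, 4, 3                  # contents, servers, slots/server
demand = [5.0, 3.0, 2.0, 1.0, 0.5, 0.25]   # request rate per content

copies = [0] * M                           # replicas placed so far
placement = [[] for _ in range(SERVERS)]

def gain(c):
    """Marginal served rate from adding one more copy of content c."""
    return min(demand[c], copies[c] + 1.0) - min(demand[c], float(copies[c]))

for s in range(SERVERS):
    for _ in range(MEM):
        # among contents this server does not yet store, add the one
        # with the largest marginal gain
        best = max((c for c in range(M) if c not in placement[s]), key=gain)
        placement[s].append(best)
        copies[best] += 1

print("placement per server:", placement)
print("replicas per content:", copies)
```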