Adaptive Replication in Distributed Content Delivery Networks
We address the problem of content replication in large distributed content
delivery networks, composed of a data center assisted by many small servers
with limited capabilities and located at the edge of the network. The objective
is to optimize the placement of contents on the servers so as to offload the
data center as much as possible. We model the system constituted by the small servers
as a loss network, each loss corresponding to a request to the data center.
Based on large system / storage behavior, we obtain an asymptotic formula for
the optimal replication of contents and propose adaptive schemes related to
those encountered in cache networks but reacting here to loss events, and
faster algorithms that generate virtual events at a higher rate while keeping
the same target replication. We show through simulations that our adaptive
schemes significantly outperform standard replication strategies in terms of
both loss rates and adaptation speed.
Comment: 10 pages, 5 figures
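The loss-reactive adaptation described above can be sketched in a minimal form: on each loss event (a request that had to be forwarded to the data center), a replica of the missed content is installed on some edge server, evicting another content if that server is full. The function name, the data layout, and the uniformly random server/eviction choices below are illustrative assumptions, not the paper's exact scheme.

```python
import random

def adapt_on_loss(replicas, capacities, lost_content):
    """On a loss event, add a replica of the missed content to a server that
    does not already hold it; if that server is full, evict another content.

    replicas:   dict server -> set of contents currently stored
    capacities: dict server -> storage capacity (number of contents)
    """
    # Servers that could usefully receive one more replica of this content.
    candidates = [s for s in capacities if lost_content not in replicas[s]]
    if not candidates:
        return replicas
    server = random.choice(candidates)
    if len(replicas[server]) >= capacities[server]:
        # Make room by evicting a (here: arbitrary) stored content.
        victim = random.choice(sorted(replicas[server]))
        replicas[server].remove(victim)
    replicas[server].add(lost_content)
    return replicas
```

The paper's faster variants would additionally generate virtual loss events at a higher rate, driving the same update rule toward the same target replication more quickly.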
Bootstrapping the Long Tail in Peer to Peer Systems
We describe an efficient incentive mechanism for P2P systems that generates a
wide diversity of content offerings while responding adaptively to customer
demand. Files are served and paid for through a parimutuel market similar to
that commonly used for betting in horse races. An analysis of the performance
of such a system shows that there exists an equilibrium with a long tail in the
distribution of content offerings, which guarantees the real-time provision of
any content regardless of its popularity.
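The parimutuel settlement underlying this mechanism can be illustrated with a toy payout function (the names and data layout are our own, not the paper's): all stakes form a single pool, and after the outcome is decided the pool, minus an optional rake, is split among backers of the winning outcome in proportion to their stakes.

```python
def parimutuel_payouts(stakes, winner, rake=0.0):
    """Split the post-rake pool among backers of the winning outcome,
    proportionally to their stakes, as in horse-race betting.

    stakes: dict outcome -> dict backer -> amount staked
    """
    pool = sum(sum(bets.values()) for bets in stakes.values()) * (1 - rake)
    winning_bets = stakes.get(winner, {})
    total_winning = sum(winning_bets.values())
    if total_winning == 0:
        return {}  # nobody backed the winner; no payouts in this sketch
    return {backer: pool * amount / total_winning
            for backer, amount in winning_bets.items()}
```

For example, if one backer stakes 10 on content A and another stakes 30 on content B, a win for A pays the full pool of 40 to A's sole backer.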
Mobility-Aware Edge Caching for Connected Cars
Content caching at the edge of 5G networks is an emerging and critical feature to support the thirst for content of future connected cars. Yet the densification of 5G cells, the finite edge storage capacity, and the need for content availability while driving motivate the development of smart edge caching strategies adapted to the mobility characteristics of connected cars. In this paper, we propose a Mobility-Aware Probabilistic (MAP) caching scheme, which optimally caches content at the edge nodes where connected vehicles are most likely to require it. Unlike blind popularity-based decisions, the probabilistic caching used by MAP considers
vehicular trajectory predictions as well as content service time at edge nodes. We evaluate our approach on realistic mobility datasets and against popularity-based edge caching approaches. Our MAP edge caching scheme provides up to 40% higher content availability, 70% higher cache throughput, and 40% lower backhaul overhead compared to popularity-based strategies.
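A minimal sketch of such mobility-aware probabilistic weighting might look as follows. This is our own simplification, not the paper's MAP formulation: each content is weighted by the predicted probability that a vehicle requesting it passes the node, discounted when the node's service time exceeds the vehicle's expected dwell time, and the weights are normalized into caching probabilities.

```python
def map_cache_probabilities(contents, visit_prob, service_time, dwell_time):
    """Caching probability per content at one edge node (illustrative).

    visit_prob:   dict content -> predicted prob. a requesting vehicle passes here
    service_time: dict content -> time the node needs to serve the content
    dwell_time:   expected time a vehicle stays in this node's coverage
    """
    weights = {}
    for c in contents:
        # Full credit if the node can serve the content within the dwell
        # time; otherwise discount by the fraction that can be served.
        feasible = 1.0 if service_time[c] <= dwell_time \
            else dwell_time / service_time[c]
        weights[c] = visit_prob[c] * feasible
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()} if total else weights
```

A content that is popular on predicted trajectories but too large to deliver during a typical pass is thus down-weighted rather than cached blindly.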
Video Traffic Flow Analysis in Distributed System during Interactive Session
Cost-effective, smooth multimedia streaming to remote customers through a distributed video-on-demand architecture has been one of the most challenging research issues of the past decade. A hierarchical system design is used in the distributed network to serve more requesting users. The distributed hierarchical network system contains all the local and remote multimedia storage servers and provides continuous availability of the data stream to requesting customers. In this work, we propose a novel data-stream handling methodology that reduces connection failures and delivers a smooth multimedia stream to remote customers. The proposed session-based single-user bandwidth requirement model captures the bandwidth required by interactive operations such as pause, slow motion, rewind, frame skipping, and fast playback over a fixed number of frames. The proposed session-based optimum storage-finding algorithm reduces the search hop count toward the remote storage data server. Modeling and simulation results show improved performance over the distributed system architecture. This work presents a novel bandwidth requirement model for interactive sessions and quantifies the trade-off between communication and storage costs for different system resource configurations.
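A toy version of such a session-based bandwidth model (our own simplification, not the paper's formulation) scales the stream's base rate by the playback speed of each interactive operation, with pauses consuming nothing:

```python
def session_bandwidth(base_rate, operations):
    """Per-session bandwidth trace for VCR-like interactive operations.

    base_rate:  bandwidth of normal-speed playback (e.g. in Mbps)
    operations: list of (kind, speed, duration) tuples, where kind is
                'play', 'ff', 'rewind', or 'pause'
    Returns a list of (kind, required_rate, duration) segments.
    """
    trace = []
    for kind, speed, duration in operations:
        # Pause needs no delivery; other operations scale the frame rate.
        rate = 0.0 if kind == 'pause' else base_rate * abs(speed)
        trace.append((kind, rate, duration))
    return trace
```

Summing rate times duration over a trace gives the session's total delivered volume, which is the quantity a per-session admission or storage-placement decision would budget against.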
Optimal Content Replication and Request Matching in Large Caching Systems
We consider models of content delivery networks in which the servers are
constrained by two main resources: memory and bandwidth. In such systems, the
throughput crucially depends on how contents are replicated across servers and
how the requests of specific contents are matched to servers storing those
contents. In this paper, we first formulate the problem of computing the
optimal replication policy which if combined with the optimal matching policy
maximizes the throughput of the caching system in the stationary regime. It is
shown that computing the optimal replication policy for a given system is an
NP-hard problem. A greedy replication scheme is proposed and it is shown that
the scheme provides a constant factor approximation guarantee. We then propose
a simple randomized matching scheme which avoids the problem of interruption in
service of the ongoing requests due to re-assignment or repacking of the
existing requests under the optimal matching policy. The dynamics of the
caching system are analyzed under the combination of the proposed replication
and matching schemes. We study a limiting regime, where the number of servers and the
arrival rates of the contents are scaled proportionally, and show that the
proposed policies achieve asymptotic optimality. Extensive simulation results
are presented to evaluate the performance of different policies and to study
the behavior of the caching system under different service time distributions
of the requests.
Comment: INFOCOM 201
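The greedy replication idea can be sketched as follows: while memory remains, add one more replica of whichever content yields the largest marginal throughput gain. The oracle `gain(c, k)` (the extra throughput from the k-th replica of content c) and all names here are illustrative assumptions, not the paper's construction or its approximation proof.

```python
def greedy_replication(memory_budget, contents, gain):
    """Greedily spend `memory_budget` replica slots, one at a time, on the
    content with the largest marginal throughput gain."""
    replicas = {c: 0 for c in contents}
    for _ in range(memory_budget):
        best = max(contents, key=lambda c: gain(c, replicas[c] + 1))
        replicas[best] += 1
    return replicas
```

With a diminishing-returns gain (each additional replica of a content helps less than the previous one), this greedy loop is the classic setting in which such schemes admit constant-factor approximation guarantees.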
Bipartite graph structures for efficient balancing of heterogeneous loads
This paper considers large-scale distributed content service platforms, such as peer-to-peer video-on-demand systems. Such systems feature two basic resources, namely storage and bandwidth. Their efficiency critically depends on two factors: (i) content replication within servers, and (ii) how incoming service requests are matched to servers holding the requested content. To inform the corresponding design choices, we make the following contributions. We first show that, for underloaded systems, so-called proportional content placement with a simple greedy strategy for matching requests to servers ensures full system efficiency provided storage size grows logarithmically with the system size. However, for constant storage size, this strategy undergoes a phase transition with severe loss of efficiency as the system load approaches criticality. To better understand the role of the matching strategy in this performance degradation, we characterize the asymptotic system efficiency under an optimal matching policy. Our analysis shows that, in contrast to greedy matching, optimal matching incurs an inefficiency that is exponentially small in the server storage size, even at critical system loads. It further allows a characterization of content replication policies that minimize the inefficiency. These optimal policies, which differ markedly from proportional placement, have a simple structure which makes them implementable in practice. On the methodological side, our analysis of matching performance uses the theory of local weak limits of random graphs, and highlights a novel characterization of matching numbers in bipartite graphs, which may both be of independent interest.
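Proportional content placement, the baseline policy discussed above, is straightforward to state in code: replica slots are allocated to each content in proportion to its request popularity. The largest-remainder rounding used here to get integer replica counts is our own implementation choice, not something specified by the paper.

```python
def proportional_placement(popularity, total_slots):
    """Allocate integer replica slots proportionally to popularity,
    using largest-remainder rounding so that exactly `total_slots`
    slots are assigned."""
    total_pop = sum(popularity.values())
    raw = {c: total_slots * p / total_pop for c, p in popularity.items()}
    alloc = {c: int(r) for c, r in raw.items()}  # floor of each share
    leftover = total_slots - sum(alloc.values())
    # Hand remaining slots to the contents with the largest fractional parts.
    for c in sorted(raw, key=lambda c: raw[c] - alloc[c], reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc
```

The paper's point is that policies optimized for critical load differ markedly from this proportional baseline, so a sketch like the above is a reference point rather than a recommendation.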