Exploring heterogeneity of unreliable machines for p2p backup
P2P architecture is a viable option for enterprise backup. In contrast to
dedicated backup servers, nowadays the standard solution, making backups
directly on an organization's workstations should be cheaper (existing
hardware is used), more efficient (there is no single bottleneck server),
and more reliable (the machines are geographically dispersed).
We present the architecture of a p2p backup system that uses pairwise
replication contracts between a data owner and a replicator. In contrast to
standard p2p storage systems that use a DHT directly, the contracts allow our
system to optimize replica placement according to a chosen optimization
strategy, and thus to take advantage of the heterogeneity of the machines and
the network. Such optimization is particularly appealing in the context of
backup: replicas can be geographically dispersed, the load sent over the
network can be minimized, or the backup/restore time can be minimized.
However, managing the contracts, keeping them consistent, and adjusting them
in response to a dynamically changing environment is challenging.
We built a scientific prototype and ran experiments on 150 workstations
in the university's computer laboratories and, separately, on 50 PlanetLab
nodes. We found that the main factor affecting the quality of the system is
the availability of the machines. Yet our main conclusion is that it is
possible to build an efficient and reliable backup system on highly unreliable
machines (our computers had just 13% average availability).
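The abstract does not spell out a concrete placement strategy, so the
following is only a minimal sketch of what optimizing pairwise contracts over
heterogeneous machines could look like: a greedy rule that prefers highly
available replicators at other sites. All names (Machine, choose_replicators)
and the dispersion bonus are hypothetical, not taken from the paper.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Machine:
    node_id: str
    availability: float  # measured fraction of time online, in [0, 1]
    site: str            # coarse location, used for geographic dispersion

def choose_replicators(owner: Machine, candidates: list[Machine], k: int) -> list[Machine]:
    """Pick k replication partners for `owner`, preferring highly
    available machines located at sites different from the owner's."""
    def score(m: Machine) -> float:
        dispersion_bonus = 0.2 if m.site != owner.site else 0.0  # illustrative weight
        return m.availability + dispersion_bonus
    eligible = [m for m in candidates if m.node_id != owner.node_id]
    return heapq.nlargest(k, eligible, key=score)

# Usage: an owner at site "lab-A" contracts 2 replicators.
owner = Machine("w1", 0.13, "lab-A")
peers = [Machine("w2", 0.40, "lab-A"),
         Machine("w3", 0.25, "lab-B"),
         Machine("w4", 0.10, "lab-C")]
print([m.node_id for m in choose_replicators(owner, peers, k=2)])  # ['w3', 'w2']
```

A real system would re-run such a selection as measured availabilities change,
which is exactly the contract-maintenance burden the abstract calls challenging.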
A Lightweight Distributed Solution to Content Replication in Mobile Networks
The performance and reliability of content access in mobile networks are
conditioned by the number and location of the content replicas deployed at the
network nodes. Facility location theory has been the traditional, centralized
approach to studying content replication: computing the number and placement
of replicas in a network can be cast as an uncapacitated facility location
problem. The endeavour of this work is to design a distributed, lightweight
solution to this joint optimization problem that takes network dynamics into
account. In particular, we devise a mechanism that lets nodes share the burden
of storing and providing content, so as to achieve load balancing, and decide
whether to replicate or drop the information, so as to adapt to a dynamic
content demand and a time-varying topology. We evaluate our mechanism through
simulation, exploring a wide range of settings and studying realistic content
access mechanisms that go beyond the traditional assumption of matching demand
points to their closest content replica. Results show that our mechanism,
which uses local measurements only, is: (i) extremely precise in approximating
an optimal solution to content placement and replication; (ii) robust against
network mobility; and (iii) flexible in accommodating various content access
patterns, including variation in time and space of the content demand.
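The paper's actual replicate-or-drop rule is not given in the abstract; the
sketch below only illustrates the general shape of such a local decision,
using two hypothetical thresholds on measured serving load and request rate.

```python
def replica_decision(request_rate: float, serving_load: float,
                     split_threshold: float = 0.8,
                     drop_threshold: float = 0.1) -> str:
    """Decide, from local measurements only, whether a node holding a
    replica should create another copy, drop its own, or keep serving.
    Threshold values are illustrative placeholders."""
    if serving_load > split_threshold:
        # Overloaded: push a copy to a neighbour to share the burden.
        return "replicate"
    if serving_load < drop_threshold and request_rate < drop_threshold:
        # Demand has moved away in time or space: free local storage.
        return "drop"
    return "keep"

# Usage: a node serving near capacity replicates; a nearly idle one drops.
print(replica_decision(request_rate=0.50, serving_load=0.90))  # replicate
print(replica_decision(request_rate=0.05, serving_load=0.02))  # drop
```

Because each node reacts only to what it can observe locally, the rule needs
no central coordinator, which is what makes the mechanism lightweight.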
Asymptotic Laws for Joint Content Replication and Delivery in Wireless Networks
We investigate the scalability of multihop wireless communications, a
major concern in networking, for the case in which users access content
replicated across the nodes. In contrast to the standard paradigm of randomly
selected communicating pairs, content replication is efficient for certain
regimes of file popularity, cache size, and network size. Our study begins
with the detailed joint content replication and delivery problem on a 2D
square grid, a hard combinatorial optimization problem. This is reduced to a
simpler problem based on replication density, whose performance is of the same
order as that of the original. Assuming a Zipf popularity law, and letting the
size of the content and of the network both go to infinity, we identify the
scaling laws and regimes of the required link capacity, ranging from
O(\sqrt{N}) down to O(1).
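To make the popularity assumption concrete, here is a small sketch that
generates a Zipf law over m files and spreads a cache budget using a
square-root allocation, a classical replication heuristic used here purely
for illustration; the paper derives the order-optimal density, which this
sketch does not claim to reproduce.

```python
import numpy as np

def zipf_popularity(m: int, s: float) -> np.ndarray:
    """Zipf law over m files with exponent s: p_i proportional to i^(-s)."""
    ranks = np.arange(1, m + 1, dtype=float)
    weights = ranks ** (-s)
    return weights / weights.sum()

def replication_density(p: np.ndarray, cache_budget: int) -> np.ndarray:
    """Illustrative square-root allocation: replicas per file proportional
    to sqrt(p_i), scaled to the total cache budget across the network."""
    raw = np.sqrt(p)
    return cache_budget * raw / raw.sum()

# Usage: 1000 files, Zipf exponent 0.8, 10,000 replica slots network-wide.
p = zipf_popularity(m=1000, s=0.8)
d = replication_density(p, cache_budget=10_000)
print(d[:5])  # the most popular files receive the most replicas
```

Sweeping the Zipf exponent and the cache budget in such a model is one way to
see how the regimes of required link capacity, from O(\sqrt{N}) down to O(1),
depend on popularity skew and cache size.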