DOH: A Content Delivery Peer-to-Peer Network
Many SMEs and non-profit organizations suffer when their Web
servers become unavailable due to flash-crowd effects when their web site
becomes popular. One of the solutions to the flash-crowd problem is to place
the web site on a scalable CDN (Content Delivery Network) that replicates
the content and distributes the load in order to improve its response time.
In this paper, we present our approach to building a scalable Web Hosting
environment as a CDN on top of a structured peer-to-peer system of collaborative
web-servers integrated to share the load and to improve the overall
system performance, scalability, availability and robustness. Unlike cluster-based
solutions, it can run on heterogeneous hardware, over geographically
dispersed areas. To validate and evaluate our approach, we have developed a
system prototype called DOH (DKS Organized Hosting) that is a CDN implemented
on top of the DKS (Distributed K-nary Search) structured P2P
system with DHT (Distributed Hash Table) functionality [9]. The prototype
is implemented in Java, using the DKS middleware, the Jetty web-server, and
a modified JavaFTP server. The proposed CDN design has been evaluated both
by simulation and by experiments on the prototype.
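As a rough illustration of the DHT mechanism the prototype builds on, the Java sketch below hashes a requested URL into a circular identifier space, which is the step a DHT such as DKS performs before routing the lookup to the peer web-server responsible for that identifier. The class name, the 2^32 identifier space, and the digest folding are illustrative assumptions, not details of the DOH implementation.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal sketch: map a requested URL to a position in a circular
// identifier space; a DHT then routes the lookup to the peer that
// owns the interval containing this identifier.
public class DhtLookupSketch {
    static final long ID_SPACE = 1L << 32;  // hypothetical 2^32 identifier space

    // Hash a URL into the identifier space using SHA-1.
    static long idFor(String url) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-1")
                .digest(url.getBytes(StandardCharsets.UTF_8));
        long id = 0;
        for (int i = 0; i < 4; i++) {           // fold the first 4 digest bytes
            id = (id << 8) | (digest[i] & 0xFF);
        }
        return Long.remainderUnsigned(id, ID_SPACE);
    }

    public static void main(String[] args) throws Exception {
        // In DKS the lookup then reaches the responsible peer in
        // O(log_k N) hops; here we only compute the identifier.
        System.out.println(idFor("http://example.org/index.html"));
    }
}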
A Parallel Adaptive P3M code with Hierarchical Particle Reordering
We discuss the design and implementation of HYDRA_OMP, a parallel
implementation of the Smoothed Particle Hydrodynamics-Adaptive P3M (SPH-AP3M)
code HYDRA. The code is designed primarily for conducting cosmological
hydrodynamic simulations and is written in Fortran77+OpenMP. A number of
optimizations for RISC processors and SMP-NUMA architectures have been
implemented, the most important optimization being hierarchical reordering of
particles within chaining cells, which greatly improves data locality and thereby
removes the cache misses typically associated with linked lists. Parallel
scaling is good, with a minimum parallel scaling of 73% achieved on 32 nodes
for a variety of modern SMP architectures. We give performance data in terms of
the number of particle updates per second, which is a more useful performance
metric than raw MFlops. A basic version of the code will be made available to
the community in the near future. Comment: 34 pages, 12 figures, accepted for publication in Computer Physics
Communications
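The cache effect described above is easy to picture. The sketch below is a hedged illustration rather than HYDRA_OMP's Fortran implementation: it sorts particles by the index of their chaining cell so that particles visited together in a neighbour search sit contiguously in memory. The grid layout and class names are assumptions for the example.

import java.util.Arrays;
import java.util.Comparator;

// Minimal sketch of hierarchical particle reordering: sort particles
// by the chaining cell that contains them so that a linked list
// threaded through each cell walks sequential addresses.
public class ParticleReorderSketch {
    record Particle(double x, double y, double z) {}

    // Hypothetical uniform grid with `cells` cells per side over [0,1)^3.
    static int cellIndex(Particle p, int cells) {
        int ix = Math.min((int) (p.x() * cells), cells - 1);
        int iy = Math.min((int) (p.y() * cells), cells - 1);
        int iz = Math.min((int) (p.z() * cells), cells - 1);
        return (ix * cells + iy) * cells + iz;   // row-major cell id
    }

    static void reorder(Particle[] particles, int cells) {
        // After this sort, neighbour searches touch memory in order,
        // which is what removes the linked-list cache misses.
        Arrays.sort(particles, Comparator.comparingInt(p -> cellIndex(p, cells)));
    }

    public static void main(String[] args) {
        Particle[] ps = {
            new Particle(0.9, 0.1, 0.2),
            new Particle(0.1, 0.1, 0.2),
            new Particle(0.1, 0.1, 0.3),
        };
        reorder(ps, 4);
        System.out.println(Arrays.toString(ps));
    }
}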
BriskStream: Scaling Data Stream Processing on Shared-Memory Multicore Architectures
We introduce BriskStream, an in-memory data stream processing system (DSPS)
specifically designed for modern shared-memory multicore architectures.
BriskStream's key contribution is an execution plan optimization paradigm,
namely RLAS, which takes relative-location (i.e., NUMA distance) of each pair
of producer-consumer operators into consideration. We propose a branch and
bound based approach with three heuristics to resolve the resulting nontrivial
optimization problem. The experimental evaluations demonstrate that BriskStream
yields much higher throughput and better scalability than existing DSPSs on
multi-core architectures when processing different types of workloads. Comment: To appear in SIGMOD'19
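To make the optimization objective concrete, the sketch below scores an operator placement by weighting each producer-consumer edge's traffic with the NUMA distance between the sockets hosting the two operators; a branch-and-bound search would enumerate placements and prune on a lower bound of this cost. The distance matrix, edge rates, and class names are illustrative assumptions, not BriskStream's RLAS internals.

// Minimal sketch of a NUMA-distance-weighted placement cost.
public class NumaPlacementSketch {
    // Hypothetical 2-socket NUMA distance matrix (as reported by
    // tools such as `numactl --hardware`).
    static final int[][] DIST = {
            {10, 21},
            {21, 10},
    };

    record Edge(int producer, int consumer, double tuplesPerSec) {}

    // placement[i] = socket hosting operator i.
    static double planCost(Edge[] edges, int[] placement) {
        double cost = 0;
        for (Edge e : edges) {
            cost += e.tuplesPerSec()
                    * DIST[placement[e.producer()]][placement[e.consumer()]];
        }
        return cost;
    }

    public static void main(String[] args) {
        Edge[] edges = { new Edge(0, 1, 1e6), new Edge(1, 2, 5e5) };
        // Compare co-locating everything against splitting the pipeline.
        System.out.println(planCost(edges, new int[]{0, 0, 0}));
        System.out.println(planCost(edges, new int[]{0, 1, 1}));
    }
}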
A Lightweight Distributed Solution to Content Replication in Mobile Networks
Performance and reliability of content access in mobile networks are
conditioned by the number and location of content replicas deployed at the
network nodes. Facility location theory has been the traditional, centralized
approach to studying content replication: computing the number and placement of
replicas in a network can be cast as an uncapacitated facility location
problem. The endeavour of this work is to design a distributed, lightweight
solution to the above joint optimization problem, while taking into account the
network dynamics. In particular, we devise a mechanism that lets nodes share
the burden of storing and providing content, so as to achieve load balancing,
and decide whether to replicate or drop the information so as to adapt to a
dynamic content demand and time-varying topology. We evaluate our mechanism
through simulation, by exploring a wide range of settings and studying
realistic content access mechanisms that go beyond the traditional
assumption of matching demand points to their closest content replica. Results show
that our mechanism, which uses local measurements only, is: (i) extremely
precise in approximating an optimal solution to content placement and
replication; (ii) robust against network mobility; (iii) flexible in
accommodating various content access patterns, including variation in time and
space of the content demand. Comment: 12 pages
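A node-local rule of this kind can be stated in a few lines. The sketch below is a hedged illustration of a replicate-or-drop decision driven purely by locally measured demand, with a hysteresis band to avoid oscillation; the thresholds and names are assumptions for the example, not the paper's actual mechanism.

// Minimal sketch of a local replicate-or-drop decision.
public class ReplicaDecisionSketch {
    enum Action { REPLICATE, KEEP, DROP }

    static Action decide(double localRequestRate,
                         double replicateThreshold,
                         double dropThreshold) {
        if (localRequestRate > replicateThreshold) {
            return Action.REPLICATE;  // overloaded: push a copy to a neighbour
        }
        if (localRequestRate < dropThreshold) {
            return Action.DROP;       // demand no longer justifies this copy
        }
        return Action.KEEP;           // hysteresis band: leave placement alone
    }

    public static void main(String[] args) {
        System.out.println(decide(120.0, 100.0, 10.0));  // REPLICATE
    }
}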
Caching with Partial Adaptive Matching
We study the caching problem when we are allowed to match each user to one of
a subset of caches after its request is revealed. We focus on non-uniformly
popular content, specifically when the file popularities obey a Zipf
distribution. We study two extremal schemes, one focusing on coded server
transmissions while ignoring matching capabilities, and the other focusing on
adaptive matching while ignoring potential coding opportunities. We derive the
rates achieved by these schemes and characterize the regimes in which one
outperforms the other. We also compare them to information-theoretic outer
bounds, and finally propose a hybrid scheme that generalizes ideas from the two
schemes and performs at least as well as either of them in most memory regimes. Comment: 35 pages, 7 figures. Shorter versions have appeared in IEEE ISIT 2017
and IEEE ITW 201
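For readers unfamiliar with the popularity model, the sketch below computes the Zipf request distribution the analysis assumes: file i (1-indexed) is requested with probability proportional to i^(-alpha). The catalogue size and exponent are illustrative values.

// Minimal sketch of the Zipf file-popularity model.
public class ZipfPopularitySketch {
    static double[] zipf(int numFiles, double alpha) {
        double[] p = new double[numFiles];
        double norm = 0;
        for (int i = 1; i <= numFiles; i++) {
            p[i - 1] = Math.pow(i, -alpha);
            norm += p[i - 1];
        }
        for (int i = 0; i < numFiles; i++) {
            p[i] /= norm;                      // normalise to a distribution
        }
        return p;
    }

    public static void main(String[] args) {
        // A skewed head (large alpha) is the regime where adaptively
        // matching a user to a cache already holding the file pays off.
        double[] p = zipf(100, 1.2);
        System.out.printf("P(file 1) = %.3f, P(file 100) = %.4f%n", p[0], p[99]);
    }
}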