Optimal Data Placement on Networks With Constant Number of Clients
We introduce optimal algorithms for the problems of data placement (DP) and
page placement (PP) in networks with a constant number of clients each of which
has limited storage availability and issues requests for data objects. The
objective for both problems is to efficiently utilize each client's storage
(deciding where to place replicas of objects) so that the total incurred access
and installation cost over all clients is minimized. In the PP problem an extra
constraint on the maximum number of clients served by a single client must be
satisfied. Our algorithms solve both problems optimally when all objects have
uniform lengths. When object lengths are non-uniform we also find the optimal
solution, albeit with a small, asymptotically tight violation of each client's
storage size by ε·lmax, where lmax is the maximum length of the objects
and ε is an arbitrarily small positive constant. We make no assumptions
on the underlying topology of the network (metric, ultrametric, etc.), thus
obtaining the first non-trivial results for non-metric data placement problems.
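Because the number of clients is constant, the set of possible replica hosts per object can be enumerated outright. The toy sketch below (with entirely hypothetical clients, objects, and costs; it is not the paper's algorithm) illustrates the uniform-length case by brute force: every object gets a non-empty set of hosting clients, storage capacities are respected, and total access plus installation cost is minimized.

```python
from itertools import product

# Toy instance (all names and numbers are illustrative, not from the paper):
# each of a constant number of clients may hold replicas of unit-length
# objects, subject to its storage capacity; a client accesses each object
# from the cheapest replica, and each stored replica costs its host's
# installation price.
clients = [0, 1, 2]
objects = ["a", "b"]
capacity = {0: 1, 1: 1, 2: 2}   # storage slots per client
install = {0: 2, 1: 2, 2: 1}    # cost to install one replica at a client
access = {                       # access[i][j]: cost for client i to fetch from client j
    0: {0: 0, 1: 3, 2: 5},
    1: {0: 3, 1: 0, 2: 4},
    2: {0: 5, 1: 4, 2: 0},
}

def total_cost(placement):
    """placement[obj] = set of clients hosting a replica of obj."""
    cost = 0
    for hosts in placement.values():
        cost += sum(install[h] for h in hosts)                    # installation
        cost += sum(min(access[i][h] for h in hosts) for i in clients)  # access
    return cost

def brute_force():
    # With a constant number n of clients there are only 2^n - 1 non-empty
    # host sets per object, so exhaustive search stays polynomial in the
    # number of objects.
    host_sets = [frozenset(c for c in clients if mask >> c & 1)
                 for mask in range(1, 2 ** len(clients))]
    best, best_cost = None, float("inf")
    for combo in product(host_sets, repeat=len(objects)):
        load = {c: sum(c in hs for hs in combo) for c in clients}
        if any(load[c] > capacity[c] for c in clients):
            continue  # violates some client's storage
        placement = dict(zip(objects, combo))
        cost = total_cost(placement)
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost

placement, cost = brute_force()
print(placement, cost)
```

On this instance the optimum splits the load: one object is hosted at clients {0, 2} and the other at {1, 2}, using client 2's larger storage twice.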
Unconstrained and Constrained Fault-Tolerant Resource Allocation
First, we study the Unconstrained Fault-Tolerant Resource Allocation (UFTRA)
problem (a.k.a. the FTFA problem in \cite{shihongftfa}). In this problem, we are
given a set of sites, each equipped with an unconstrained number of facilities
as resources, and a set of clients, each with a corresponding connection
requirement, where every facility belonging to the same site has an
identical opening (operating) cost and every client-facility pair has a
connection cost. The objective is to allocate facilities from sites so that all
clients' connection requirements are satisfied at a minimum total cost. Next,
we introduce the Constrained Fault-Tolerant Resource Allocation (CFTRA)
problem. It differs from UFTRA in that the number of resources available at
each site is limited.
Both problems are practical extensions of the classical Fault-Tolerant Facility
Location (FTFL) problem \cite{Jain00FTFL}. For instance, their solutions
provide optimal resource allocation (w.r.t. enterprises) and leasing (w.r.t.
clients) strategies for contemporary cloud platforms.
In this paper, we consider the metric version of the problems. For UFTRA with
uniform connection requirements, we present a star-greedy algorithm. The
algorithm achieves an approximation ratio of 1.5186 after combining with the
cost scaling and greedy augmentation techniques similar to
\cite{Charikar051.7281.853,Mahdian021.52}, which significantly improves on the
result of \cite{shihongftfa} obtained with a phase-greedy algorithm. We also
study the capacitated extension of UFTRA and give a 2.89-approximation for it.
For CFTRA with uniform connection requirements, we slightly modify the
algorithm to achieve a 1.5186-approximation. For a more general version of
CFTRA, we show that it is reducible to FTFL using linear programming.
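As one concrete point of reference, the sketch below implements the textbook "star" greedy for uncapacitated facility location, the pattern that star-greedy algorithms of this kind build on. It is not the paper's UFTRA algorithm, and the instance data at the end are made up: each iteration opens the star (facility plus a subset of unconnected clients) with the best cost-effectiveness, i.e. smallest (opening + connection cost) per client.

```python
def star_greedy(open_cost, conn):
    """Textbook star greedy for uncapacitated facility location.

    open_cost[f]: opening cost of facility f.
    conn[f][c]:   connection cost between facility f and client c.
    Clients are the integers 0 .. len(conn[f]) - 1.
    """
    facilities = list(open_cost)
    clients = set(range(len(next(iter(conn.values())))))
    unconnected, opened = set(clients), set()
    assignment, total = {}, 0.0
    while unconnected:
        best = None  # (cost per client, facility, chosen clients, star cost)
        for f in facilities:
            # For a fixed facility the best k-client star uses the k cheapest
            # unconnected clients, so one sort per facility suffices.
            cands = sorted(unconnected, key=lambda c: conn[f][c])
            fee = 0.0 if f in opened else open_cost[f]  # opening is paid once
            run = fee
            for k, c in enumerate(cands, start=1):
                run += conn[f][c]
                ratio = run / k
                if best is None or ratio < best[0]:
                    best = (ratio, f, cands[:k], run)
        _, f, chosen, cost = best
        opened.add(f)
        total += cost
        for c in chosen:
            assignment[c] = f
            unconnected.discard(c)
    return opened, assignment, total

# Hypothetical instance: two facilities, three clients.
opened, assign, total = star_greedy(
    {"f1": 3, "f2": 10},
    {"f1": [1, 2, 8], "f2": [5, 1, 1]},
)
print(opened, assign, total)
```

On this instance the greedy first opens f1 with clients {0, 1} (cost 6 over 2 clients), then connects client 2 to the already-open f1, for a total of 14; opening f2 as well would cost 16.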
A Unified Framework of FPT Approximation Algorithms for Clustering Problems
In this paper, we present a framework for designing FPT approximation algorithms for many k-clustering problems. Our results are based on a new technique for reducing search spaces. A reduced search space is a small subset of the input data that is guaranteed to contain k clients close to the facilities opened in an optimal solution, for any clustering problem we consider. We show, somewhat surprisingly, that greedily sampling O(k) clients yields the desired reduced search space, based on which we obtain FPT(k)-time algorithms with improved approximation guarantees for problems such as capacitated clustering, lower-bounded clustering, clustering with service installation costs, fault-tolerant clustering, and priority clustering.
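A classic example of greedy sampling over a metric is farthest-first traversal (Gonzalez's k-center heuristic), which builds a small candidate set such that every point, and in particular every optimal center, is close to some sampled point. The sketch below illustrates that style of sampling only; it is not claimed to be the paper's procedure.

```python
import math

def farthest_first_sample(points, m):
    """Greedy (farthest-first) sampling of m points: each step adds the point
    farthest from the current sample. Taking m = k gives a 2-approximation
    for k-center; taking m = O(k) gives a small set with every point of the
    input close to some sampled point."""
    sample = [points[0]]
    # dist_to_sample[i] = distance from points[i] to its nearest sampled point
    dist_to_sample = [math.dist(p, sample[0]) for p in points]
    while len(sample) < m:
        i = max(range(len(points)), key=dist_to_sample.__getitem__)
        sample.append(points[i])
        for j, p in enumerate(points):
            dist_to_sample[j] = min(dist_to_sample[j], math.dist(p, points[i]))
    return sample

# Hypothetical 2-D points: two tight pairs and one outlier.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (10.0, 0.0)]
print(farthest_first_sample(pts, 3))
```

Starting from the first point, the sample spreads out to one representative per cluster rather than two points from the same tight pair.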
Algorithms for Constructing Overlay Networks For Live Streaming
We present a polynomial time approximation algorithm for constructing an
overlay multicast network for streaming live media events over the Internet.
The class of overlay networks constructed by our algorithm includes networks
used by Akamai Technologies to deliver live media events to a global audience
with high fidelity. We construct networks consisting of three stages of nodes.
The nodes in the first stage are the entry points that act as sources for the
live streams. Each source forwards each of its streams to one or more nodes in
the second stage that are called reflectors. A reflector can split an incoming
stream into multiple identical outgoing streams, which are then sent on to
nodes in the third and final stage that act as sinks and are located in edge
networks near end-users. As the packets in a stream travel from one stage to
the next, some of them may be lost. A sink combines the packets from multiple
instances of the same stream (by reordering packets and discarding duplicates)
to form a single instance of the stream with minimal loss. Our primary
contribution is an algorithm that constructs an overlay network that provably
satisfies capacity and reliability constraints to within a constant factor of
optimal, and minimizes cost to within a logarithmic factor of optimal. Further,
in the common case where only the transmission costs are minimized, we show
that our algorithm produces a solution that has cost within a factor of 2 of
optimal. We also implement our algorithm and evaluate it on realistic traces
derived from Akamai's live streaming network. Our empirical results show that
our algorithm can be used to efficiently construct large-scale overlay networks
in practice with near-optimal cost.
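The sink-side recovery described above has a simple probabilistic reading: if the instances of a stream are lost independently across paths, a packet is missing at the sink only when every path drops it. A minimal sketch under that independence assumption (the function name and rates are illustrative, not from the paper):

```python
from math import prod

def delivered_fraction(path_loss_rates):
    """Expected fraction of packets a sink recovers when it merges
    independent instances of the same stream arriving over several
    source -> reflector -> sink paths: a packet is lost only if every
    instance drops it, so the combined loss rate is the product of the
    per-path loss rates."""
    return 1.0 - prod(path_loss_rates)

# Two paths, each independently losing 5% of packets: combined loss is
# 0.05 * 0.05 = 0.0025, so about 99.75% of packets reach the sink.
print(delivered_fraction([0.05, 0.05]))
```

This is why even a second moderately lossy path sharply improves fidelity, and why the reliability constraint in the construction is stated per sink rather than per path.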