Connectivity measures for internet topologies.
The topology of the Internet was initially modelled as an undirected graph, where vertices correspond to so-called Autonomous Systems (ASs), and edges correspond to physical links between pairs of ASs. However, in order to capture the impact of routing policies, it has recently become apparent that one needs to classify the edges according to the existing economic relationships (customer-provider, peer-to-peer or siblings) between the ASs. This leads to a directed graph model in which traffic can be sent only along so-called valley-free paths. Four different algorithms have been proposed in the literature for inferring AS relationships using publicly available data from routing tables. We investigate the differences in the graph models produced by these algorithms, focussing on connectivity measures. To this end, we compute the maximum number of vertex-disjoint valley-free paths between ASs as well as the size of a minimum cut separating a pair of ASs. Although these problems are solvable in polynomial time for ordinary graphs, they are NP-hard in our setting. We formulate the two problems as integer programs, and we propose a number of exact algorithms for solving them. For the problem of finding the maximum number of vertex-disjoint paths, we discuss two algorithms: the first is a branch-and-price algorithm based on the IP formulation, and the second is a non-LP-based branch-and-bound algorithm. For the problem of finding minimum cuts, we use a branch-and-cut algorithm based on the IP formulation of this problem. Using these algorithms, we obtain exact solutions for both problems in reasonable time. It turns out that there is a large gap in terms of the connectivity measures between the undirected and directed models. This finding supports our conclusion that economic relationships need to be taken into account when building a topology of the Internet.
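To make the valley-free constraint concrete, here is a minimal sketch (in Python) that tests whether an AS path is valley-free, assuming each edge along the path has been labelled as customer-to-provider ("up"), provider-to-customer ("down"), peer-to-peer ("peer"), or sibling; the labels and the function name are illustrative assumptions, not notation taken from the paper.

    def is_valley_free(edge_labels):
        """edge_labels: relationship labels ('up', 'down', 'peer', 'sibling')
        along a candidate AS path, listed in travel order."""
        # Allowed pattern: zero or more "up" edges, at most one "peer" edge,
        # then zero or more "down" edges; sibling edges may appear anywhere.
        phase = "up"
        for label in edge_labels:
            if label == "sibling":          # siblings do not change the phase
                continue
            if label == "up":
                if phase != "up":           # climbing again after peering or descending
                    return False
            elif label == "peer":
                if phase != "up":           # at most one peer edge, before any descent
                    return False
                phase = "down"
            elif label == "down":
                phase = "down"
            else:
                raise ValueError("unknown relationship label: " + label)
        return True

    # Example: up, up, peer, down, down is valley-free; down, up is not.
    assert is_valley_free(["up", "up", "peer", "down", "down"])
    assert not is_valley_free(["down", "up"])

The disjoint-path and cut problems studied in the abstract count only paths that pass this test, which, as the abstract notes, is what moves them from polynomial-time solvable to NP-hard.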
Coverage, Matching, and Beyond: New Results on Budgeted Mechanism Design
We study a class of reverse (procurement) auction problems in the presence of
budget constraints. The general algorithmic problem is to purchase a set of
resources, which come at a cost, so as not to exceed a given budget and at the
same time maximize a given valuation function. This framework captures the
budgeted version of several well known optimization problems, and when the
resources are owned by strategic agents the goal is to design truthful and
budget feasible mechanisms, i.e. elicit the true cost of the resources and
ensure the payments of the mechanism do not exceed the budget. Budget
feasibility introduces more challenges in mechanism design, and we study
instantiations of this problem for certain classes of submodular and XOS
valuation functions. We first obtain mechanisms with an improved approximation
ratio for weighted coverage valuations, a special class of submodular functions
that has already attracted attention in previous works. We then provide a
general scheme for designing randomized and deterministic polynomial time
mechanisms for a class of XOS problems. This class contains problems whose
feasible set forms an independence system (a more general structure than
matroids), and some representative problems include, among others, finding
maximum weighted matchings, maximum weighted matroid members, and maximum
weighted 3D-matchings. For most of these problems, only randomized mechanisms
with very high approximation ratios were known prior to our results.
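For reference, the two notions at the heart of this abstract admit standard formal statements (notation ours, not quoted from the paper). A valuation v is XOS (fractionally subadditive) if it is a pointwise maximum of nonnegative additive functions, and a truthful mechanism with budget B is budget feasible if its total payments never exceed B:

    v(S) \;=\; \max_{j \in \{1,\dots,m\}} \sum_{i \in S} a_{j,i}
    \quad \text{with } a_{j,i} \ge 0,
    \qquad\qquad
    \sum_{i \in W} p_i \;\le\; B,

where W is the set of selected agents and p_i is the payment made to agent i; truthfulness additionally requires that reporting the true cost c_i is a dominant strategy for every agent i. Weighted coverage valuations are submodular, and monotone submodular valuations are in turn XOS, which places both results within the same hierarchy of valuation classes.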
General Bounds for Incremental Maximization
We propose a theoretical framework to capture incremental solutions to
cardinality constrained maximization problems. The defining characteristic of
our framework is that the cardinality/support of the solution is bounded by a
value k that grows over time, and we allow the solution to be extended one
element at a time. We investigate the best-possible competitive ratio of such
an incremental solution, i.e., the worst ratio over all k between the
incremental solution after k steps and an optimum solution of cardinality k.
We define a large class of problems that contains many
important cardinality constrained maximization problems like maximum matching,
knapsack, and packing/covering problems. We provide a general
2.618-competitive incremental algorithm for this class of problems, and show
that no algorithm can have competitive ratio below 2.18 in general.
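In symbols (notation ours, not the paper's): writing A_k for the incremental solution after k steps, OPT_k for an optimum solution of cardinality k, and f for the objective function, an incremental algorithm is c-competitive if

    f(\mathrm{OPT}_k) \;\le\; c \cdot f(A_k) \qquad \text{for all } k \ge 1,

i.e., every prefix of the incremental solution must simultaneously be within a factor c of the best solution of the same cardinality.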
In the second part of the paper, we focus on the inherently incremental
greedy algorithm that increases the objective value as much as possible in each
step. This algorithm is known to be 1.58-competitive for submodular objective
functions, but it has unbounded competitive ratio for the class of incremental
problems mentioned above. We define a relaxed submodularity condition for the
objective function, capturing problems like maximum (weighted) (b-)matching
and a variant of the maximum flow problem. We show that the greedy algorithm
has competitive ratio (exactly) 2.313 for the class of problems that satisfy
this relaxed submodularity condition.
Note that our upper bounds on the competitive ratios translate to
approximation ratios for the underlying cardinality constrained problems.
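The "inherently incremental" greedy rule discussed above is easy to state in code. The sketch below is a generic illustration with an assumed toy coverage objective (the names and the example are ours, not from the paper), and it ignores any feasibility constraints beyond cardinality:

    def incremental_greedy(elements, objective, steps):
        """At each step, add the element whose inclusion increases the
        objective the most; return the prefix solutions after 1..steps steps."""
        solution, history = set(), []
        for _ in range(steps):
            remaining = [e for e in elements if e not in solution]
            if not remaining:
                break
            best = max(remaining, key=lambda e: objective(solution | {e}))
            if objective(solution | {best}) <= objective(solution):
                break                      # no improving element is left
            solution.add(best)
            history.append(set(solution))  # incremental solution after k steps
        return history

    # Toy coverage objective: each element covers a set of items.
    cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
    value = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
    print(incremental_greedy(list(cover), value, 3))
    # prefixes: {'a'}, then {'a','b'}, then {'a','b','c'} (covering 2, 3, 4 items)

On a submodular objective like this one, each greedy prefix of size k is within the well-known 1 - 1/e factor of the optimum of cardinality k; the abstract's point is that this guarantee breaks down on the wider incremental class unless the relaxed submodularity condition holds.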
Algorithms for Constructing Overlay Networks For Live Streaming
We present a polynomial time approximation algorithm for constructing an
overlay multicast network for streaming live media events over the Internet.
The class of overlay networks constructed by our algorithm include networks
used by Akamai Technologies to deliver live media events to a global audience
with high fidelity. We construct networks consisting of three stages of nodes.
The nodes in the first stage are the entry points that act as sources for the
live streams. Each source forwards each of its streams to one or more nodes in
the second stage that are called reflectors. A reflector can split an incoming
stream into multiple identical outgoing streams, which are then sent on to
nodes in the third and final stage that act as sinks and are located in edge
networks near end-users. As the packets in a stream travel from one stage to
the next, some of them may be lost. A sink combines the packets from multiple
instances of the same stream (by reordering packets and discarding duplicates)
to form a single instance of the stream with minimal loss. Our primary
contribution is an algorithm that constructs an overlay network that provably
satisfies capacity and reliability constraints to within a constant factor of
optimal, and minimizes cost to within a logarithmic factor of optimal. Further,
in the common case where only the transmission costs are minimized, we show
that our algorithm produces a solution that has cost within a factor of 2 of
optimal. We also implement our algorithm and evaluate it on realistic traces
derived from Akamai's live streaming network. Our empirical results show that
our algorithm can be used to efficiently construct large-scale overlay networks
in practice with near-optimal cost.
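As a small illustration of the sink-side step mentioned above (combining several lossy copies of a stream by reordering packets and discarding duplicates), here is a toy sketch; it assumes packets carry sequence numbers, which is an illustrative assumption rather than a detail given in the abstract:

    def merge_stream_copies(copies):
        """copies: list of packet lists, each a possibly lossy, possibly
        out-of-order instance of the same stream; a packet is a
        (sequence_number, payload) pair."""
        seen = {}
        for copy in copies:
            for seq, payload in copy:
                seen.setdefault(seq, payload)   # keep the first copy of each packet
        # Reorder by sequence number to form a single merged instance.
        return [(seq, seen[seq]) for seq in sorted(seen)]

    # Two lossy copies of a four-packet stream together recover the full stream.
    a = [(1, "p1"), (3, "p3"), (4, "p4")]
    b = [(2, "p2"), (1, "p1"), (4, "p4")]
    print(merge_stream_copies([a, b]))   # [(1, 'p1'), (2, 'p2'), (3, 'p3'), (4, 'p4')]

Routing each stream through several reflectors and merging at the sink is what lets the construction trade cost against the reliability constraints that the approximation guarantees refer to.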