Network Utility Maximization under Maximum Delay Constraints and Throughput Requirements
We consider the problem of maximizing aggregate user utilities over a
multi-hop network, subject to link capacity constraints, maximum end-to-end
delay constraints, and user throughput requirements. A user's utility is a
concave function of the achieved throughput or the experienced maximum delay.
The problem is important for supporting real-time multimedia traffic, and is
uniquely challenging due to the need to consider maximum delay constraints and
throughput requirements simultaneously. We first show that it is
NP-complete either (i) to construct a feasible solution strictly meeting all
constraints, or (ii) to obtain an optimal solution after we relax maximum delay
constraints or throughput requirements up to constant ratios. We then develop a
polynomial-time approximation algorithm named PASS. The design of PASS
leverages a novel connection between non-convex maximum-delay-aware problems
and their convex average-delay-aware counterparts, which can be of independent
interest and suggests a new avenue for solving maximum-delay-aware network
optimization problems. Under realistic conditions, PASS achieves constant or
problem-dependent approximation ratios, at the cost of violating maximum delay
constraints or throughput requirements by up to constant or problem-dependent
ratios. PASS is practically useful since its required conditions are satisfied
in many popular application scenarios. We empirically evaluate PASS using
extensive simulations of supporting video-conferencing traffic across Amazon
EC2 datacenters. Compared to existing algorithms and a conceivable baseline,
PASS obtains substantial improvements in utility, meeting the throughput
requirements while relaxing the maximum delay constraints to an extent that is
acceptable for practical video conferencing applications.
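The convex average-delay-aware counterpart referenced above is the classical NUM setting. As a rough illustration only (this is not the paper's PASS algorithm, and all names and parameters here are hypothetical), a single-link NUM with logarithmic utilities can be solved by standard price-based dual decomposition:

```python
# Minimal NUM sketch (illustrative, not PASS): maximize sum_i log(x_i)
# subject to sum_i x_i <= C on one shared link, via dual decomposition.
# The paper's setting layers delay constraints on top of problems like this.

def num_dual(n=4, C=8.0, step=0.01, iters=20000):
    lam = 1.0  # link price (dual variable)
    for _ in range(iters):
        # each user independently maximizes log(x) - lam*x  =>  x = 1/lam
        x = [1.0 / lam] * n
        # price rises when the link is over-subscribed, falls otherwise
        lam = max(1e-9, lam + step * (sum(x) - C))
    return x

rates = num_dual()
# with log utilities the optimum splits capacity equally: x_i = C/n = 2
print([round(r, 2) for r in rates])  # → [2.0, 2.0, 2.0, 2.0]
```

The price update is a plain dual (sub)gradient step; for log utilities the fixed point is lam = n/C, which yields the equal split.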
Maximizing Welfare in Social Networks under a Utility Driven Influence Diffusion Model
Motivated by applications such as viral marketing, the problem of influence
maximization (IM) has been extensively studied in the literature. The goal is
to select a small number of users to adopt an item such that it results in a
large cascade of adoptions by others. Existing works have three key
limitations. (1) They do not account for economic considerations of a user in
buying/adopting items. (2) Most studies on multiple items focus on competition,
with complementary items receiving limited attention. (3) For the network
owner, maximizing social welfare is important to ensure customer loyalty, which
is not addressed in prior work in the IM literature. In this paper, we address
all three limitations and propose a novel model called UIC that combines
utility-driven item adoption with influence propagation over networks. Focusing
on the mutually complementary setting, we formulate the problem of social
welfare maximization in this novel setting. We show that while the objective
function is neither submodular nor supermodular, surprisingly, a simple greedy
allocation algorithm achieves a provable approximation factor of the optimum
expected social welfare. We develop bundleGRD, a scalable version of
this approximation algorithm, and demonstrate, with comprehensive experiments
on real and synthetic datasets, that it significantly outperforms all
baselines.
Comment: 33 pages
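The greedy allocation idea can be sketched generically. The toy welfare function below is a made-up stand-in (it is not the paper's UIC model); the sketch only shows the shape of a marginal-gain greedy loop over (user, item) allocations:

```python
# Generic greedy-allocation sketch with a black-box welfare oracle.
# The welfare function is a hypothetical toy, not the UIC model.
from collections import Counter

def greedy_allocate(users, items, welfare, k):
    chosen = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for u in users:
            for it in items:
                if (u, it) in chosen:
                    continue
                # marginal welfare gain of adding this single allocation
                gain = welfare(chosen | {(u, it)}) - welfare(chosen)
                if gain > best_gain:
                    best, best_gain = (u, it), gain
        if best is None:  # no allocation with positive marginal gain left
            break
        chosen.add(best)
    return chosen

def toy_welfare(alloc):
    # diminishing value per extra item, plus a complementarity bonus
    per_user = Counter(u for u, _ in alloc)
    base = sum(1.0 / 2 ** i for c in per_user.values() for i in range(c))
    bonus = sum(0.5 for c in per_user.values() if c >= 2)
    return base + bonus

picked = greedy_allocate(["a", "b"], ["x", "y"], toy_welfare, k=3)
print(len(picked))  # → 3
```

The interesting part of the paper's result is that such a loop retains a guarantee even though the true objective is neither submodular nor supermodular.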
The Price of Information in Combinatorial Optimization
Consider a network design application where we wish to lay down a
minimum-cost spanning tree in a given graph; however, we only have stochastic
information about the edge costs. To learn the precise cost of any edge, we
have to conduct a study that incurs a price. Our goal is to find a spanning
tree while minimizing the disutility, which is the sum of the tree cost and the
total price that we spend on the studies. In a different application, each edge
gives a stochastic reward value. Our goal is to find a spanning tree while
maximizing the utility, which is the tree reward minus the prices that we pay.
Situations such as the above two often arise in practice where we wish to
find a good solution to an optimization problem, but we start with only some
partial knowledge about the parameters of the problem. The missing information
can be found only after paying a probing price, which we call the price of
information. What strategy should we adopt to optimize our expected
utility/disutility?
A classical example of the above setting is Weitzman's "Pandora's box"
problem where we are given probability distributions on values of
independent random variables. The goal is to choose a single variable with a
large value, but we can find the actual outcomes only after paying a price. Our
work is a generalization of this model to other combinatorial optimization
problems such as matching, set cover, facility location, and prize-collecting
Steiner tree. We give a technique that reduces such problems to their non-price
counterparts, and use it to design exact/approximation algorithms to optimize
our utility/disutility. Our techniques extend to situations where there are
additional constraints on what parameters can be probed or when we can
simultaneously probe a subset of the parameters.
Comment: SODA 201
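Weitzman's rule for the basic Pandora's box instance admits a compact sketch. The following is illustrative only (the names and the discrete-distribution setup are assumptions, not the paper's generalized framework): each box gets a reservation value sigma solving E[(X - sigma)+] = cost, computed here by bisection, and boxes are opened in decreasing sigma until the best observed value beats every unopened box's sigma.

```python
# Sketch of Weitzman's index for Pandora's box with a discrete value
# distribution (hypothetical example, not the paper's general framework).

def reservation_value(values, probs, cost, lo=0.0, hi=1e6):
    # E[(X - sigma)^+] is continuous and decreasing in sigma,
    # so bisection finds the sigma where it equals the opening cost.
    def excess(sigma):
        return sum(p * max(v - sigma, 0.0) for v, p in zip(values, probs))
    for _ in range(100):
        mid = (lo + hi) / 2
        if excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# box: value 10 w.p. 0.5 else 0, cost 1  =>  0.5 * (10 - sigma) = 1, sigma = 8
sigma = reservation_value([10.0, 0.0], [0.5, 0.5], cost=1.0)
print(round(sigma, 3))  # → 8.0
```

The paper's contribution is reducing combinatorial versions of this problem (matching, set cover, etc.) to their price-free counterparts; the index above is only the single-box building block.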
On the Hardness of Signaling
There has been a recent surge of interest in the role of information in
strategic interactions. Much of this work seeks to understand how the realized
equilibrium of a game is influenced by uncertainty in the environment and the
information available to players in the game. Lurking beneath this literature
is a fundamental, yet largely unexplored, algorithmic question: how should a
"market maker" who is privy to additional information, and equipped with a
specified objective, inform the players in the game? This is an informational
analogue of the mechanism design question, and views the information structure
of a game as a mathematical object to be designed, rather than an exogenous
variable.
We initiate a complexity-theoretic examination of the design of optimal
information structures in general Bayesian games, a task often referred to as
signaling. We focus on one of the simplest instantiations of the signaling
question: Bayesian zero-sum games, and a principal who must choose an
information structure maximizing the equilibrium payoff of one of the players.
In this setting, we show that optimal signaling is computationally intractable,
and in some cases hard to approximate, assuming that it is hard to recover a
planted clique from an Erdos-Renyi random graph. This is despite the fact that
equilibria in these games are computable in polynomial time, and therefore
suggests that the hardness of optimal signaling is a distinct phenomenon from
the hardness of equilibrium computation. Necessitated by the non-local nature
of information structures, en route to our results we prove an "amplification
lemma" for the planted clique problem, which may be of independent interest.
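The contrast drawn above hinges on zero-sum equilibria being efficiently computable (e.g., by linear programming). As a minimal, purely illustrative sketch that does not model the signaling layer, fictitious play on a small matrix game converges to the game's value:

```python
# Fictitious play on matching pennies (illustrative only; the signaling
# problem itself is not modeled here). In zero-sum games the empirical
# mixtures converge, and their payoff approaches the game's value (0 here).

A = [[1.0, -1.0],
     [-1.0, 1.0]]  # matching pennies, value 0

def fictitious_play(A, rounds=20000):
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] += 1  # arbitrary initial plays
    col_counts[0] += 1
    for _ in range(rounds):
        # each player best-responds to the opponent's empirical mixture
        br_row = max(range(m),
                     key=lambda i: sum(A[i][j] * col_counts[j] for j in range(n)))
        br_col = min(range(n),
                     key=lambda j: sum(A[i][j] * row_counts[i] for i in range(m)))
        row_counts[br_row] += 1
        col_counts[br_col] += 1
    total = rounds + 1
    return [c / total for c in row_counts], [c / total for c in col_counts]

row_mix, col_mix = fictitious_play(A)
value = sum(A[i][j] * row_mix[i] * col_mix[j]
            for i in range(2) for j in range(2))
print(abs(value) < 0.05)  # payoff of the empirical mixtures is close to 0
```

The point of the hardness result is exactly that this tractability does not extend upward: choosing the information structure that induces the best such equilibrium is planted-clique hard.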