43,619 research outputs found
Risk-Averse Matchings over Uncertain Graph Databases
A large number of applications, such as querying sensor networks and
analyzing protein-protein interaction (PPI) networks, rely on mining uncertain
graph and hypergraph databases. In this work we study the following problem:
given an uncertain, weighted (hyper)graph, how can we efficiently find a
(hyper)matching with high expected reward, and low risk?
This problem naturally arises in the context of several important
applications, such as online dating, kidney exchanges, and team formation. We
introduce a novel formulation for finding matchings with maximum expected
reward and bounded risk under a general model of uncertain weighted
(hyper)graphs that we introduce in this work. Our model generalizes
probabilistic models used in prior work, and captures both continuous and
discrete probability distributions, thus allowing us to handle privacy-related
applications that inject appropriately distributed noise into (hyper)edge
weights. Given that our optimization problem is NP-hard, we turn our attention
to designing efficient approximation algorithms. For the case of uncertain
weighted graphs, we provide a -approximation algorithm, and a
-approximation algorithm with near-optimal running time. For the case
of uncertain weighted hypergraphs, we provide a
-approximation algorithm, where k is the rank of the
hypergraph (i.e., any hyperedge includes at most k nodes), that runs in
almost (modulo log factors) linear time.
We complement our theoretical results by testing our approximation algorithms
on a wide variety of synthetic experiments, where, in a controlled setting, we
observe interesting findings on the trade-off between reward and risk. We also
apply our formulation to recommending teams that are likely to collaborate and
have high impact.
Comment: 25 pages
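The reward-risk trade-off described above can be made concrete with a toy sketch: treat each edge of an uncertain weighted graph as having a mean reward and a variance (used here as the risk proxy), and greedily build a matching under a total-variance budget. This is only an illustrative baseline under assumed names and a variance-as-risk modeling choice, not the paper's approximation algorithm.

```python
def greedy_risk_bounded_matching(edges, risk_budget):
    """Greedily build a matching from (u, v, mean, var) edges,
    scanning edges in decreasing order of expected reward and
    skipping any edge that conflicts with the matching or would
    exceed the total-variance budget. Illustrative heuristic only,
    not the paper's approximation algorithm."""
    matched = set()
    chosen = []
    total_mean = total_var = 0.0
    for u, v, mean, var in sorted(edges, key=lambda e: -e[2]):
        if u in matched or v in matched:
            continue  # an endpoint is already matched
        if total_var + var > risk_budget:
            continue  # adding this edge would exceed the risk budget
        chosen.append((u, v))
        matched.update((u, v))
        total_mean += mean
        total_var += var
    return chosen, total_mean, total_var
```

Tightening the budget trades expected reward for lower variance, which is the qualitative trade-off the synthetic experiments explore.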
On-line Non-stationary Inventory Control using Champion Competition
The commonly adopted assumption of stationary demand fails to reflect
fluctuating demand and weakens solution effectiveness in real practice. We
consider an On-line Non-stationary Inventory Control Problem (ONICP), in which
no specific assumption is imposed on demands and their probability
distributions are allowed to vary over periods and correlate with each other.
The non-stationary nature of demand invalidates the optimality of static (s,S)
policies and the applicability of their corresponding algorithms. The ONICP
becomes computationally intractable under general Simulation-based
Optimization (SO) methods, especially in an on-line decision-making
environment that leaves little time or computing capacity to afford the heavy
computational burden. We develop a new SO method, termed "Champion Competition"
(CC), which provides a different framework and bypasses the time-consuming
sample average routine adopted in general SO methods. An alternative type of
optimal solution, termed "Champion Solution", is pursued in the CC framework;
it coincides with the traditional notion of optimality under certain conditions
and serves as a near-optimal solution in the general case. The CC can reduce the
complexity of general SO methods by orders of magnitude in solving a class of
SO problems, including the ONICP. A polynomial algorithm, termed "Renewal Cycle
Algorithm" (RCA), is further developed to fulfill an important procedure of the
CC framework in solving this ONICP. Numerical examples are included to
demonstrate the performance of the CC framework with the RCA embedded.
Comment: I just identified a flaw in the paper. It may take me some time to
fix it. I would like to withdraw the article and update it once I have finished.
Thank you for your kind support.
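For context, the static (s,S) policy whose optimality the abstract says breaks down can be sketched as a simple simulation: whenever on-hand inventory falls to the reorder point s or below, order up to S. The cost structure below (fixed order cost plus linear holding and shortage costs) and all parameter values are illustrative assumptions, not taken from the paper.

```python
def simulate_sS(demands, s, S, order_cost, holding_cost, shortage_cost):
    """Simulate a static (s, S) inventory policy over a demand sequence:
    at the start of each period, if inventory is at or below s, pay the
    fixed order cost and raise inventory to S; then subtract demand and
    charge linear holding or shortage cost. Backorders are allowed
    (negative inventory). A toy model for illustration only."""
    inv = S
    total = 0.0
    for d in demands:
        if inv <= s:
            total += order_cost
            inv = S  # order up to S (instantaneous delivery assumed)
        inv -= d
        total += holding_cost * max(inv, 0) + shortage_cost * max(-inv, 0)
    return total
```

Under non-stationary, correlated demand no single (s, S) pair performs well across all periods, which is the gap the Champion Competition framework is designed to address.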
Correlation Decay in Random Decision Networks
We consider a decision network on an undirected graph in which each node
corresponds to a decision variable, and each node and edge of the graph is
associated with a reward function whose value depends only on the variables of
the corresponding nodes. The goal is to construct a decision vector which
maximizes the total reward. This decision problem encompasses a variety of
models, including maximum-likelihood inference in graphical models (Markov
Random Fields), combinatorial optimization on graphs, economic team theory and
statistical physics. The network is endowed with a probabilistic structure in
which the rewards are sampled from a distribution. Our aim is to identify sufficient
conditions to guarantee average-case polynomiality of the underlying
optimization problem. We construct a new decentralized algorithm called Cavity
Expansion and establish its theoretical performance for a variety of models.
Specifically, for certain classes of models we prove that our algorithm is able
to find near optimal solutions with high probability in a decentralized way.
The success of the algorithm is based on the network exhibiting a correlation
decay (long-range independence) property. Our results have the following
surprising implications in the area of average-case complexity of algorithms.
Finding the largest independent (stable) set of a graph is a well-known NP-hard
optimization problem for which no polynomial-time approximation scheme is
possible even for graphs with maximum degree equal to three, unless P=NP.
We show that the closely related maximum weighted independent set problem for
the same class of graphs admits a PTAS when the weights are i.i.d. with the
exponential distribution. Namely, randomizing the reward function turns an
NP-hard problem into a tractable one.
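The maximum weighted independent set problem discussed above can be contrasted with a simple greedy baseline: repeatedly take the heaviest remaining node and discard its neighbors. This sketch is a baseline for intuition only, not the Cavity Expansion algorithm; i.i.d. exponential weights can be drawn with random.expovariate.

```python
import random

def greedy_mwis(adj, weights):
    """Greedy heuristic for maximum-weight independent set:
    repeatedly pick the heaviest remaining node, add it to the
    set, and delete it together with its neighbors. A baseline
    for intuition only, not the Cavity Expansion algorithm."""
    remaining = set(adj)
    chosen = []
    while remaining:
        v = max(remaining, key=lambda u: weights[u])
        chosen.append(v)
        remaining.discard(v)
        remaining -= set(adj[v])  # neighbors of v become ineligible
    return chosen

# Example: a path 0-1-2 with i.i.d. exponential edge-node weights.
adj = {0: [1], 1: [0, 2], 2: [1]}
weights = {v: random.expovariate(1.0) for v in adj}
independent = greedy_mwis(adj, weights)
```

The abstract's point is stronger than anything this greedy baseline shows: under exponential i.i.d. weights, the decentralized Cavity Expansion algorithm achieves near-optimality with high probability.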