Sticky Seeding in Discrete-Time Reversible-Threshold Networks
When nodes can repeatedly update their behavior (as in agent-based models
from computational social science or repeated-game play settings), the problem
of optimal network seeding becomes very complex. For a popular
spreading-phenomena model of binary-behavior updating based on thresholds of
adoption among neighbors, we consider several planning problems in the design
of \textit{Sticky Interventions}: when adoption decisions are reversible, the
planner aims to find a Seed Set where temporary intervention leads to long-term
behavior change. We prove that completely converting a network at minimum cost
is $\Omega(\ln(n))$-hard to approximate and that maximizing conversion
subject to a budget is $(1-\frac{1}{e})$-hard to approximate. Optimization
heuristics which rely on many objective-function evaluations may still be
practical, particularly in relatively sparse networks: we prove that the
long-term impact of a Seed Set can be evaluated in $O(|E|)$ operations. For a
more descriptive model variant in which some neighbors may be more influential
than others, we show that under integer edge weights from $\{1,2,\ldots,k\}$,
objective function evaluation requires only $O(k|E|)$ operations. These
operation bounds are based on improvements we give for bounds on
time-steps-to-convergence under discrete-time reversible-threshold updates in
networks.
Comment: 19 pages, 2 figures
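
To make the update rule concrete, here is a minimal Python sketch (our illustration, not the paper's algorithm; all names and the data layout are assumptions) of the reversible-threshold dynamics: starting from a temporarily seeded state, every node, seeds included, re-adopts at each step iff enough of its neighbors currently adopt. A plain simulation like this costs on the order of |E| per round rather than |E| in total, and the step cap guards against the 2-cycles that synchronous threshold updates can enter.

def evaluate_seed_set(adj, thresholds, seed_set, max_steps=100):
    """adj        : dict node -> list of neighbors
       thresholds : dict node -> int; a node adopts iff at least this many
                    of its neighbors currently adopt
       seed_set   : nodes forced to adopt at time 0 (temporary intervention)
       Returns the adopter set at the first fixed point reached, or after
       max_steps synchronous rounds."""
    seeds = set(seed_set)
    state = {v: v in seeds for v in adj}
    for _ in range(max_steps):
        new_state = {v: sum(state[u] for u in adj[v]) >= thresholds[v]
                     for v in adj}
        if new_state == state:  # fixed point: the long-term behavior
            break
        state = new_state
    return {v for v, adopted in state.items() if adopted}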
Complexity of Discrete Energy Minimization Problems
Discrete energy minimization is widely-used in computer vision and machine
learning for problems such as MAP inference in graphical models. The problem,
in general, is notoriously intractable, and finding the global optimal solution
is known to be NP-hard. However, is it possible to approximate this problem
with a reasonable ratio bound on the solution quality in polynomial time? We
show in this paper that the answer is no. Specifically, we show that general
energy minimization, even in the 2-label pairwise case, and planar energy
minimization with three or more labels are exp-APX-complete. This finding rules
out the existence of any approximation algorithm with a sub-exponential
approximation ratio in the input size for these two problems, including
constant factor approximations. Moreover, we collect and review the
computational complexity of several subclass problems and arrange them on a
complexity scale consisting of three major complexity classes -- PO, APX, and
exp-APX, corresponding to problems that are solvable, approximable, and
inapproximable in polynomial time. Problems in the first two complexity classes
can serve as alternative tractable formulations to the inapproximable ones.
This paper can help vision researchers select an appropriate model for an
application or guide them in designing new algorithms.
Comment: ECCV'16 accepted
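
As a concrete instance of the objective being classified, here is a minimal Python sketch (our illustration, not code from the paper) of a 2-label pairwise energy and its exact minimization by brute-force enumeration; the exponential enumeration is consistent with the NP-hardness noted above, and the exp-APX-completeness result says no polynomial-time algorithm can shave this to even a sub-exponential approximation ratio.

import itertools

def energy(labels, unary, pairwise):
    """labels   : dict node -> 0 or 1
       unary    : dict node -> (cost_if_0, cost_if_1)
       pairwise : dict (i, j) -> 2x2 table; table[x_i][x_j] is the cost"""
    e = sum(unary[i][labels[i]] for i in unary)
    e += sum(t[labels[i]][labels[j]] for (i, j), t in pairwise.items())
    return e

def brute_force_min(unary, pairwise):
    """Exact minimization by enumerating all 2^n labelings."""
    nodes = list(unary)
    best = None
    for bits in itertools.product((0, 1), repeat=len(nodes)):
        x = dict(zip(nodes, bits))
        e = energy(x, unary, pairwise)
        if best is None or e < best[0]:
            best = (e, x)
    return best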
On the Complexity of Nash Equilibria of Action-Graph Games
We consider the problem of computing Nash Equilibria of action-graph games
(AGGs). AGGs, introduced by Bhat and Leyton-Brown, are a succinct representation
of games that encapsulates both "local" dependencies as in graphical games, and
partial indifference to other agents' identities as in anonymous games, which
occur in many natural settings. This is achieved by specifying a graph on the
set of actions, so that the payoff of an agent for selecting a strategy depends
only on the number of agents playing each of the neighboring strategies in the
action graph. We present a Polynomial Time Approximation Scheme for computing
mixed Nash equilibria of AGGs with constant treewidth and a constant number of
agent types (and an arbitrary number of strategies), together with hardness
results for the cases when either the treewidth or the number of agent types is
unconstrained. In particular, we show that even if the action graph is a tree,
but the number of agent-types is unconstrained, it is NP-complete to decide the
existence of a pure-strategy Nash equilibrium and PPAD-complete to compute a
mixed Nash equilibrium (even an approximate one); similarly for symmetric AGGs
(all agents belong to a single type), if we allow arbitrary treewidth. These
hardness results suggest that, in some sense, our PTAS is as strong a
positive result as one can expect.
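
The payoff structure described above translates directly into code. Below is a minimal Python sketch under an assumed data layout (ours, not Bhat and Leyton-Brown's encoding): an agent's payoff for an action depends only on the counts of agents playing each action in that action's neighborhood.

from collections import Counter

def agg_payoff(action, profile, action_graph, utility):
    """action       : the action whose payoff is evaluated
       profile      : list of the actions chosen by all agents
       action_graph : dict action -> ordered list of neighboring actions
       utility      : dict action -> function mapping the tuple of
                      neighbor counts to a payoff"""
    counts = Counter(profile)                        # agents per action
    config = tuple(counts[b] for b in action_graph[action])
    return utility[action](config)

# Illustrative example: action "a" suffers congestion from other "a"-players.
p = agg_payoff("a", ["a", "a", "b"],
               {"a": ["a", "b"], "b": ["b"]},
               {"a": lambda c: 10 - 3 * c[0] + c[1],  # c = (#a, #b)
                "b": lambda c: 5 - c[0]})
# counts on "a"'s neighborhood are (2, 1), so p == 10 - 6 + 1 == 5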
Incremental Recompilation of Knowledge
Approximating a general formula from above and below by Horn formulas (its
Horn envelope and Horn core, respectively) was proposed by Selman and Kautz
(1991, 1996) as a form of ``knowledge compilation,'' supporting rapid
approximate reasoning; on the negative side, this scheme is static in that it
supports no updates, and has certain complexity drawbacks pointed out by
Kavvadias, Papadimitriou and Sideri (1993). On the other hand, the many
frameworks and schemes proposed in the literature for theory update and
revision are plagued by serious complexity-theoretic impediments, even in the
Horn case, as was pointed out by Eiter and Gottlob (1992), and is further
demonstrated in the present paper. More fundamentally, these schemes are not
inductive, in that they may lose in a single update any positive properties of
the represented sets of formulas (small size, Horn structure, etc.). In this
paper we propose a new scheme, incremental recompilation, which combines Horn
approximation and model-based updates; this scheme is inductive and very
efficient, free of the problems facing its constituents. A set of formulas is
represented by an upper and lower Horn approximation. To update, we replace the
upper Horn formula by the Horn envelope of its minimum-change update, and
similarly the lower one by the Horn core of its update; the key fact which
enables this scheme is that Horn envelopes and cores are easy to compute when
the underlying formula is the result of a minimum-change update of a Horn
formula by a clause. We conjecture that efficient algorithms are possible for
more complex updates.
Comment: See http://www.jair.org/ for any accompanying files
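
One standard fact underlying Horn envelopes is that a propositional theory is Horn-representable exactly when its model set is closed under componentwise AND, so the envelope's models are the closure of the original models under that operation. The following minimal Python sketch computes this closure (our illustration of the underlying fact, not the paper's update algorithm), with models represented as 0/1 tuples.

def horn_envelope_models(models):
    """models : set of equal-length tuples of 0/1 values.
    Returns the closure under componentwise AND, i.e. the model set of the
    Horn envelope (the tightest Horn upper approximation)."""
    closed = set(models)
    frontier = set(models)
    while frontier:
        m = frontier.pop()
        for n in list(closed):
            meet = tuple(a & b for a, b in zip(m, n))
            if meet not in closed:
                closed.add(meet)
                frontier.add(meet)
    return closed

# Example: {(1, 1, 0), (0, 1, 1)} gains their meet (0, 1, 0).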