Optimal approximation for unconstrained non-submodular minimization
Submodular function minimization is a well-studied problem; existing
algorithms solve it exactly or up to arbitrary accuracy. However, in many
applications, the objective function is not exactly submodular. No theoretical
guarantees exist in this case. While submodular minimization algorithms rely on
intricate connections between submodularity and convexity, we show that these
relations can be extended sufficiently to obtain approximation guarantees for
non-submodular minimization. In particular, we prove that a projected
subgradient method performs well even for certain non-submodular functions.
This includes important examples, such as objectives for structured sparse
learning and variance reduction in Bayesian optimization. We also extend this
result to noisy function evaluations. Our algorithm works in the value oracle
model. We prove that in this model, the approximation result we obtain is the
best possible with a subexponential number of queries.
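The projected subgradient method mentioned above can be sketched via the standard Lovász-extension machinery: a subgradient from Edmonds' greedy rule, projection onto the unit cube, and rounding by superlevel sets. This is a minimal illustrative sketch, not the paper's algorithm; the toy objective `F_toy`, the step-size schedule, and the iteration count are all assumptions.

```python
import numpy as np

def lovasz_subgradient(F, x):
    # Subgradient of the Lovász extension at x (Edmonds' greedy rule):
    # sort coordinates in decreasing order and take marginal gains of F.
    order = np.argsort(-x)
    g = np.zeros(len(x))
    S, prev = [], F(frozenset())
    for i in order:
        S.append(i)
        cur = F(frozenset(S))
        g[i] = cur - prev
        prev = cur
    return g

def minimize(F, n, iters=200, step=0.5):
    # Projected subgradient descent on [0, 1]^n with a 1/sqrt(t) step size;
    # round the fractional point by scanning all superlevel sets.
    x = np.full(n, 0.5)
    best_set, best_val = frozenset(), F(frozenset())
    for t in range(1, iters + 1):
        g = lovasz_subgradient(F, x)
        x = np.clip(x - (step / np.sqrt(t)) * g, 0.0, 1.0)
        for theta in sorted(set(x)):
            S = frozenset(np.flatnonzero(x >= theta).tolist())
            if F(S) < best_val:
                best_set, best_val = S, F(S)
    return best_set, best_val

# Demo on a toy submodular objective over {0, 1, 2} (illustrative, not from
# the paper): F(S) = min(|S|, 2) - 1.5 * [1 in S], whose minimizer is {1}.
F_toy = lambda S: min(len(S), 2) - (1.5 if 1 in S else 0.0)
S_min, val = minimize(F_toy, 3)
```

For exactly submodular objectives this scheme converges to the minimum; the paper's point is that the same machinery still yields approximation guarantees when submodularity holds only approximately.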
On Maximization of Weakly Modular Functions: Guarantees of Multi-stage Algorithms, Tractability, and Hardness
Maximization of {\it non-submodular} functions appears in various scenarios,
and many previous works studied it based on some measures that quantify the
closeness to being submodular. On the other hand, many practical non-submodular
functions are actually close to being {\it modular}, a property that has been
exploited in only a few studies. In this paper, we study
cardinality-constrained maximization of
{\it weakly modular} functions, whose closeness to being modular is measured by
{\it submodularity} and {\it supermodularity ratios}, and reveal what we can
and cannot do by using weak modularity. We first show that guarantees for
multi-stage algorithms can be proved via weak modularity; these guarantees
generalize and improve some existing results, and experiments confirm their
effectiveness.
We then show that weakly modular maximization is {\it fixed-parameter
tractable} under certain conditions; as a byproduct, we provide a new
time--accuracy trade-off for $\ell_0$-constrained minimization. We finally
prove that, even if objective functions are weakly modular, no polynomial-time
algorithms can improve the existing approximation guarantees achieved by the
greedy algorithm.
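The greedy algorithm whose guarantee the hardness result refers to can be sketched as follows: under a cardinality constraint, repeatedly add the element with the largest marginal gain. This is a generic sketch, not the paper's code; the coverage objective `f_cov` and the element-to-item mapping `cover` are illustrative assumptions.

```python
def greedy_max(f, ground, k):
    """Greedy maximization of a set function f under |S| <= k."""
    S = frozenset()
    for _ in range(k):
        # Pick the element with the largest marginal gain f(S + e) - f(S);
        # iterate in sorted order so ties break deterministically.
        best_e, best_gain = None, float("-inf")
        for e in sorted(ground - S):
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best_e, best_gain = e, gain
        S = S | {best_e}
    return S

# Demo with an illustrative coverage objective (not from the paper):
# each element covers a set of items, and f(S) counts items covered.
cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}, 3: {"d"}}
f_cov = lambda S: len(set().union(*[cover[e] for e in S]))
S_greedy = greedy_max(f_cov, frozenset(cover), 2)
```

Submodularity and supermodularity ratios quantify how far the marginal gains of a non-submodular objective can deviate from this ideal setting, which is what drives the guarantees and the matching hardness result above.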