Improved Projection-free Online Continuous Submodular Maximization
We investigate the problem of online learning with monotone and continuous
DR-submodular reward functions, which has received great attention recently. To
efficiently handle this problem, especially in the case of complicated decision
sets, previous studies have proposed an efficient projection-free algorithm
called Mono-Frank-Wolfe (Mono-FW), which uses $O(T)$ gradient evaluations and
linear optimization steps in total. However, it only attains a
$(1-1/e)$-regret bound of $O(T^{4/5})$. In this paper, we propose an improved
projection-free algorithm, namely POBGA, which reduces the regret bound to
$O(T^{3/4})$ while keeping the same computational complexity as Mono-FW.
Instead of modifying Mono-FW, our key idea is a novel combination of a
projection-based algorithm called online boosting gradient ascent, an
infeasible projection technique, and a blocking technique. Furthermore, we
consider the decentralized setting and develop a variant of POBGA, which not
only reduces the current best regret bound of efficient projection-free
algorithms for this setting from $O(T^{4/5})$ to $O(T^{3/4})$, but also reduces
the total communication complexity from $O(T)$ to $O(\sqrt{T})$.
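To make the recipe concrete, here is a minimal Python sketch (hypothetical names and a toy reward, not the authors' implementation) of blocked boosted gradient ascent on the unit box $[0,1]^d$: the boosted surrogate gradient $\nabla F(x) = \int_0^1 e^{z-1}\nabla f(zx)\,dz$ is estimated by sampling $z$, the decision is updated only once per block of $B$ rounds, and an exact box projection stands in for the paper's infeasible-projection step.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, B, eta = 5, 10_000, 100, 0.05   # dimension, rounds, block size, step size

def grad_f(a, x):
    # toy monotone DR-submodular reward f_t(x) = log(1 + <a_t, x>), a_t >= 0
    return a / (1.0 + a @ x)

def boosted_grad(a, x):
    # sample z on (0, 1] with density e^{z-1} / (1 - 1/e) via inverse CDF,
    # giving an unbiased estimate of grad F(x) = int_0^1 e^{z-1} grad f(zx) dz
    u = rng.random()
    z = 1.0 + np.log(u * (1.0 - np.exp(-1.0)) + np.exp(-1.0))
    return (1.0 - np.exp(-1.0)) * grad_f(a, z * x)

x = np.zeros(d)            # decision, kept frozen inside each block
g_sum = np.zeros(d)        # accumulated surrogate gradients for the block
for t in range(T):
    a_t = rng.random(d)                 # round t's reward f_t is revealed
    g_sum += boosted_grad(a_t, x)
    if (t + 1) % B == 0:                # one ascent step per block of B rounds
        x = np.clip(x + eta * g_sum / B, 0.0, 1.0)   # ascent + box projection
        g_sum[:] = 0.0
```

Freezing the decision within each block is what caps the number of projection and linear optimization steps at $O(T/B)$; in the decentralized variant, it likewise caps the communication.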
Online Non-Monotone DR-submodular Maximization
In this paper, we study fundamental problems of maximizing DR-submodular
continuous functions, which have real-world applications in the domains of
machine learning, economics, operations research, and communication systems.
This class captures a subclass of non-convex optimization that admits both
theoretical and practical guarantees. Here, we focus on minimizing regret for
online-arriving non-monotone DR-submodular functions over different types of
convex sets: the hypercube, down-closed convex sets, and general convex sets.
First, we present an online algorithm that achieves a $1/e$-approximation
ratio with sublinear regret for maximizing DR-submodular functions
over any down-closed convex set. Note that the approximation ratio of $1/e$
matches the best-known guarantee for the offline version of the problem.
Moreover, when the convex set is the hypercube, we propose a tight
$1/2$-approximation algorithm with a sublinear regret bound. Next, we give
an online algorithm that achieves an approximation guarantee (depending on the
search space) for the problem of maximizing non-monotone continuous
DR-submodular functions over a \emph{general} convex set (not necessarily
down-closed). To the best of our knowledge, no prior algorithm with an
approximation guarantee was known for non-monotone DR-submodular maximization
in the online setting. Finally, we run experiments to verify the performance of
our algorithms on problems arising in the machine learning domain with
real-world datasets.
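As a worked illustration of the benchmark used throughout, here is a minimal Python sketch of $\alpha$-regret (hypothetical helper names; the brute-force grid comparator and the toy concave, hence DR-submodular, rewards are only for illustration): an online algorithm with approximation ratio $\alpha$ is charged against $\alpha$ times the best fixed decision in hindsight.

```python
import numpy as np
from itertools import product

def alpha_regret(alpha, fs, xs, comparator_grid):
    """alpha-regret = alpha * max_{x in K} sum_t f_t(x) - sum_t f_t(x_t).

    fs: the revealed reward functions f_1, ..., f_T
    xs: the points the online algorithm actually played
    comparator_grid: a finite discretization of the convex set K
    """
    best_fixed = max(sum(f(x) for f in fs) for x in comparator_grid)
    earned = sum(f(x_t) for f, x_t in zip(fs, xs))
    return alpha * best_fixed - earned

# toy usage on the hypercube [0,1]^2 with alpha = 1/2;
# f_t(x) = <a_t, x> - 0.5 ||x||^2 is concave (off-diagonal Hessian <= 0),
# hence DR-submodular, and non-monotone
rng = np.random.default_rng(1)
a_s = [rng.random(2) for _ in range(50)]
fs = [lambda x, a=a: float(a @ x - 0.5 * (x @ x)) for a in a_s]
xs = [rng.random(2) for _ in fs]     # a naive random player, for comparison
grid = [np.array(p) for p in product(np.linspace(0.0, 1.0, 11), repeat=2)]
print(alpha_regret(0.5, fs, xs, grid))
```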