
    Improved Projection-free Online Continuous Submodular Maximization

    We investigate the problem of online learning with monotone and continuous DR-submodular reward functions, which has received great attention recently. To efficiently handle this problem, especially in the case with complicated decision sets, previous studies have proposed an efficient projection-free algorithm called Mono-Frank-Wolfe (Mono-FW) using O(T) gradient evaluations and linear optimization steps in total. However, it only attains a (1-1/e)-regret bound of O(T^{4/5}). In this paper, we propose an improved projection-free algorithm, namely POBGA, which reduces the regret bound to O(T^{3/4}) while keeping the same computational complexity as Mono-FW. Instead of modifying Mono-FW, our key idea is to make a novel combination of a projection-based algorithm called online boosting gradient ascent, an infeasible projection technique, and a blocking technique. Furthermore, we consider the decentralized setting and develop a variant of POBGA, which not only reduces the current best regret bound of efficient projection-free algorithms for this setting from O(T^{4/5}) to O(T^{3/4}), but also reduces the total communication complexity from O(T) to O(\sqrt{T}).
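    The blocking idea described above can be illustrated with a minimal sketch, assuming a block-averaged online gradient ascent over a box constraint with hypothetical grad_oracles, block_size, and step_size inputs. This is not the authors' POBGA; in particular, their infeasible projection technique is replaced here by a plain Euclidean projection onto a box.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto a box; a simple stand-in for the infeasible
    # projection technique mentioned in the abstract.
    return np.clip(x, lo, hi)

def blocked_online_gradient_ascent(grad_oracles, dim, block_size, step_size):
    # Illustrative sketch only: play the same point throughout a block, spend
    # one stochastic gradient evaluation per round, and take a single averaged
    # ascent step at the end of each block (O(T) gradient evaluations total).
    x = np.zeros(dim)
    decisions = []
    block_grad = np.zeros(dim)
    for t, grad_t in enumerate(grad_oracles, start=1):
        decisions.append(x.copy())      # decision played in round t
        block_grad += grad_t(x)         # one gradient evaluation this round
        if t % block_size == 0:         # end of block: one projected step
            x = project_box(x + step_size * block_grad / block_size)
            block_grad = np.zeros(dim)
    return decisions
```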

    Online Non-Monotone DR-submodular Maximization

    In this paper, we study fundamental problems of maximizing DR-submodular continuous functions that have real-world applications in the domains of machine learning, economics, operations research, and communication systems. This class captures a subclass of non-convex optimization that provides both theoretical and practical guarantees. Here, we focus on minimizing regret for online arriving non-monotone DR-submodular functions over different types of convex sets: hypercube, down-closed, and general convex sets. First, we present an online algorithm that achieves a 1/e-approximation ratio with a regret of O(T^{2/3}) for maximizing DR-submodular functions over any down-closed convex set. Note that the approximation ratio of 1/e matches the best-known guarantee for the offline version of the problem. Moreover, when the convex set is the hypercube, we propose a tight 1/2-approximation algorithm with a regret bound of O(\sqrt{T}). Next, we give an online algorithm that achieves an approximation guarantee (depending on the search space) for the problem of maximizing non-monotone continuous DR-submodular functions over a \emph{general} convex set (not necessarily down-closed). To the best of our knowledge, no prior algorithm with an approximation guarantee was known for non-monotone DR-submodular maximization in the online setting. Finally, we run experiments to verify the performance of our algorithms on problems arising in the machine learning domain with real-world datasets.
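    For context, the offline 1/e guarantee referenced above is typically obtained by measured continuous greedy-style updates; the following is a minimal, simplified offline sketch over the hypercube with a hypothetical grad oracle, not the paper's online algorithms.

```python
import numpy as np

def measured_greedy_hypercube(grad, dim, num_steps):
    # Illustrative sketch of a measured continuous greedy update for
    # non-monotone DR-submodular maximization over the (down-closed)
    # hypercube [0, 1]^d. Offline and simplified; `grad` is assumed to be
    # a gradient oracle for the objective F.
    x = np.zeros(dim)
    for _ in range(num_steps):
        g = grad(x)
        # Linear maximization restricted to directions v with v <= 1 - x:
        # this damping keeps coordinates away from 1, which is what makes
        # the update safe for non-monotone objectives.
        v = np.where(g > 0, 1.0 - x, 0.0)
        x = x + v / num_steps
    return x
```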