
    Online Non-Monotone DR-submodular Maximization

    In this paper, we study fundamental problems of maximizing DR-submodular continuous functions, which have real-world applications in machine learning, economics, operations research, and communication systems. This class captures a subclass of non-convex optimization for which both theoretical and practical guarantees are available. Here, we focus on minimizing regret for online-arriving non-monotone DR-submodular functions over different types of convex sets: the hypercube, down-closed convex sets, and general convex sets. First, we present an online algorithm that achieves a $1/e$-approximation ratio with regret $O(T^{2/3})$ for maximizing DR-submodular functions over any down-closed convex set. Note that the approximation ratio of $1/e$ matches the best-known guarantee for the offline version of the problem. Moreover, when the convex set is the hypercube, we propose a tight $1/2$-approximation algorithm with a regret bound of $O(\sqrt{T})$. Next, we give an online algorithm that achieves an approximation guarantee (depending on the search space) for the problem of maximizing non-monotone continuous DR-submodular functions over a \emph{general} convex set (not necessarily down-closed). To the best of our knowledge, no prior algorithm with an approximation guarantee was known for non-monotone DR-submodular maximization in the online setting. Finally, we run experiments to verify the performance of our algorithms on problems arising in the machine learning domain with real-world datasets.
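    As a rough illustration of the online model described above (and not a reconstruction of the paper's algorithms), the sketch below runs projected online gradient ascent over the hypercube against a stream of non-monotone DR-submodular quadratic rewards; the horizon T, the step size eta, and the random quadratic objectives are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, assuming a quadratic reward stream: projected online
# gradient ascent over the hypercube [0,1]^n. T, eta, and the random
# objectives below are hypothetical choices for illustration only.
rng = np.random.default_rng(0)
n, T, eta = 5, 200, 0.1

x = np.full(n, 0.5)  # start at the centre of the hypercube
total_reward = 0.0

for t in range(T):
    # The adversary reveals f_t(x) = b^T x - 0.5 x^T A x with A entrywise
    # non-negative, so the Hessian -A is entrywise non-positive, which is
    # exactly continuous DR-submodularity; b may have negative entries,
    # making f_t non-monotone.
    A = rng.uniform(0.0, 1.0, (n, n))
    A = (A + A.T) / 2
    b = rng.uniform(-0.5, 1.0, n)

    total_reward += b @ x - 0.5 * x @ A @ x  # play x_t, collect f_t(x_t)
    grad = b - A @ x                         # exact gradient of f_t at x_t
    x = np.clip(x + eta * grad, 0.0, 1.0)    # ascent step + projection

print(f"average per-round reward: {total_reward / T:.4f}")
```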