
    Nested convex bodies are chaseable

    In the Convex Body Chasing problem, we are given an initial point v_0 ∈ R^d and an online sequence of n convex bodies F_1, …, F_n. When we receive F_i, we are required to move inside F_i. Our goal is to minimize the total distance traveled. This fundamental online problem was first studied by Friedman and Linial (DCG 1993). They proved an Ω(√d) lower bound on the competitive ratio, and conjectured that a competitive ratio depending only on d is possible. However, despite much interest in the problem, the conjecture remains wide open. We consider the setting in which the convex bodies are nested: F_1 ⊇ ⋯ ⊇ F_n. The nested setting is closely related to extending the online LP framework of Buchbinder and Naor (ESA 2005) to arbitrary linear constraints. Moreover, this setting retains much of the difficulty of the general setting and captures an essential obstacle in resolving Friedman and Linial's conjecture. In this work, we give an f(d)-competitive algorithm for chasing nested convex bodies in R^d.
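
    To make the cost model concrete, here is a minimal sketch of the setup with each body simplified to a Euclidean ball, driven by the naive greedy strategy that moves to the nearest point of each new body. The ball representation and the greedy rule are illustrative assumptions; this is not the f(d)-competitive algorithm of the paper (greedy is known not to be competitive in general).

```python
import numpy as np

def project_onto_ball(x, center, radius):
    """Euclidean projection of x onto the ball B(center, radius)."""
    diff = x - center
    dist = np.linalg.norm(diff)
    if dist <= radius:
        return x  # already inside the body: no movement needed
    return center + diff * (radius / dist)

def greedy_chase(v0, balls):
    """Greedily chase a nested sequence of balls; return total distance moved."""
    x, total = np.asarray(v0, dtype=float), 0.0
    for center, radius in balls:
        y = project_onto_ball(x, np.asarray(center, dtype=float), radius)
        total += np.linalg.norm(y - x)
        x = y
    return total

# Nested bodies: concentric balls shrinking toward the origin in R^3.
balls = [(np.zeros(3), r) for r in (4.0, 2.0, 1.0, 0.5)]
print(greedy_chase([10.0, 0.0, 0.0], balls))  # 6 + 2 + 1 + 0.5 = 9.5
```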

    A PTAS for Euclidean TSP with Hyperplane Neighborhoods

    In the Traveling Salesperson Problem with Neighborhoods (TSPN), we are given a collection of geometric regions in some space. The goal is to output a tour of minimum length that visits at least one point in each region. Even in the Euclidean plane, TSPN is known to be APX-hard, which gives rise to studying more tractable special cases of the problem. In this paper, we focus on the fundamental special case of regions that are hyperplanes in the d-dimensional Euclidean space. This case contrasts the much better understood case of so-called fat regions. While for d = 2 an exact algorithm with running time O(n^5) is known, settling the exact approximability of the problem for d = 3 has been repeatedly posed as an open question. To date, only an approximation algorithm with guarantee exponential in d is known, and NP-hardness remains open. For arbitrary fixed d, we develop a Polynomial Time Approximation Scheme (PTAS) that works for both the tour and path version of the problem. Our algorithm is based on approximating the convex hull of the optimal tour by a convex polytope of bounded complexity. Such polytopes are represented as solutions of a sophisticated LP formulation, which we combine with the enumeration of crucial properties of the tour. As the approximation guarantee approaches 1, our scheme adjusts the complexity of the considered polytopes accordingly. In the analysis of our approximation scheme, we show that our search space includes a sufficiently good approximation of the optimum. To do so, we develop a novel and general sparsification technique to transform an arbitrary convex polytope into one with a constant number of vertices and, in turn, into one of bounded complexity in the above sense. In doing so, we maintain important properties of the polytope.
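
    For intuition about the line (d = 2) case, the sketch below brute-forces tiny instances: once a visiting order is fixed, the shortest closed tour touching each line in that order is a convex problem in the lines' parameters, which a generic solver handles. This exponential-time enumeration only illustrates the objective; it is unrelated to the LP-based PTAS described above.

```python
import numpy as np
from itertools import permutations
from scipy.optimize import minimize

def tour_length(ts, anchors, dirs, order):
    """Length of the closed tour through points anchors[i] + ts[i]*dirs[i]."""
    pts = anchors[order] + ts[:, None] * dirs[order]
    return np.linalg.norm(np.diff(np.vstack([pts, pts[:1]]), axis=0), axis=1).sum()

def tspn_lines(anchors, dirs):
    """Enumerate visiting orders; solve each fixed-order convex subproblem."""
    n, best = len(anchors), np.inf
    for order in permutations(range(n)):
        order = np.array(order)
        res = minimize(lambda t: tour_length(t, anchors, dirs, order), np.zeros(n))
        best = min(best, res.fun)
    return best

# Three lines in the plane: x = 0, y = 0, and x + y = 4 (anchor + direction form).
anchors = np.array([[0.0, 0.0], [0.0, 0.0], [4.0, 0.0]])
dirs = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, 1.0]])
dirs = dirs / np.linalg.norm(dirs, axis=1)[:, None]
print(tspn_lines(anchors, dirs))  # about 2 * 4/sqrt(2) ~ 5.657
```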

    Proximal Algorithms for Smoothed Online Convex Optimization with Predictions

    We consider a smoothed online convex optimization (SOCO) problem with predictions, where the learner has access to a finite lookahead window of time-varying stage costs, but suffers a switching cost for changing its actions at each stage. Based on the Alternating Proximal Gradient Descent (APGD) framework, we develop Receding Horizon Alternating Proximal Descent (RHAPD) for proximable, non-smooth and strongly convex stage costs, and RHAPD-Smooth (RHAPD-S) for non-proximable, smooth and strongly convex stage costs. In addition to outperforming gradient descent-based algorithms while maintaining a comparable runtime complexity, our proposed algorithms also allow us to solve a wider range of problems. We provide theoretical upper bounds on the dynamic regret achieved by the proposed algorithms, which decay exponentially with the length of the lookahead window. The performance of the presented algorithms is empirically demonstrated via numerical experiments on non-smooth regression, dynamic trajectory tracking, and economic power dispatch problems.
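
    The exact RHAPD updates are given in the paper; the following is only a receding-horizon sketch under simplifying assumptions chosen so every update is closed-form: quadratic stage costs f_t(x) = 0.5*||x - theta_t||^2 and a squared-l2 switching cost. Within each lookahead window it alternates exact block minimizations, then commits the first action and slides the window forward.

```python
import numpy as np

def window_pass(x_prev, thetas, lam, sweeps=50):
    """Alternating exact minimization over one lookahead window."""
    w, _ = thetas.shape
    xs = np.tile(x_prev, (w, 1))
    for _ in range(sweeps):
        for k in range(w):
            left = x_prev if k == 0 else xs[k - 1]
            if k < w - 1:  # interior stage couples to both neighbors
                xs[k] = (thetas[k] + lam * (left + xs[k + 1])) / (1 + 2 * lam)
            else:          # last stage in the window has only a left neighbor
                xs[k] = (thetas[k] + lam * left) / (1 + lam)
    return xs

def receding_horizon(x0, all_thetas, w, lam):
    """Commit only the first action of each window, then slide forward."""
    x, traj, T = np.asarray(x0, dtype=float), [], len(all_thetas)
    for t in range(T):
        x = window_pass(x, all_thetas[t:min(t + w, T)], lam)[0]
        traj.append(x.copy())
    return np.array(traj)

rng = np.random.default_rng(0)
thetas = rng.normal(size=(20, 3))  # drifting quadratic minimizers
print(receding_horizon(np.zeros(3), thetas, w=5, lam=1.0)[-1])
```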

    Online Optimization with Predictions and Non-convex Losses

    We study online optimization in a setting where an online learner seeks to optimize a per-round hitting cost, which may be non-convex, while incurring a movement cost when changing actions between rounds. We ask: under what general conditions is it possible for an online learner to leverage predictions of future cost functions in order to achieve near-optimal costs? Prior work has provided near-optimal online algorithms for specific combinations of assumptions about hitting and switching costs, but no general results are known. In this work, we give two general sufficient conditions that specify a relationship between the hitting and movement costs which guarantees that a new algorithm, Synchronized Fixed Horizon Control (SFHC), achieves a 1 + O(1/w) competitive ratio, where w is the number of predictions available to the learner. Our conditions do not require the cost functions to be convex, and we also derive competitive ratio results for non-convex hitting and movement costs. Our results provide the first constant, dimension-free competitive ratio for online non-convex optimization with movement costs. We also give an example of a natural problem, Convex Body Chasing (CBC), where the sufficient conditions are not satisfied, and prove that no online algorithm can have a competitive ratio that converges to 1.
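
    To illustrate how a learner can exploit w predictions even with non-convex hitting costs, here is a fixed-horizon-control-style sketch (the synchronization step that distinguishes SFHC is described in the paper and not reproduced here): time is cut into blocks of length w, and each block is planned exactly by dynamic programming over a discretized one-dimensional action grid, so the hitting costs are free to be non-convex.

```python
import numpy as np

def plan_block(x_start, costs, grid):
    """DP over one block: costs[t][j] = hitting cost of action grid[j] at step t."""
    move = np.abs(grid[None, :] - grid[:, None])  # movement cost between grid points
    value = costs[0] + np.abs(grid - x_start)     # first step moves away from x_start
    choices = []
    for t in range(1, len(costs)):
        step = value[:, None] + move + costs[t][None, :]
        choices.append(step.argmin(axis=0))
        value = step.min(axis=0)
    # Backtrack the optimal trajectory through the block.
    j = int(value.argmin())
    path = [j]
    for choice in reversed(choices):
        j = int(choice[j])
        path.append(j)
    return grid[np.array(path[::-1])]

grid = np.linspace(-2, 2, 81)
rng = np.random.default_rng(1)
w, T, x = 4, 12, 0.0
for b in range(0, T, w):
    # Non-convex hitting costs: a random two-well function at each step.
    centers = rng.uniform(-1.5, 1.5, size=(w, 2))
    costs = [np.minimum((grid - c[0])**2, (grid - c[1])**2) for c in centers]
    block = plan_block(x, costs, grid)
    x = block[-1]  # commit the whole planned block, then replan
print(x)
```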