
    The Role of Dimension in the Online Chasing Problem

    Let $(X, d)$ be a metric space and $\mathcal{C} \subseteq 2^X$ a collection of special objects. In the $(X, d, \mathcal{C})$-chasing problem, an online player receives a sequence of online requests $\{B_t\}_{t=1}^T \subseteq \mathcal{C}$ and responds with a trajectory $\{x_t\}_{t=1}^T$ such that $x_t \in B_t$. This response incurs a movement cost $\sum_{t=1}^T d(x_t, x_{t-1})$, and the online player strives to minimize the competitive ratio -- the worst case ratio over all input sequences between the online movement cost and the optimal movement cost in hindsight. Under this setup, we call the $(X, d, \mathcal{C})$-chasing problem \textit{chaseable} if there exists an online algorithm with finite competitive ratio. In the case of Convex Body Chasing (CBC) over real normed vector spaces, (Bubeck et al. 2019) proved the chaseability of the problem. Furthermore, in the vector space setting, the dimension of the ambient space appears to be the factor controlling the size of the competitive ratio. Indeed, recently, (Sellke 2020) provided a $d$-competitive online algorithm over arbitrary real normed vector spaces $(\mathbb{R}^d, \|\cdot\|)$, and we will shortly present a general strategy for obtaining novel lower bounds of the form $\Omega(d^c)$, $c > 0$, for CBC in the same setting. In this paper, we also prove that the \textit{doubling} and \textit{Assouad} dimensions of a metric space exert no control on the hardness of ball chasing over the said metric space. More specifically, we show that for any large enough $\rho \in \mathbb{R}$, there exists a metric space $(X, d)$ of doubling dimension $\Theta(\rho)$ and Assouad dimension $\rho$ such that no online selector can achieve a finite competitive ratio in the general ball chasing regime.
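    To make the setup concrete, here is a minimal Python sketch of the chasing game in Euclidean $(\mathbb{R}^d, \|\cdot\|_2)$ with ball requests. The names `Ball` and `greedy_chase` are illustrative, and the deliberately naive greedy selector is not the algorithm of Bubeck et al. or Sellke; it only exhibits the trajectory and movement-cost bookkeeping.

```python
# A minimal sketch of the ball-chasing setup (illustrative names, not the
# paper's algorithms): the player greedily projects onto each requested ball
# B_t = {x : ||x - c_t|| <= r_t} and we tally sum_t d(x_t, x_{t-1}).
# Greedy is not competitive in general.
import numpy as np

class Ball:
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)

    def project(self, x):
        """Nearest point of the ball to x."""
        gap = x - self.center
        dist = np.linalg.norm(gap)
        if dist <= self.radius:
            return x  # already inside the ball: no movement needed
        return self.center + self.radius * gap / dist

def greedy_chase(x0, requests):
    """Serve each request with its nearest feasible point; return total movement cost."""
    x, cost = np.asarray(x0, dtype=float), 0.0
    for ball in requests:
        x_next = ball.project(x)
        cost += np.linalg.norm(x_next - x)  # movement cost d(x_t, x_{t-1})
        x = x_next
    return cost

# Two disjoint balls in R^2 requested alternately force repeated movement.
requests = [Ball([3, 0], 1), Ball([0, 0], 1)] * 3
print(greedy_chase([0, 0], requests))
```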

    Online Optimization with Memory and Competitive Control

    This paper presents competitive algorithms for a novel class of online optimization problems with memory. We consider a setting where the learner seeks to minimize the sum of a hitting cost and a switching cost that depends on the previous $p$ decisions. This setting generalizes Smoothed Online Convex Optimization. The proposed approach, Optimistic Regularized Online Balanced Descent, achieves a constant, dimension-free competitive ratio. Further, we show a connection between online optimization with memory and online control with adversarial disturbances. This connection, in turn, leads to a new constant-competitive policy for a rich class of online control problems.
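    As an illustration of the cost structure only (a sketch; `switching_cost` and `naive_learner` are hypothetical names, and the per-step greedy learner is not the paper's Optimistic Regularized Online Balanced Descent), consider quadratic hitting costs and a switching cost coupling the current decision to the previous $p$ decisions:

```python
# A sketch of online optimization with memory: at each step the learner pays a
# hitting cost f_t(x_t) = ||x_t - v_t||^2 plus a switching cost coupling x_t to
# its previous p decisions. With p = 1 (one weight) this reduces to Smoothed
# Online Convex Optimization. The greedy learner below is illustrative only.
import numpy as np

def switching_cost(past, x, weights):
    """Memory-p switching cost: sum_i w_i * ||x - x_{t-i}||^2 (past[-1] = x_{t-1})."""
    return sum(w * np.sum((x - past[-1 - i]) ** 2) for i, w in enumerate(weights))

def naive_learner(targets, weights=(1.0, 0.5)):
    """Play the per-step minimizer of hitting + switching cost; return total cost."""
    p = len(weights)
    past = [np.zeros_like(targets[0])] * p  # pad the pre-game decisions with 0
    total = 0.0
    for v in targets:
        # closed-form argmin of ||x - v||^2 + sum_i w_i * ||x - x_{t-i}||^2
        x = (v + sum(w * past[-1 - i] for i, w in enumerate(weights))) / (1 + sum(weights))
        total += np.sum((x - v) ** 2) + switching_cost(past, x, weights)
        past.append(x)
    return total

targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(naive_learner(targets))  # memory p = 2 with weights (1.0, 0.5)
```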

    A PTAS for Euclidean TSP with Hyperplane Neighborhoods

    In the Traveling Salesperson Problem with Neighborhoods (TSPN), we are given a collection of geometric regions in some space. The goal is to output a tour of minimum length that visits at least one point in each region. Even in the Euclidean plane, TSPN is known to be APX-hard, which gives rise to studying more tractable special cases of the problem. In this paper, we focus on the fundamental special case of regions that are hyperplanes in the $d$-dimensional Euclidean space. This case contrasts the much better understood case of so-called fat regions. While for $d = 2$ an exact algorithm with running time $O(n^5)$ is known, settling the exact approximability of the problem for $d = 3$ has been repeatedly posed as an open question. To date, only an approximation algorithm with guarantee exponential in $d$ is known, and NP-hardness remains open. For arbitrary fixed $d$, we develop a Polynomial Time Approximation Scheme (PTAS) that works for both the tour and path version of the problem. Our algorithm is based on approximating the convex hull of the optimal tour by a convex polytope of bounded complexity. Such polytopes are represented as solutions of a sophisticated LP formulation, which we combine with the enumeration of crucial properties of the tour. As the approximation guarantee approaches $1$, our scheme adjusts the complexity of the considered polytopes accordingly. In the analysis of our approximation scheme, we show that our search space includes a sufficiently good approximation of the optimum. To do so, we develop a novel and general sparsification technique to transform an arbitrary convex polytope into one with a constant number of vertices and, in turn, into one of bounded complexity in the above sense. Hereby, we maintain important properties of the polytope.
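    As a small illustration of the problem statement (a sketch of the objective only, not the PTAS; the names are hypothetical), the following checks that a candidate set of visit points lies on the given hyperplanes and measures the resulting closed tour. Already in the plane, two non-parallel lines show why hyperplane neighborhoods behave unlike fat regions: a single intersection point yields a tour of length zero.

```python
# A sketch of the TSPN-with-hyperplanes objective (not the PTAS itself):
# regions are hyperplanes {x : <a_i, x> = b_i}; a solution picks one visit
# point per hyperplane, and its cost is the length of the closed tour.
import numpy as np

def on_hyperplane(x, a, b, tol=1e-9):
    """Check that x lies on {y : <a, y> = b} (up to tolerance)."""
    return abs(np.dot(a, x) - b) <= tol * max(1.0, np.linalg.norm(a))

def tour_length(points):
    """Length of the closed tour visiting the points in the given order."""
    pts = [np.asarray(p, dtype=float) for p in points]
    return sum(np.linalg.norm(pts[(i + 1) % len(pts)] - pts[i]) for i in range(len(pts)))

# d = 2: the lines x = 1 and y = 1 intersect at (1, 1), so one point covers both.
hyperplanes = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 1.0)]
visits = [np.array([1.0, 1.0]), np.array([1.0, 1.0])]
assert all(on_hyperplane(x, a, b) for x, (a, b) in zip(visits, hyperplanes))
print(tour_length(visits))  # 0.0
```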

    Online Convex Optimization and Predictive Control in Dynamic Environments

    We study the performance of an online learner under a framework in which it receives partial information from a dynamic, and potentially adversarial, environment at discrete time steps. The goal of this learner is to minimize the sum of costs incurred at each time step, and its performance is compared against an offline learner with perfect information of the environment. We are interested in scenarios where, in addition to some costs at each time step, there are penalties or constraints on the learner's successive decisions. In the first part of this thesis, we investigate a Smoothed Online Convex Optimization (SOCO) setting where the cost functions are strongly convex and the learner pays a squared ℓ₂ movement cost for changing its decision between time steps. We shall present a lower bound on the competitive ratio of any online learner in this setting and show a series of algorithmic ideas that lead to an optimal algorithm matching this lower bound. In the second part of this thesis, we investigate a predictive control problem where the costs are well-conditioned, the learner's decisions are constrained by linear time-varying (LTV) dynamics, and the learner has exact predictions of the dynamics, costs, and disturbances for the next k time steps. We shall discuss a novel reduction from this LTV control problem to the aforementioned SOCO problem and use it to achieve a dynamic regret of O(λ^k T) and a competitive ratio of 1 + O(λ^k) for some positive constant λ < 1.
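    As a concrete sketch of the SOCO setting from the first part (illustrative only; `greedy_soco` is a hypothetical name, and the one-step greedy learner below is not the thesis's optimal algorithm), take m-strongly convex quadratic hitting costs and a squared ℓ₂ switching cost:

```python
# A sketch of SOCO with strongly convex costs: hitting cost
# f_t(x) = (m/2) * ||x - v_t||^2 plus switching cost ||x_t - x_{t-1}||^2.
# The learner below greedily minimizes each step's combined cost.
import numpy as np

def greedy_soco(targets, m=1.0):
    """Play argmin_x f_t(x) + ||x - x_prev||^2 each step; return total cost."""
    x = np.zeros_like(targets[0])
    total = 0.0
    for v in targets:
        # closed-form minimizer of (m/2)||x - v||^2 + ||x - x_prev||^2
        x_new = (m * v + 2.0 * x) / (m + 2.0)
        total += 0.5 * m * np.sum((x_new - v) ** 2) + np.sum((x_new - x) ** 2)
        x = x_new
    return total

targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(greedy_soco(targets, m=2.0))
```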