The Role of Dimension in the Online Chasing Problem
Let $(X, d)$ be a metric space and $\mathcal{F} \subseteq 2^X$ a collection of special objects. In the $\mathcal{F}$-chasing problem, an online player receives a sequence of online requests $F_1, F_2, \ldots \in \mathcal{F}$ and responds with a trajectory $x_1, x_2, \ldots$ such that $x_t \in F_t$ for all $t$. This response incurs a movement cost $\sum_t d(x_{t-1}, x_t)$, and the online player strives to minimize the competitive ratio: the worst-case ratio, over all input sequences, between the online movement cost and the optimal movement cost in hindsight. Under this setup, we call the $\mathcal{F}$-chasing problem chaseable if there exists an online algorithm with finite competitive ratio. In the case of Convex Body Chasing (CBC) over real normed vector spaces, Bubeck et al. (2019) proved the chaseability of the problem. Furthermore, in the vector space setting, the dimension $d$ of the ambient space appears to be the factor controlling the size of the competitive ratio. Indeed, Sellke (2020) recently provided an online algorithm over arbitrary $d$-dimensional real normed vector spaces whose competitive ratio is bounded in terms of $d$ alone, and we will shortly present a general strategy for obtaining novel dimension-dependent lower bounds for CBC in the same setting. In this paper, we also prove that the doubling and Assouad dimensions of a metric space exert no control on the hardness of ball chasing over the said metric space. More specifically, we exhibit metric spaces of bounded doubling and Assouad dimension over which no online selector can achieve a finite competitive ratio in the general ball-chasing regime.
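As a toy illustration of the chasing setup (a hypothetical instance, not one of the paper's constructions), the following Python sketch runs a greedy chaser against Euclidean ball requests in the plane and tallies the movement cost it accumulates:

```python
import math

# Hypothetical toy instance of ball chasing in the Euclidean plane.
# Greedy serves each request by moving to the nearest feasible point;
# the paper's hardness results concern far more intricate metric spaces.

def project_onto_ball(x, center, r):
    """Closest point to x in the closed ball B(center, r)."""
    dx, dy = x[0] - center[0], x[1] - center[1]
    dist = math.hypot(dx, dy)
    if dist <= r:
        return x  # already feasible: greedy pays nothing
    scale = r / dist
    return (center[0] + dx * scale, center[1] + dy * scale)

def greedy_chase(start, requests):
    """Return the greedy trajectory and its total movement cost
    sum_t d(x_{t-1}, x_t) over the request sequence."""
    x, cost, traj = start, 0.0, [start]
    for center, r in requests:
        nxt = project_onto_ball(x, center, r)
        cost += math.hypot(nxt[0] - x[0], nxt[1] - x[1])
        x = nxt
        traj.append(x)
    return traj, cost

# Alternating requests force the greedy player to keep paying movement.
requests = [((4.0, 0.0), 1.0), ((0.0, 3.0), 1.0), ((4.0, 0.0), 1.0)]
traj, cost = greedy_chase((0.0, 0.0), requests)
```

Repeating such an alternating request pattern is the standard way hardness arguments drive up the online cost while the offline player can park at a single point that is nearly feasible for every request.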
Online Optimization with Memory and Competitive Control
This paper presents competitive algorithms for a novel class of online optimization problems with memory. We consider a setting where the learner seeks to minimize the sum of a hitting cost and a switching cost that depends on the previous p decisions. This setting generalizes Smoothed Online Convex Optimization. The proposed approach, Optimistic Regularized Online Balanced Descent, achieves a constant, dimension-free competitive ratio. Further, we show a connection between online optimization with memory and online control with adversarial disturbances. This connection, in turn, leads to a new constant-competitive policy for a rich class of online control problems.
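The cost structure above can be made concrete with a minimal, hypothetical sketch: quadratic hitting costs plus a switching cost that depends on the previous p decisions (here, squared distance to their average). This is not the paper's Optimistic Regularized Online Balanced Descent algorithm, only an illustration of the objective such a learner competes on:

```python
# Hypothetical sketch of online optimization with memory: at each round
# the learner greedily minimizes hitting cost (x - theta_t)^2 plus a
# switching cost beta * (x - avg of previous p decisions)^2.

def run_learner(targets, p=2, beta=1.0):
    history = [0.0] * p          # the p most recent decisions
    total_cost = 0.0
    for theta in targets:        # hitting cost f_t(x) = (x - theta)^2
        anchor = sum(history) / p
        # Closed-form minimizer of (x - theta)^2 + beta * (x - anchor)^2
        x = (theta + beta * anchor) / (1.0 + beta)
        total_cost += (x - theta) ** 2 + beta * (x - anchor) ** 2
        history = history[1:] + [x]
    return total_cost
```

With p = 1 this reduces to the familiar Smoothed Online Convex Optimization setting, which is exactly the generalization the abstract describes.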
A PTAS for Euclidean TSP with Hyperplane Neighborhoods
In the Traveling Salesperson Problem with Neighborhoods (TSPN), we are given
a collection of geometric regions in some space. The goal is to output a tour
of minimum length that visits at least one point in each region. Even in the
Euclidean plane, TSPN is known to be APX-hard, which gives rise to studying
more tractable special cases of the problem. In this paper, we focus on the
fundamental special case of regions that are hyperplanes in $d$-dimensional
Euclidean space. This case contrasts with the much better understood case of
so-called fat regions.
While for $d = 2$ an exact polynomial-time algorithm is known, settling the
exact approximability of the problem for $d \ge 3$ has been repeatedly posed
as an open question. To date, only an approximation algorithm with guarantee
exponential in $d$ is known, and NP-hardness remains open.
For arbitrary fixed $d$, we develop a Polynomial Time Approximation Scheme
(PTAS) that works for both the tour and path version of the problem. Our
algorithm is based on approximating the convex hull of the optimal tour by a
convex polytope of bounded complexity. Such polytopes are represented as
solutions of a sophisticated LP formulation, which we combine with the
enumeration of crucial properties of the tour. As the approximation guarantee
approaches $1$, our scheme adjusts the complexity of the considered polytopes
accordingly.
In the analysis of our approximation scheme, we show that our search space
includes a sufficiently good approximation of the optimum. To do so, we develop
a novel and general sparsification technique to transform an arbitrary convex
polytope into one with a constant number of vertices and, in turn, into one of
bounded complexity in the above sense. Throughout, we maintain important
properties of the polytope.
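A crude 2D analogue conveys the flavor of such sparsification (this is a hypothetical stand-in, not the paper's technique, and it preserves convexity but not the paper's guarantee on geometric error): compute a convex hull and keep only a bounded number of its vertices.

```python
# Hypothetical 2D analogue of polytope sparsification: approximate a
# convex polygon by a sub-polygon on at most k of its vertices.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def sparsify(hull, k):
    """Keep at most k evenly spaced hull vertices; the result is again
    convex because its vertices appear in hull order."""
    n = len(hull)
    if n <= k:
        return hull
    step = n / k
    return [hull[int(i * step)] for i in range(k)]

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
sparse = sparsify(hull, 3)
```

Selecting a vertex subset is the simplest way to bound a polytope's complexity; the paper's LP-based representation pursues the same goal in higher dimensions with controlled loss.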
Online Convex Optimization and Predictive Control in Dynamic Environments
We study the performance of an online learner under a framework in which it receives partial information from a dynamic, and potentially adversarial, environment at discrete time steps. The goal of this learner is to minimize the sum of costs incurred at each time step, and its performance is compared against an offline learner with perfect information of the environment.
We are interested in scenarios where, in addition to the costs at each time step, there are penalties or constraints on the learner's successive decisions. In the first part of this thesis, we investigate a Smoothed Online Convex Optimization (SOCO) setting where the cost functions are strongly convex and the learner pays a squared ℓ₂ movement cost for changing its decision between time steps. We shall present a lower bound on the competitive ratio of any online learner in this setting and a series of algorithmic ideas that lead to an optimal algorithm matching this lower bound. In the second part of this thesis, we investigate a predictive control problem where the costs are well-conditioned and the learner's decisions are constrained by linear time-varying (LTV) dynamics, but the learner has exact predictions of the dynamics, costs, and disturbances for the next k time steps. We shall discuss a novel reduction from this LTV control problem to the aforementioned SOCO problem and use it to achieve a dynamic regret of O(λᵏT) and a competitive ratio of 1 + O(λᵏ) for some positive constant λ < 1.
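The SOCO cost accounting from the first part can be sketched in a few lines (a hypothetical scalar example, not the thesis's optimal algorithm): strongly convex hitting costs f_t(x) = (m/2)(x − θ_t)² plus a squared ℓ₂ switching cost (1/2)(x_t − x_{t−1})², with a learner that greedily minimizes their sum each round.

```python
# Hypothetical scalar SOCO sketch: greedy minimization of hitting cost
# (m/2)(x - theta)^2 plus squared movement cost (1/2)(x - x_prev)^2.

def greedy_soco(thetas, m=2.0):
    x, alg_cost = 0.0, 0.0
    for theta in thetas:
        # Closed-form argmin of (m/2)(x - theta)^2 + (1/2)(x - x_prev)^2
        nxt = (m * theta + x) / (m + 1.0)
        alg_cost += 0.5 * m * (nxt - theta) ** 2 + 0.5 * (nxt - x) ** 2
        x = nxt
    return alg_cost
```

Greedy is generally suboptimal here because it ignores future costs; the thesis's matching upper and lower bounds characterize how much better any online learner can do, and the LTV reduction inherits this same cost structure.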