8 research outputs found

    Online optimization with switching cost

    Get PDF
    We consider algorithms for "smoothed online convex optimization (SOCO)" problems. SOCO is a variant of the class of "online convex optimization (OCO)" problems that is strongly related to the class of "metrical task systems", each of which have been studied extensively. Prior literature on these problems has focused on two performance metrics: regret and competitive ratio. There exist known algorithms with sublinear regret and known algorithms with constant competitive ratios; however no known algorithms achieve both. In this paper, we show that this is due to a fundamental incompatibility between regret and the competitive ratio -- no algorithm (deterministic or randomized) can achieve sublinear regret and a constant competitive ratio, even in the case when the objective functions are linear

    A Tale of Two Metrics: Simultaneous Bounds on Competitiveness and Regret

    Get PDF
    We consider algorithms for “smoothed online convex optimization” (SOCO) problems, which are a hybrid between online convex optimization (OCO) and metrical task system (MTS) problems. Historically, the performance metric for OCO has been regret and that for MTS has been the competitive ratio (CR). There are algorithms with either sublinear regret or constant CR, but no known algorithm achieves both simultaneously. We show that this is a fundamental limitation: no algorithm (deterministic or randomized) can achieve sublinear regret and a constant CR, even when the objective functions are linear and the decision space is one-dimensional. However, we present an algorithm that, for the important one-dimensional case, provides sublinear regret together with a CR that grows arbitrarily slowly.
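
    To make the two metrics concrete, here is a small numerical illustration (our own construction, not the paper's lower-bound instance): regret compares the online cost to the best single point in hindsight, while the competitive ratio compares it to the best time-varying offline sequence, computed below by dynamic programming over a grid.

        # Regret vs. competitive ratio on a toy 1-D SOCO instance
        # (hypothetical example, not from the paper).
        def cost(fs, xs, x0=0.0):
            c, prev = 0.0, x0
            for f, x in zip(fs, xs):
                c += f(x) + abs(x - prev)  # hitting + switching cost
                prev = x
            return c

        def dynamic_opt(fs, grid, x0=0.0):
            """Offline optimum over a finite grid, by dynamic programming."""
            best = {x: fs[0](x) + abs(x - x0) for x in grid}
            for f in fs[1:]:
                best = {x: f(x) + min(b + abs(x - p) for p, b in best.items())
                        for x in grid}
            return min(best.values())

        grid = [i / 10 for i in range(11)]
        fs = [lambda x, a=a: (x - a) ** 2 for a in (0.0, 1.0) * 4]  # oscillating minima
        alg = [0.5] * len(fs)             # one fixed decision, for illustration
        regret = cost(fs, alg) - min(cost(fs, [p] * len(fs)) for p in grid)
        ratio = cost(fs, alg) / dynamic_opt(fs, grid)
        print(f"regret={regret:.2f}, competitive ratio={ratio:.2f}")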

    Online Convex Optimization Using Predictions

    Get PDF
    Making use of predictions is a crucial but under-explored area of online algorithms. This paper studies a class of online optimization problems in which external noisy predictions are available. We propose a stochastic prediction error model that generalizes prior models in the learning and stochastic control communities, incorporates correlation among prediction errors, and captures the fact that predictions improve as time passes. We prove that achieving sublinear regret and a constant competitive ratio requires an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and a constant competitive ratio in expectation using only a constant-sized prediction window. Furthermore, we show that the performance of AFHC is tightly concentrated around its mean.
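
    The abstract's positive result uses Averaging Fixed Horizon Control (AFHC). The sketch below is a heavily simplified 1-D version (quadratic hitting costs, predictions treated as exact, brute-force planning over a small decision grid; all our assumptions, not the paper's model): AFHC runs w+1 fixed-horizon controllers whose replanning times are staggered by one step, and plays the average of their actions.

        import itertools

        def best_plan(preds, x_start, grid):
            """Brute-force cheapest trajectory against the predicted costs."""
            def total(traj):
                c, prev = 0.0, x_start
                for y, x in zip(preds, traj):
                    c += (x - y) ** 2 + abs(x - prev)  # hitting + switching
                    prev = x
                return c
            return min(itertools.product(grid, repeat=len(preds)), key=total)

        def afhc(y, w, grid, x0=0.0):
            """Average the actions of w+1 staggered fixed-horizon controllers."""
            T = len(y)
            actions = [[x0] * T for _ in range(w + 1)]  # controller k idles at x0 until t=k (our choice)
            for k in range(w + 1):
                pos, t = x0, k
                while t < T:
                    traj = best_plan(y[t:t + w + 1], pos, grid)  # plan next w+1 steps
                    for i, x in enumerate(traj):
                        actions[k][t + i] = x                    # commit the whole plan
                    pos, t = traj[-1], t + w + 1
            return [sum(a[t] for a in actions) / (w + 1) for t in range(T)]

        y = [0.0, 1.0, 0.0, 1.0, 2.0, 1.0]    # predictions, taken as exact here
        grid = [i / 2 for i in range(5)]      # decisions restricted to {0, 0.5, ..., 2}
        print(afhc(y, w=2, grid=grid))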

    On-line algorithms for the K-server problem and its variants.

    Get PDF
    by Chi-ming Wat. Thesis (M.Phil.), Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 77-82). Contents:
    Chapter 1: Introduction
        1.1 Performance analysis of on-line algorithms
        1.2 Randomized algorithms
        1.3 Types of adversaries
        1.4 Overview of the results
    Chapter 2: The k-server problem
        2.1 Introduction
        2.2 Related Work
        2.3 The Evolution of Work Function Algorithm
        2.4 Definitions
        2.5 The Work Function Algorithm
        2.6 The Competitive Analysis
    Chapter 3: The weighted k-server problem
        3.1 Introduction
        3.2 Related Work
        3.3 Fiat and Ricklin's Algorithm
        3.4 The Work Function Algorithm
        3.5 The Competitive Analysis
    Chapter 4: The Influence of Lookahead
        4.1 Introduction
        4.2 Related Work
        4.3 The Role of l-lookahead
        4.4 The LRU Algorithm with l-lookahead
        4.5 The Competitive Analysis
    Chapter 5: Space Complexity
        5.1 Introduction
        5.2 Related Work
        5.3 Preliminaries
        5.4 The TWO Algorithm
        5.5 Competitive Analysis
        5.6 Remarks
    Chapter 6: Conclusions
        6.1 Summary of Our Results
        6.2 Recent Results
            6.2.1 The Adversary Models
            6.2.2 On-line Performance-Improvement Algorithms
    Appendix A: Proof of Lemma 1
    Bibliography
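
    Chapters 2 and 3 of the thesis center on the Work Function Algorithm (WFA). As a rough illustration (not the thesis's own code), the brute-force sketch below runs WFA on a small finite metric, representing server configurations as k-subsets and computing configuration distances by exhaustive matching:

        import itertools

        def match_cost(A, B, d):
            """Cheapest way to move the servers from configuration A to B."""
            A = list(A)
            return min(sum(d[a][b] for a, b in zip(A, perm))
                       for perm in itertools.permutations(B))

        def wfa(points, d, k, requests):
            """Work Function Algorithm by brute force over all k-configurations."""
            configs = [frozenset(c) for c in itertools.combinations(points, k)]
            start = configs[0]                                # arbitrary initial config
            w = {C: match_cost(start, C, d) for C in configs} # w_0 = distance from start
            cur, total = start, 0.0
            for r in requests:
                # update: w_t(C) = min over Z containing r of w_{t-1}(Z) + D(Z, C)
                w = {C: min(w[Z] + match_cost(Z, C, d) for Z in configs if r in Z)
                     for C in configs}
                # serve r from the configuration minimizing w_t(C) + D(cur, C)
                nxt = min((C for C in configs if r in C),
                          key=lambda C: w[C] + match_cost(cur, C, d))
                total += match_cost(cur, nxt, d)
                cur = nxt
            return total

        points = [0, 1, 3]                                    # three points on a line
        d = {a: {b: abs(a - b) for b in points} for a in points}
        print(wfa(points, d, k=2, requests=[3, 0, 1, 3]))     # total movement cost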