Optimal Lower Bounds for Projective List Update Algorithms
The list update problem is a classical online problem, with an optimal
competitive ratio that is still open, known to be somewhere between 1.5 and
1.6. An algorithm with competitive ratio 1.6, the smallest known to date, is
COMB, a randomized combination of BIT and the TIMESTAMP algorithm TS. This and
almost all other list update algorithms, like MTF, are projective in the sense
that they can be defined by looking only at any pair of list items at a time.
Projectivity (also known as "list factoring") simplifies both the description
of the algorithm and its analysis, and so far seems to be the only way to
define a good online algorithm for lists of arbitrary length. In this paper we
characterize all projective list update algorithms and show that their
competitive ratio is never smaller than 1.6 in the partial cost model.
Therefore, COMB is a best possible projective algorithm in this model.
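To make the setting concrete, here is a minimal sketch of MTF under the two cost models discussed in these abstracts (the function name mtf_cost is ours, not from the paper; an access at 1-indexed position i costs i-1 in the partial cost model and i in the full cost model):

```python
def mtf_cost(requests, initial_list, partial=True):
    """Serve a request sequence with Move-To-Front (MTF).

    In the partial cost model an access to the item at 1-indexed
    position i costs i - 1; in the full cost model it costs i.
    """
    lst = list(initial_list)
    total = 0
    for x in requests:
        i = lst.index(x)           # 0-based position of the requested item
        total += i if partial else i + 1
        lst.insert(0, lst.pop(i))  # move the accessed item to the front
    return total
```

Projectivity means that the relative order of any two items under the algorithm depends only on the requests to those two items, so the total cost can be analyzed pairwise; MTF has this property, which is why the pairwise (list factoring) technique applies to it.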
New Bounds for Randomized List Update in the Paid Exchange Model
We study the fundamental list update problem in the paid exchange model P^d. This cost model was introduced by Manasse, McGeoch and Sleator [M.S. Manasse et al., 1988] and Reingold, Westbrook and Sleator [N. Reingold et al., 1994]. Here the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps.
We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries. No such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i-1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Furthermore, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again, the lower bounds hold for large d.
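A hedged sketch of how costs are accounted in P^d (the function and the policy interface are our illustration, not the paper's algorithm): accesses are charged under the full cost model, every adjacent swap costs d, and there are no free exchanges.

```python
def serve_with_paid_exchanges(requests, initial_list, d, policy):
    """Cost accounting in the paid exchange model P^d (illustrative).

    Accessing the item at 0-based position i costs i + 1 (full cost
    model); every swap of two adjacent items costs d, and there are
    no free exchanges.  After each access, `policy` returns the new
    position for the accessed item (0 = front); forward moves only,
    i.e. the returned position is assumed to be at most i.
    """
    lst = list(initial_list)
    total = 0
    for x in requests:
        i = lst.index(x)
        total += i + 1                 # access cost in the full cost model
        target = policy(lst, i)        # where to move the accessed item
        total += d * (i - target)      # each adjacent swap costs d
        lst.insert(target, lst.pop(i))
    return total
```

With policy lambda lst, i: 0 this charges move-to-front behavior d per adjacent swap; with lambda lst, i: i the list is never rearranged and only access costs accrue.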
Revisiting the COUNTER Algorithms for List Update
COUNTER algorithms, a family of randomized algorithms for the list update problem, were introduced by Reingold, Westbrook and Sleator [7]. They showed that for any ε > 0, there exist COUNTER algorithms that achieve a competitive ratio of √3 + ε. In this paper we use a mixture of two COUNTER algorithms to achieve a competitiveness of 12/7, which is less than √3. Furthermore, we demonstrate that it is impossible to prove a competitive ratio smaller than 12/7 for any mixture of COUNTER algorithms using the type of potential function argument that has been used so far. We also provide new lower bounds for the competitiveness of COUNTER algorithms in the standard cost model, including a 1.625 lower bound for the variant BIT and a matching 12/7 lower bound for our algorithm.
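One standard formulation of the COUNTER family (a sketch under our reading of Reingold, Westbrook and Sleator; the function name is ours): COUNTER(s, S) gives each item a counter drawn uniformly from {0, ..., s-1}; on each access the item's counter is decremented mod s, and the item moves to the front exactly when the new value lies in S. BIT corresponds to s = 2.

```python
import random

def counter_algorithm(requests, initial_list, s, move_set, rng=random):
    """Sketch of COUNTER(s, S) in the partial cost model.

    Each item keeps a counter initialized uniformly at random in
    {0, ..., s-1}.  On an access the counter is decremented mod s,
    and the item is moved to the front iff the new value is in S.
    """
    lst = list(initial_list)
    ctr = {x: rng.randrange(s) for x in lst}  # random initial counters
    total = 0
    for x in requests:
        i = lst.index(x)
        total += i                    # partial cost: position i costs i
        ctr[x] = (ctr[x] - 1) % s
        if ctr[x] in move_set:
            lst.insert(0, lst.pop(i))  # free exchange to the front
    return total
```

A mixture in the paper's sense runs one of two such algorithms, chosen once at random with a fixed probability; note that with s = 1 the rule degenerates to MTF.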
Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem
Burrows-Wheeler compression is a three stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used on the List Update problem. In 1985, Competitive Analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count for the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent identically distributed data, and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are less. The improvements to Burrows-Wheeler compression this work covers are increases in the amount, not the speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform, which resembles piecewise independent identically distributed data. Several variations of Move One From Front and part of the randomized algorithm Timestamp are also analyzed, with respect to overwork, asymptotic cost, and competitive ratio, as algorithms for both the middle stage of Burrows-Wheeler compression and the List Update problem. The Best x of 2x-1 family includes Move-To-Front, the part of Timestamp of interest, and Frequency Count. Lastly, a greedy choosing scheme, Snake, switches back and forth between two List Update algorithms as the amount of compression they achieve fluctuates, to increase overall compression. The Burrows-Wheeler Transform is based on sorting of contexts. The other improvements are better sorting orders, such as "aeioubcdf..." instead of the standard alphabetical "abcdefghi..." on English text data, an algorithm for computing such orders for any data, and Gray code sorting instead of standard sorting.
Both techniques lessen the overwork incurred by whatever List Update algorithms are used by reducing the difference between adjacent sorted contexts.
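The first two stages of the pipeline can be sketched naively (the quadratic rotation sort is for illustration only; practical implementations use suffix-array methods, and the entropy coder, Best x of 2x-1, and Snake are not shown):

```python
def bwt(s, sentinel="\0"):
    """Naive Burrows-Wheeler Transform via sorted rotations."""
    s += sentinel                      # unique end-of-string marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def mtf_encode(s, alphabet):
    """Move-To-Front stage: emit each symbol's current list position."""
    lst = list(alphabet)
    out = []
    for c in s:
        i = lst.index(c)
        out.append(i)
        lst.insert(0, lst.pop(i))      # accessed symbol moves to the front
    return out
```

For example, bwt("banana") yields "annb\0aa" (with a NUL sentinel); the runs of repeated symbols the transform produces are exactly what makes the Move-To-Front stage emit many small indices for the entropy coder to exploit.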
Randomization can be as helpful as a glimpse of the future in online computation
We provide simple but surprisingly useful direct product theorems for proving
lower bounds on online algorithms with a limited amount of advice about the
future. As a consequence, we are able to translate decades of research on
randomized online algorithms to the advice complexity model. Doing so improves
significantly on the previous best advice complexity lower bounds for many
online problems, or provides the first known lower bounds. For example, if n
is the number of requests, we show that:
(1) A paging algorithm needs Ω(n) bits of advice to achieve a
competitive ratio better than H_k, where k is the cache
size and H_k denotes the k-th harmonic number. Previously, it was only known that Ω(n) bits of advice were
necessary to achieve a constant competitive ratio smaller than log k.
(2) Every O(n^(1-ε))-competitive vertex coloring algorithm must
use Ω(n log n) bits of advice. Previously, it was only known that
Ω(n log n) bits of advice were necessary to be optimal.
For certain online problems, including the MTS, k-server, paging, list
update, and dynamic binary search tree problem, our results imply that
randomization and sublinear advice are equally powerful (if the underlying
metric space or node set is finite). This means that several long-standing open
questions regarding randomized online algorithms can be equivalently stated as
questions regarding online algorithms with sublinear advice. For example, we
show that there exists a deterministic O(log k)-competitive k-server
algorithm with advice complexity o(n) if and only if there exists a
randomized O(log k)-competitive k-server algorithm without advice.
Technically, our main direct product theorem is obtained by extending an
information theoretical lower bound technique due to Emek, Fraigniaud, Korman,
and Rosén [ICALP'09].
Online Computation with Untrusted Advice
The advice model of online computation captures a setting in which the
algorithm is given some partial information concerning the request sequence.
This paradigm makes it possible to establish tradeoffs between the amount of this
additional information and the performance of the online algorithm. However, if
the advice is corrupt or, worse, if it comes from a malicious source, the
algorithm may perform poorly. In this work, we study online computation in a
setting in which the advice is provided by an untrusted source. Our objective
is to quantify the impact of untrusted advice so as to design and analyze
online algorithms that are robust and perform well even when the advice is
generated in a malicious, adversarial manner. To this end, we focus on
well-studied online problems such as ski rental, online bidding, bin packing,
and list update. For ski rental and online bidding, we show how to obtain
algorithms that are Pareto-optimal with respect to the competitive ratios
achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018] in
which Pareto-optimality is not necessarily guaranteed. For bin packing and list
update, we give online algorithms with worst-case tradeoffs in their
competitiveness, depending on whether the advice is trusted or not; this is
motivated by work of Lykouris and Vassilvitskii [ICML 2018] on the paging
problem, but in which the competitiveness depends on the reliability of the
advice. Furthermore, we demonstrate how to prove lower bounds, within this
model, on the tradeoff between the number of advice bits and the
competitiveness of any online algorithm. Last, we study the effect of
randomization: here we show that for ski-rental there is a randomized algorithm
that Pareto-dominates any deterministic algorithm with advice of any size. We
also show that a single random bit is not always inferior to a single advice
bit, as it happens in the standard model.
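For flavor, here is a sketch of the kind of trust-parameterized ski rental strategy this line of work starts from, in the style of Purohit et al. [NeurIPS 2018] (parameter names are ours; the Pareto-optimal algorithms of this paper differ):

```python
import math

def ski_rental_cost(buy_price, true_days, advice_days, lam):
    """Ski rental with possibly untrusted advice (Purohit et al. style).

    Renting costs 1 per day; buying costs buy_price.  advice_days is a
    prediction of the season length, and lam in (0, 1] sets how much
    the advice is trusted: small lam means high trust (near-optimal
    cost when the advice is right), while larger lam caps the damage
    that malicious advice can cause.
    """
    if advice_days >= buy_price:
        buy_day = math.ceil(lam * buy_price)   # advice says: buy early
    else:
        buy_day = math.ceil(buy_price / lam)   # advice says: keep renting
    if true_days >= buy_day:
        return (buy_day - 1) + buy_price       # rented, then bought
    return true_days                           # season ended while renting
```

Smaller lam trusts the advice more: the cost approaches the offline optimum min(true_days, buy_price) when the advice is correct, at the price of a worse worst case (roughly (1 + lam) consistency versus (1 + 1/lam) robustness in Purohit et al.'s analysis).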
Alternative Measures for the Analysis of Online Algorithms
In this thesis we introduce and evaluate several new models for the analysis of online algorithms. In an online problem, the algorithm does not know the entire input from the beginning; the input is revealed in a sequence of steps. At each step the algorithm should make its decisions based on the past and without any knowledge about the future. Many important real-life problems such as paging and routing are intrinsically online and thus the design and analysis of
online algorithms is one of the main research areas in theoretical computer science.
Competitive analysis is the standard measure for analysis of online algorithms. It has been applied to many online problems in diverse areas ranging from robot navigation, to network routing, to scheduling, to online graph coloring. While in several instances competitive analysis gives satisfactory results, for certain problems it results in unrealistically pessimistic ratios and/or
fails to distinguish between algorithms that have vastly differing performance under any practical characterization. Addressing these shortcomings has been the subject of intense research by many of the best minds in the field. In this thesis, building upon recent advances of others, we introduce some new models for the analysis of online algorithms, namely Bijective Analysis, Average Analysis,
Parameterized Analysis, and Relative Interval Analysis. We show that they lead to good results when applied to paging and list update algorithms. Paging and list update are two well known online problems. Paging is one of the main examples of poor behavior of competitive analysis. We show that LRU is the unique optimal online paging algorithm according to Average Analysis on sequences with locality of reference. Recall that in practice input sequences for paging have
high locality of reference. It has long been empirically established that LRU is the best paging algorithm. Yet, Average Analysis is the first model that gives a strict separation of LRU from all other online paging algorithms, thus solving a long-standing open problem. We prove a similar
result for the optimality of MTF for list update on sequences with locality of reference.
A technique for the analysis of online algorithms has to be effective to be useful in day-to-day analysis of algorithms. While Bijective and Average Analysis succeed at providing fine separation, their application can be, at times, cumbersome. Thus we apply a parameterized or adaptive analysis framework to online algorithms. We show that this framework is effective, can be applied more easily to a larger family of problems and leads to finer analysis than the competitive ratio. The conceptual innovation of parameterizing the performance of an algorithm by something other than the input size was first introduced over three decades ago [124, 125]. By now it has been extensively studied and understood in the context of adaptive analysis (for problems in P) and parameterized algorithms (for NP-hard problems), yet to our knowledge
this thesis is the first systematic application of this technique to the study of online algorithms. Interestingly, competitive analysis can be recast as a particular form of parameterized analysis in
which the performance of OPT is the parameter. In general, for each problem we can choose the parameter/measure that best reflects the difficulty of the input. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity
of a given input sequence. Using a finer, more natural measure we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results. We test list update algorithms in the context of a data compression problem known to have locality of reference. Our experiments show MTF outperforms other list update algorithms
in practice after BWT. This is consistent with the intuition that BWT increases locality of reference.
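The experimental observation about locality can be reproduced in miniature (the request sequence and initial list below are made up for illustration): on a high-locality sequence MTF pays less than Transpose, because a newly hot item reaches the front in one step rather than one position per access.

```python
def list_update_cost(requests, initial_list, update):
    """Total access cost of a list update rule in the full cost model."""
    lst = list(initial_list)
    total = 0
    for x in requests:
        i = lst.index(x)
        total += i + 1        # accessing 0-based position i costs i + 1
        update(lst, i)        # rearrange the list after the access
    return total

def mtf(lst, i):
    """Move-To-Front: bring the accessed item to the head of the list."""
    lst.insert(0, lst.pop(i))

def transpose(lst, i):
    """Transpose: swap the accessed item one position toward the front."""
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
```

On the high-locality sequence "aaaabbbbccccdddd" served from the initial list "abcd", MTF pays a total of 22 while Transpose pays 26.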