New Bounds for Randomized List Update in the Paid Exchange Model
We study the fundamental list update problem in the paid exchange model P^d. This cost model was introduced by Manasse, McGeoch and Sleator [M.S. Manasse et al., 1988] and Reingold, Westbrook and Sleator [N. Reingold et al., 1994]. Here the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps.
We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries. No such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i-1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Furthermore, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again, the lower bounds hold for large d.
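As a concrete illustration of the cost accounting in the paid exchange model (not of the paper's algorithm), the following Python sketch charges d for every adjacent swap and i (full cost) or i-1 (partial cost) for an access to the i-th item, under the naive policy that moves each requested item to the front using paid exchanges only; the function name and the example policy are assumptions made purely for illustration.

def serve_mtf_paid(requests, items, d, partial=False):
    # Serve `requests` on the list `items`, moving each requested item to the
    # front using paid exchanges only; each adjacent swap costs d.
    lst = list(items)
    total = 0
    for r in requests:
        i = lst.index(r) + 1                  # 1-based position of the requested item
        total += (i - 1) if partial else i    # access cost: i-1 (partial) or i (full)
        total += d * (i - 1)                  # i-1 paid exchanges to bring r to the front
        lst.insert(0, lst.pop(i - 1))         # rearrange the list
    return total

print(serve_mtf_paid("cbacba", "abc", d=5))   # full cost model, swap cost d = 5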
Optimal Lower Bounds for Projective List Update Algorithms
The list update problem is a classical online problem, with an optimal
competitive ratio that is still open, known to be somewhere between 1.5 and
1.6. An algorithm with competitive ratio 1.6, the smallest known to date, is
COMB, a randomized combination of BIT and the TIMESTAMP algorithm TS. This and
almost all other list update algorithms, like MTF, are projective in the sense
that they can be defined by looking only at any pair of list items at a time.
Projectivity (also known as "list factoring") simplifies both the description
of the algorithm and its analysis, and so far seems to be the only way to
define a good online algorithm for lists of arbitrary length. In this paper we
characterize all projective list update algorithms and show that their
competitive ratio is never smaller than 1.6 in the partial cost model.
Therefore, COMB is a best possible projective algorithm in this model.
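For concreteness, here is a minimal Python sketch of BIT (one of COMB's two components) as it is commonly stated: every item carries a bit initialized uniformly at random, the bit is complemented on each access, and the item is moved to the front whenever the complemented bit equals 1; because the bits start uniformly at random, the particular convention does not affect the analysis. The class and method names are illustrative.

import random

class Bit:
    # Randomized BIT list update algorithm (a building block of COMB).
    def __init__(self, items):
        self.lst = list(items)
        self.bit = {x: random.randint(0, 1) for x in self.lst}  # independent random bits

    def access(self, x):
        cost = self.lst.index(x) + 1   # full access cost: position of x in the list
        self.bit[x] ^= 1               # complement x's bit on every access
        if self.bit[x] == 1:           # move x to the front only when its bit becomes 1
            self.lst.remove(x)
            self.lst.insert(0, x)
        return cost

alg = Bit("abcde")
total_cost = sum(alg.access(x) for x in "eeddcba")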
List Update with Delays or Time Windows
We consider the problem of List Update, one of the most fundamental problems
in online algorithms. We are given a list of elements and requests for these
elements that arrive over time. Our goal is to serve these requests, at a cost
equivalent to their position in the list, with the option of moving them
towards the head of the list. Sleator and Tarjan introduced the famous "Move to
Front" algorithm (wherein any requested element is immediately moved to the
head of the list) and showed that it is 2-competitive. While this bound is
excellent, the absolute cost of the algorithm's solution may be very large
(e.g., requesting the last half of the elements of the list would result in a solution
cost that is quadratic in the length of the list). Thus, we consider the more
general problem wherein every request arrives with a deadline and must be
served, not immediately, but rather before the deadline. We further allow the
algorithm to serve multiple requests simultaneously. We denote this problem as
List Update with Time Windows. While this generalization benefits from lower
solution costs, it requires new types of algorithms. In particular, for the
simple example of requesting the last half of the elements of the list with
overlapping time windows, Move-to-Front fails. We show an O(1)-competitive
algorithm. The algorithm is natural but the analysis is a bit complicated and a
novel potential function is required. Thereafter we consider the more general
problem of List Update with Delays in which the deadlines are replaced with
arbitrary delay functions. This problem includes as a special case the prize
collecting version in which a request might not be served (up to some deadline)
and instead suffers an arbitrary given penalty. Here we also establish an
O(1)-competitive algorithm for general delays. The algorithm for the delay
version is more complex and its analysis is significantly more involved.
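To make the motivating example above concrete, this short Python check (an illustration, not code from the paper) reproduces the claim that requesting the last half of the elements forces Move-To-Front to pay a cost quadratic in the list length, since each of these requests is still found near the back of the list:

def mtf_cost(requests, items):
    # Total access cost of Move-To-Front: pay the position of each requested
    # item, then move that item to the head of the list.
    lst = list(items)
    cost = 0
    for r in requests:
        i = lst.index(r)
        cost += i + 1
        lst.insert(0, lst.pop(i))
    return cost

n = 1000
items = list(range(1, n + 1))
last_half = list(range(n // 2 + 1, n + 1))   # request the last n/2 elements once each
print(mtf_cost(last_half, items))            # about 3n^2/8, i.e. quadratic in n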
Randomization can be as helpful as a glimpse of the future in online computation
We provide simple but surprisingly useful direct product theorems for proving
lower bounds on online algorithms with a limited amount of advice about the
future. As a consequence, we are able to translate decades of research on
randomized online algorithms to the advice complexity model. Doing so improves
significantly on the previous best advice complexity lower bounds for many
online problems, or provides the first known lower bounds. For example, if n
is the number of requests, we show that:
(1) A paging algorithm needs Ω(n) bits of advice to achieve a
competitive ratio better than H_k, where k is the cache
size. Previously, it was only known that Ω(n) bits of advice were
necessary to achieve a constant competitive ratio smaller than 5/4.
(2) Every O(n^(1-ε))-competitive vertex coloring algorithm must
use Ω(n log n) bits of advice. Previously, it was only known that
Ω(n log n) bits of advice were necessary to be optimal.
For certain online problems, including the MTS, k-server, paging, list
update, and dynamic binary search tree problem, our results imply that
randomization and sublinear advice are equally powerful (if the underlying
metric space or node set is finite). This means that several long-standing open
questions regarding randomized online algorithms can be equivalently stated as
questions regarding online algorithms with sublinear advice. For example, we
show that there exists a deterministic O(log k)-competitive k-server
algorithm with advice complexity o(n) if and only if there exists a
randomized O(log k)-competitive k-server algorithm without advice.
Technically, our main direct product theorem is obtained by extending an
information-theoretic lower bound technique due to Emek, Fraigniaud, Korman,
and Rosén [ICALP'09].
Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem
Burrows-Wheeler compression is a three-stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used for the List Update problem. In 1985, competitive analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count for the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent, identically distributed data and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are lower. The improvements to Burrows-Wheeler compression covered in this work are increases in the amount, not the speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform, which resembles piecewise independent, identically distributed data. Other algorithms analyzed for overwork, asymptotic cost, and competitive ratio, both as the middle stage of Burrows-Wheeler compression and for the List Update problem, are several variations of Move One From Front and part of the randomized algorithm Timestamp. The Best x of 2x-1 family includes Move-To-Front, the part of Timestamp of interest, and Frequency Count. Lastly, a greedy choosing scheme, Snake, switches back and forth between two List Update algorithms as the amount of compression they achieve fluctuates, in order to increase overall compression. The Burrows-Wheeler Transform is based on sorting of contexts. The other improvements are better sorting orders, such as “aeioubcdf...” instead of standard alphabetical “abcdefghi...” on English text data, together with an algorithm for computing such orders for any data, and Gray code sorting instead of standard sorting. Both techniques lessen the overwork incurred by whatever List Update algorithms are used by reducing the difference between adjacent sorted contexts.
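For reference, a minimal Python sketch of the middle stage discussed here, Move-To-Front coding of the Burrows-Wheeler Transform output: because the transform clusters equal symbols, recently used symbols sit near the front of the list and are encoded as small integers, which the final entropy coder compresses well. The alphabet and example string below are illustrative assumptions.

def mtf_encode(data, alphabet):
    # Move-To-Front coding: emit each symbol's current list position,
    # then move that symbol to the front of the list.
    lst = list(alphabet)
    out = []
    for sym in data:
        i = lst.index(sym)
        out.append(i)                 # recently used symbols yield small codes
        lst.insert(0, lst.pop(i))
    return out

print(mtf_encode("bbbaaacccc", "abc"))   # [1, 0, 0, 1, 0, 0, 2, 0, 0, 0]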