The Frequent Items Problem in Online Streaming under Various Performance Measures
In this paper, we strengthen the competitive analysis results obtained for a
fundamental online streaming problem, the Frequent Items Problem. Additionally,
we contribute a more detailed analysis of this problem, using alternative
performance measures, supplementing the insight gained from competitive
analysis. The results also contribute to the general study of performance
measures for online algorithms. It has long been known that competitive
analysis suffers from drawbacks in certain situations, and many alternative
measures have been proposed. However, more systematic comparative studies of
performance measures have been initiated recently, and we continue this work,
using competitive analysis, relative interval analysis, and relative worst
order analysis on the Frequent Items Problem.
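As context for the problem being analyzed, the following is a minimal Python sketch of a standard counter-based frequent-items streaming algorithm (Misra-Gries); it only illustrates the streaming setting and is not necessarily the online model or algorithm family studied in the paper.

```python
# Minimal sketch of the Misra-Gries frequent-items algorithm (assumed as a
# generic illustration of the streaming setting, not the paper's model).

def misra_gries(stream, k):
    """Maintain at most k-1 counters; items with frequency > n/k survive."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement all counters; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

if __name__ == "__main__":
    stream = [1, 2, 1, 3, 1, 2, 1, 4, 1, 5]
    print(misra_gries(stream, k=3))  # item 1 clearly dominates the stream
```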
Online Bin Covering: Expectations vs. Guarantees
Bin covering is a dual version of classic bin packing. Thus, the goal is to
cover as many bins as possible, where covering a bin means packing items of
total size at least one in the bin.
For online bin covering, competitive analysis fails to distinguish between
most algorithms of interest; all "reasonable" algorithms have a competitive
ratio of 1/2. Thus, in order to get a better understanding of the combinatorial
difficulties in solving this problem, we turn to other performance measures,
namely relative worst order, random order, and max/max analysis, as well as
analyzing input with restricted or uniformly distributed item sizes. In this
way, our study also supplements the ongoing systematic studies of the relative
strengths of various performance measures.
Two classic algorithms for online bin packing that have natural dual versions
are Harmonic and Next-Fit. Even though the algorithms are quite different in
nature, the dual versions are not separated by competitive analysis. We make
the case that when guarantees are needed, even under restricted input
sequences, dual Harmonic is preferable. In addition, we establish quite robust
theoretical results showing that if items come from a uniform distribution or
even if just the ordering of items is uniformly random, then dual Next-Fit is
the right choice.
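To make the dual Next-Fit strategy concrete, here is a minimal Python sketch under the assumption that item sizes lie in (0, 1]: keep adding items to the current bin until its total reaches 1, count the bin as covered, and open a new one. Dual Harmonic, which instead partitions items into size classes, is not sketched here.

```python
# Minimal sketch of dual Next-Fit for online bin covering (naming follows the
# abstract; item sizes are assumed to lie in (0, 1]).

def dual_next_fit(items):
    """Return the number of bins covered (total size >= 1) by dual Next-Fit."""
    covered = 0
    current = 0.0
    for size in items:
        current += size
        if current >= 1.0:      # bin is covered; open a fresh bin
            covered += 1
            current = 0.0
    return covered

if __name__ == "__main__":
    print(dual_next_fit([0.6, 0.5, 0.4, 0.4, 0.3, 0.9]))  # covers 2 bins
```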
Probabilistic alternatives for competitive analysis
In the last 20 years, competitive analysis has become the main tool for analyzing the quality of online algorithms. Despite this, competitive analysis has also been criticized: it sometimes cannot discriminate between algorithms that exhibit significantly different empirical behavior, or it even favors an algorithm that is worse from an empirical point of view. Therefore, there have been several approaches to circumvent these drawbacks. In this survey, we discuss probabilistic alternatives for competitive analysis.
Simple optimality proofs for Least Recently Used in the presence of locality of reference
It is well known that competitive analysis yields results that do not reflect the observed performance of online paging algorithms. Many deterministic paging algorithms achieve the same competitive ratio, ranging from inefficient strategies such as flush-when-full to the well-performing least-recently-used (LRU). In this paper, we study this fundamental online problem from the viewpoint of stochastic dominance. We give simple proofs that when sequences are drawn from distributions modelling locality of reference, LRU stochastically dominates any other online paging algorithm. As a byproduct, we obtain simple proofs of some earlier results.
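For concreteness, a minimal Python sketch of the LRU paging rule follows; the function name and parameters are illustrative and not taken from the paper.

```python
# Minimal sketch of LRU paging: on a fault with a full cache, evict the least
# recently used page. Names (lru_faults, cache_size, refs) are illustrative.
from collections import OrderedDict

def lru_faults(refs, cache_size):
    """Count page faults of LRU on the reference string `refs`."""
    cache = OrderedDict()            # keys ordered from least to most recent
    faults = 0
    for page in refs:
        if page in cache:
            cache.move_to_end(page)  # refresh recency on a hit
        else:
            faults += 1
            if len(cache) == cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = True
    return faults

if __name__ == "__main__":
    # A reference string with locality: repeated accesses to a small working set.
    print(lru_faults([1, 2, 1, 2, 3, 1, 2, 1, 3, 2], cache_size=2))
```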
FIFO anomaly is unbounded
Virtual memory of computers is usually implemented by demand paging. For some
page replacement algorithms the number of page faults may increase as the
number of page frames increases. Belady, Nelson and Shedler constructed
reference strings for which the FIFO page replacement algorithm produces nearly
twice as many page faults in a larger memory than in a smaller one. They
formulated the conjecture that 2 is a general bound. We prove that this ratio
can be arbitrarily large.
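A minimal Python sketch below reproduces the classic anomaly on Belady's reference string: FIFO incurs 9 faults with 3 frames but 10 faults with 4 frames. The unbounded ratio proved in the paper requires specially constructed strings and is not reproduced here.

```python
# Minimal sketch demonstrating the FIFO anomaly on Belady's reference string.
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults of FIFO paging with the given number of frames."""
    memory = deque()
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()   # evict the page resident the longest
            memory.append(page)
    return faults

if __name__ == "__main__":
    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_faults(refs, 3))  # 9 faults
    print(fifo_faults(refs, 4))  # 10 faults: more memory, more faults
```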
Adaptive Analysis of On-line Algorithms
On-line algorithms are usually analyzed using competitive analysis, in which the performance
of an on-line algorithm on a sequence is normalized by the performance of the optimal off-line
algorithm on that sequence. In this paper we introduce adaptive/cooperative analysis as an
alternative general framework for the analysis of on-line algorithms. This model gives promising
results when applied to two well-known on-line problems, paging and list update. The idea is
to normalize the performance of an on-line algorithm by a measure other than the performance
of the optimal off-line algorithm OPT. We show that in many instances the performance of OPT
on a sequence is a coarse approximation of the difficulty or complexity of a given input. Using
a finer, more natural measure we can separate paging and list update algorithms which were
otherwise indistinguishable under the classical model. This creates a performance hierarchy of
algorithms which better reflects the intuitive relative strengths between them. Lastly, we show
that, surprisingly, certain randomized algorithms which are superior to MTF in the classical
model are not so in the adaptive case. This confirms that the ability of the on-line adaptive
algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in
practice.
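As an illustration of the list update setting mentioned above, here is a minimal Python sketch of the textbook Move-To-Front (MTF) rule under the standard cost model (accessing the item at position i costs i); this is not the adaptive-analysis machinery of the paper.

```python
# Minimal sketch of Move-To-Front (MTF) for list update under the standard
# cost model. Names (mtf_cost, requests, initial_list) are illustrative.

def mtf_cost(requests, initial_list):
    """Total access cost of MTF on `requests`, starting from `initial_list`."""
    lst = list(initial_list)
    total = 0
    for item in requests:
        pos = lst.index(item)        # 0-based position in the current list
        total += pos + 1             # cost of an access is its 1-based position
        lst.insert(0, lst.pop(pos))  # move the accessed item to the front
    return total

if __name__ == "__main__":
    print(mtf_cost(["c", "c", "c", "a"], initial_list=["a", "b", "c"]))  # 7
```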