12 research outputs found
Self-organizing search lists using probabilistic back-pointers
A class of algorithms is given for maintaining self-organizing sequential search lists, where the only permutation applied is to move the accessed record of each search some distance toward the front of the list. During a search, these algorithms retain a back-pointer to a previously probed record in order to determine the destination of the accessed record's eventual move. The back-pointer does not traverse the list; rather, it is advanced occasionally to point to the record just probed by the search algorithm. This avoids the cost of a second traversal through a significant portion of the list, which can be a substantial savings when each record access may require a new page to be brought into primary memory. Probabilistic functions for deciding when to advance the pointer are presented and analyzed. These functions achieve average-case measures such as asymptotic cost and convergence similar to some of the more common list update algorithms in the literature. In cases where the accessed record is moved forward a distance proportional to its distance from the front of the list, the use of these functions may save up to 50% of the time required for permuting the list.
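The back-pointer idea can be sketched as follows. This is a hypothetical illustration, not the paper's exact advance function: here the pointer jumps to the record just probed with probability 1/i on the i-th probe (reservoir-sampling style), so its final position is uniformly distributed over the probed prefix and the accessed record moves, in expectation, about halfway to the front in a single pass.

```python
import random

def search_and_move(lst, key):
    """Sequential search with a probabilistic back-pointer (sketch).

    On the i-th probe (1-based) the back-pointer jumps to the record
    just probed with probability 1/i, so when the key is found at
    position n the pointer is uniform over the probed prefix
    (expected position about n/2).  The found record is then moved
    there without a second traversal of the list.
    """
    back = 0  # destination index for the accessed record
    for i, item in enumerate(lst):
        if item == key:
            lst.insert(back, lst.pop(i))  # single-pass move forward
            return back
        if random.random() < 1.0 / (i + 1):
            back = i  # advance pointer to the record just probed
    return -1
```

The one-step `pop`/`insert` at the hit is the point of the scheme: the destination is already known, so no second walk back through the list (and no extra page faults) is needed.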
Optimal Lower Bounds for Projective List Update Algorithms
The list update problem is a classical online problem, with an optimal
competitive ratio that is still open, known to be somewhere between 1.5 and
1.6. An algorithm with competitive ratio 1.6, the smallest known to date, is
COMB, a randomized combination of BIT and the TIMESTAMP algorithm TS. This and
almost all other list update algorithms, like MTF, are projective in the sense
that they can be defined by looking only at any pair of list items at a time.
Projectivity (also known as "list factoring") simplifies both the description
of the algorithm and its analysis, and so far seems to be the only way to
define a good online algorithm for lists of arbitrary length. In this paper we
characterize all projective list update algorithms and show that their
competitive ratio is never smaller than 1.6 in the partial cost model.
Therefore, COMB is a best possible projective algorithm in this model.
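As a concrete example of a projective algorithm, Move-To-Front (MTF) can be sketched in a few lines. The cost accounting below uses the full cost model (accessing position i costs i) purely for illustration; the abstract's 1.6 bound is stated in the partial cost model, which would count one less per access.

```python
def mtf_serve(lst, requests):
    """Serve a request sequence with Move-To-Front (MTF).

    Accessing the item at 0-based index i costs i + 1 (full cost
    model; the partial cost model counts only i).  After each access
    the requested item is moved to the front of the list.
    """
    total = 0
    for r in requests:
        i = lst.index(r)
        total += i + 1
        lst.insert(0, lst.pop(i))  # move accessed item to the front
    return total
```

MTF is projective in the sense used above: the relative order of any two items x and y depends only on the subsequence of requests to x and y, which is what makes pairwise (list-factoring) analysis possible.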
On Resource Pooling and Separation for LRU Caching
Caching systems using the Least Recently Used (LRU) principle have now become
ubiquitous. A fundamental question for these systems is whether the cache space
should be pooled together or divided to serve multiple flows of data item
requests in order to minimize the miss probabilities. In this paper, we show
that there is no simple yes-or-no answer: the outcome depends on
complex combinations of critical factors, including, e.g., request rates,
overlapped data items across different request flows, data item popularities
and their sizes. Specifically, we characterize the asymptotic miss
probabilities for multiple competing request flows under resource pooling and
separation for LRU caching when the cache size is large.
Analytically, we show that it is asymptotically optimal to jointly serve
multiple flows if their data item sizes and popularity distributions are
similar and their arrival rates do not differ significantly; the
self-organizing property of LRU caching automatically optimizes the resource
allocation among them asymptotically. Otherwise, separating these flows could
be better, e.g., when data sizes vary significantly. We also quantify critical
points beyond which resource pooling is better than separation for each of the
flows when the overlapped data items exceed certain levels. Technically, we
generalize existing results on the asymptotic miss probability of LRU caching
for a broad class of heavy-tailed distributions and extend them to multiple
competing flows with varying data item sizes, which also validates the Che
approximation under certain conditions. These results provide new insights on
improving the performance of caching systems.
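The pooling-versus-separation question can be made concrete with a minimal LRU cache that counts misses. The class below is an illustrative sketch, not the paper's analytical model: one can feed a single pooled cache of capacity 2c the merged request streams, or give each flow its own cache of capacity c, and compare the resulting miss counts.

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache with Least-Recently-Used eviction (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # keys ordered from LRU to MRU
        self.misses = 0

    def access(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # hit: mark most recently used
        else:
            self.misses += 1             # miss: fetch and possibly evict
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[key] = True
```

For example, two flows with similar popularity distributions tend to benefit from one pooled `LRUCache(2 * c)`, while a flow with much larger items or a very different request rate may fare better with its own `LRUCache(c)`, in line with the dichotomy described above.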
Optimal trading strategies vs. a statistical adversary
Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994, by Andrew Chou. Includes bibliographical references (leaves 47-50).
On utilizing an enhanced object partitioning scheme to optimize self-organizing lists-on-lists
Author's accepted manuscript. This is a post-peer-review, pre-copyedit version of an article published in Evolving Systems. The final authenticated version is available online at: http://dx.doi.org/10.1007/s12530-020-09327-4.
Usage-Dependent Information Systems Design.
Usage-dependent phenomena have been commonly observed in computer information systems (CIS). Since the performance of a CIS greatly depends on these phenomena, modeling them is an important CIS design issue. A usage process model (the Simon-Yule model) for modeling the usage-dependent phenomenon is proposed. The model is modified and successfully applied to the performance evaluation of self-organizing linear search heuristics. Analytical and empirical results indicate that the model provides a realistic performance evaluation of the heuristics and offers a solution to open research problems that have remained unsolved for more than two decades. Furthermore, the results lead to the development of a self-organizing mechanism, incorporating the usage process model, for continuous speech recognition systems in the artificial intelligence arena.
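Self-organizing linear search heuristics of the kind evaluated in this line of work include, for example, the classic transpose rule; the sketch below is illustrative and not tied to this paper's specific heuristics. On each successful search, the found record is swapped one position toward the front, so frequently used records drift gradually forward.

```python
def transpose_search(lst, key):
    """Sequential search with the transpose heuristic (sketch).

    On a hit, the found record is swapped with its predecessor, so
    frequently used records drift gradually toward the front.
    Returns the record's position after the swap, or -1 on a miss.
    """
    for i, item in enumerate(lst):
        if item == key:
            if i > 0:
                lst[i - 1], lst[i] = lst[i], lst[i - 1]
                return i - 1
            return 0
    return -1
```

Compared with move-to-front, transpose reorganizes the list more conservatively, which is one reason such heuristics are evaluated under an explicit usage process model like the one proposed above.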