Self-organizing search lists using probabilistic back-pointers
A class of algorithms is given for maintaining self-organizing sequential search lists, where the only permutation applied is to move the accessed record of each search some distance towards the front of the list. During searches, these algorithms retain a back-pointer to a previously probed record in order to determine the destination of the accessed record's eventual move. The back-pointer does not traverse the list; rather, it is occasionally advanced to point to the record just probed by the search algorithm. This avoids the cost of a second traversal through a significant portion of the list, a savings that can be substantial when each record access may require a new page to be brought into primary memory. Probabilistic functions for deciding when to advance the pointer are presented and analyzed. These functions achieve average-case behavior, in measures such as asymptotic cost and convergence, similar to that of the more common list update algorithms in the literature. In cases where the accessed record is moved forward a distance proportional to its distance from the front of the list, the use of these functions may save up to 50% of the time required for permuting the list.
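The scheme described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the advance rule used here (advance the back-pointer with fixed probability p at each probe) stands in for the probabilistic functions the abstract analyzes, and the class and method names are invented for the example.

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.next = None

class SelfOrganizingList:
    """Singly linked list that moves an accessed record forward to a
    position chosen by a probabilistically advanced back-pointer."""

    def __init__(self, keys):
        self.head = None
        for k in reversed(keys):
            n = Node(k)
            n.next = self.head
            self.head = n

    def access(self, key, p=0.5):
        """Search for key; re-insert the hit just after the back-pointer.

        The back-pointer never re-traverses the list: it only advances
        (with probability p) to the record the search just probed.
        """
        back = None                # insertion point for the accessed record
        prev = None                # node immediately before the current probe
        cur = self.head
        while cur is not None and cur.key != key:
            if random.random() < p:
                back = cur         # occasionally advance the back-pointer
            prev, cur = cur, cur.next
        if cur is None:
            return False           # key not in the list
        if prev is not None and prev is not back:
            # unlink cur, then splice it in after back (head if back is None)
            prev.next = cur.next
            if back is None:
                cur.next = self.head
                self.head = cur
            else:
                cur.next = back.next
                back.next = cur
        return True

    def keys(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.key)
            cur = cur.next
        return out
```

With p=0 the back-pointer never advances and the policy degenerates to move-to-front; with p=1 it ends at the record's predecessor and the list is left unchanged, which shows the two extremes the advance rule interpolates between.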
Self-organizing lists on the Xnet
The first parallel designs for implementing self-organizing lists on the Xnet interconnection network are presented. Self-organizing lists permute the order of list entries after an entry is accessed, according to some update heuristic. The heuristic attempts to place frequently requested entries closer to the front of the list. This paper outlines Xnet systems for self-organizing lists under the move-to-front and transpose update heuristics. Our novel designs can be used to achieve high-speed lossless text compression.
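For reference, the two update heuristics named in the abstract behave as follows in their ordinary sequential form (the Xnet parallel mapping itself is not reproduced here):

```python
def move_to_front(lst, key):
    """Move the accessed entry to the head of the list."""
    lst.insert(0, lst.pop(lst.index(key)))

def transpose(lst, key):
    """Swap the accessed entry with its immediate predecessor."""
    i = lst.index(key)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
```

Move-to-front adapts quickly but can be disrupted by one-off accesses; transpose moves entries forward only one position per access, so it converges more slowly but is more stable.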
A New Proposed Cost Model for List Accessing Problem using Buffering
There are many existing, well-known cost models for the list accessing
problem; the standard cost model developed by Sleator and Tarjan is the most
widely used. In this paper, we make a comprehensive study of the existing cost
models and propose a new cost model for the list accessing problem. In our
proposed cost model, the cost of processing a request sequence on a singly
linked list is the sum of the access cost, the matching cost and the
replacement cost. We propose a novel method for processing the request
sequence that does not rearrange the list and uses the concepts of buffering,
matching, look-ahead and a flag bit.
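The abstract does not define the three cost components precisely, so the sketch below is only a hypothetical illustration of summing access, matching and replacement costs over a request sequence served through a small buffer, with the list itself never rearranged. The particular cost definitions (positional access cost as in the standard full-cost model, unit matching cost per request, unit replacement cost per buffer eviction) are assumptions, not the paper's model.

```python
from collections import deque

def total_cost(items, requests, buffer_size=2):
    """Sum assumed access, matching and replacement costs.

    items:    static list order (never rearranged, as in the abstract)
    requests: the request sequence to process
    """
    buf = deque(maxlen=buffer_size)   # evicts oldest entry when full
    access = matching = replacement = 0
    for r in requests:
        matching += 1                 # compare request against the buffer
        if r in buf:
            continue                  # served from buffer; no traversal
        access += items.index(r) + 1  # positional cost of the list search
        if len(buf) == buf.maxlen:
            replacement += 1          # a buffer entry must be replaced
        buf.append(r)
    return access + matching + replacement
```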
On Resource Pooling and Separation for LRU Caching
Caching systems using the Least Recently Used (LRU) principle have now become
ubiquitous. A fundamental question for these systems is whether the cache space
should be pooled together or divided to serve multiple flows of data item
requests in order to minimize the miss probabilities. In this paper, we show
that there is no straightforward yes-or-no answer to this question: the outcome
depends on complex combinations of critical factors, including, e.g., request
rates, overlap of data items across different request flows, and data item
popularities and sizes. Specifically, we characterize the asymptotic miss
probabilities for multiple competing request flows under resource pooling and
separation for LRU caching when the cache size is large.
Analytically, we show that it is asymptotically optimal to jointly serve
multiple flows if their data item sizes and popularity distributions are
similar and their arrival rates do not differ significantly; the
self-organizing property of LRU caching automatically optimizes the resource
allocation among them asymptotically. Otherwise, separating these flows could
be better, e.g., when data sizes vary significantly. We also quantify critical
points beyond which resource pooling is better than separation for each of the
flows when the overlapped data items exceed certain levels. Technically, we
generalize existing results on the asymptotic miss probability of LRU caching
for a broad class of heavy-tailed distributions and extend them to multiple
competing flows with varying data item sizes, which also validates the Che
approximation under certain conditions. These results provide new insights on
improving the performance of caching systems.
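The pooling-versus-separation question can be explored empirically with a minimal LRU cache that counts misses per flow; this is a toy illustration for experimentation, not the paper's asymptotic analysis, and the class name is invented for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache evicting the least recently used item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.misses = 0

    def access(self, key):
        if key in self.store:
            self.store.move_to_end(key)          # mark as most recently used
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)   # evict least recently used
            self.store[key] = True
```

To compare the two designs, one can feed two request flows either into a single cache of capacity C (pooling) or into two caches of capacities summing to C (separation), and compare the resulting miss counts under different popularity distributions and item overlaps.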
Simon's Bounded Rationality. Origins and use in economic theory
The paper aims to show how Simon's notion of bounded rationality should be interpreted in the light of its connection with artificial intelligence. This connection points out that bounded rationality is a highly structured concept, and sheds light on several implications of Simon's general views on rationality. Finally, offering three paradigmatic examples, the article presents the view that recent approaches which refer to Simon's heterodox theory only partially accept the teachings of their inspirer, severing bounded rationality from the context of artificial intelligence.