
    Truly Online Paging with Locality of Reference

    Competitive analysis fails to model locality of reference in the online paging problem. To address this, Borodin et al. introduced the access graph model, which attempts to capture locality of reference. However, the access graph model has a number of troubling aspects: the access graph has to be known to the paging algorithm in advance, and the memory required to represent the access graph itself may be very large. In this paper we present truly online strongly competitive paging algorithms in the access graph model that do not have any prior information on the access sequence. We present both deterministic and randomized algorithms. The algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space; that is, asymptotically no more memory than is needed to store the virtual address translation table. We also observe that our algorithms adapt themselves to temporal changes in the locality of reference. We model such temporal changes by extending the access graph model to the so-called extended access graph model, in which many vertices of the graph can correspond to the same virtual page. We define a measure for the rate of change in the locality of reference in G, denoted Delta(G). We then show that our algorithms remain strongly competitive as long as Delta(G) >= (1+epsilon)k, and that no truly online algorithm can be strongly competitive on a class of extended access graphs that includes all graphs G with Delta(G) >= k - o(k). Comment: 37 pages. Preliminary version appeared in FOCS '9
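
    As context for the memory budget above, here is a minimal Python sketch of the standard online paging setup with plain LRU eviction (not the paper's access-graph-based algorithms): storing the identities of the k cached pages already takes on the order of k log n bits, which gives a feel for why O(k log n) bits matches the translation table.

        from collections import OrderedDict

        def lru_paging(requests, k):
            """Baseline online paging: k slots, LRU eviction.  The paper's truly
            online algorithms replace this eviction rule with access-graph-based
            decisions while keeping a comparable memory footprint."""
            cache = OrderedDict()              # page id -> None, in recency order
            faults = 0
            for p in requests:
                if p in cache:
                    cache.move_to_end(p)       # p becomes most recently used
                    continue
                faults += 1
                if len(cache) >= k:
                    cache.popitem(last=False)  # evict the least recently used page
                cache[p] = None
            return faults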

    Simple optimality proofs for Least Recently Used in the presence of locality of reference

    It is well known that competitive analysis yields results that do not reflect the observed performance of online paging algorithms. Many deterministic paging algorithms achieve the same competitive ratio, ranging from inefficient strategies such as flush-when-full to the well-performing least-recently-used (LRU). In this paper, we study this fundamental online problem from the viewpoint of stochastic dominance. We give simple proofs that, when sequences are drawn from distributions modelling locality of reference, LRU stochastically dominates any other online paging algorithm. As a byproduct, we obtain simple proofs of some earlier results.
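
    The flavour of the result can be probed with a small simulation: draw request sequences from a toy locality-of-reference source and compare fault counts of LRU against another policy such as FIFO. The distribution below is an illustrative assumption, and the paper's argument is a stochastic dominance proof rather than a simulation.

        import random

        def count_faults(policy, requests, k):
            """Faults of 'lru' or 'fifo' with k slots on one request sequence."""
            cache, order, faults = set(), [], 0
            for p in requests:
                if p in cache:
                    if policy == "lru":
                        order.remove(p)
                        order.append(p)        # refresh recency on a hit
                    continue
                faults += 1
                if len(cache) >= k:
                    victim = order.pop(0)      # oldest entry: LRU or FIFO victim
                    cache.discard(victim)
                cache.add(p)
                order.append(p)
            return faults

        def locality_sequence(n_pages, length, rng):
            """Toy source: mostly re-request one of the last few pages seen."""
            seq = []
            for _ in range(length):
                if seq and rng.random() < 0.8:
                    seq.append(rng.choice(seq[-4:]))
                else:
                    seq.append(rng.randrange(n_pages))
            return seq

        rng = random.Random(0)
        reqs = locality_sequence(50, 10000, rng)
        print(count_faults("lru", reqs, 8), count_faults("fifo", reqs, 8))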

    The K-Server Dual and Loose Competitiveness for Paging

    This paper has two results. The first is based on the surprising observation that the well-known ``least-recently-used'' paging algorithm and the ``balance'' algorithm for weighted caching are linear-programming primal-dual algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that generalizes them both and has an optimal performance guarantee for weighted caching. For the second result, the paper presents empirical studies of paging algorithms, documenting that in practice, on ``typical'' cache sizes and sequences, the performance of paging strategies is much better than their worst-case analyses in the standard model suggest. The paper then presents theoretical results that support and explain this. For example: on any input sequence, with almost all cache sizes, either the performance guarantee of least-recently-used is O(log k) or the fault rate (in an absolute sense) is insignificant. Both of these results are strengthened and generalized in ``On-line File Caching'' (1998). Comment: conference version: "On-Line Caching as Cache Size Varies", SODA (1991)
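
    A sketch of the Greedy-Dual rule in its commonly described operational form: each cached page carries a credit that is restored on a hit and uniformly decremented when space is needed. The primal-dual derivation is the paper's contribution, and details such as how credits are restored on a hit vary between presentations; this is only an assumed, simplified rendering.

        def greedy_dual(requests, cost, k):
            """Weighted caching with k slots; cost[p] is the fetch cost of page p."""
            credit = {}                        # cached page -> remaining credit
            total_cost = 0.0
            for p in requests:
                if p in credit:
                    credit[p] = cost[p]        # hit: restore the page's credit
                    continue
                total_cost += cost[p]          # miss: pay to fetch p
                if len(credit) >= k:
                    drop = min(credit.values())
                    for q in credit:           # uniformly reduce all credits
                        credit[q] -= drop
                    victim = next(q for q, c in credit.items() if c == 0)
                    del credit[victim]         # evict a page whose credit reached 0
                credit[p] = cost[p]
            return total_cost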

    Stochastic k-Server: How Should Uber Work?

    In this paper we study a stochastic variant of the celebrated k-server problem. In the k-server problem, we are required to minimize the total movement of k servers that are serving an online sequence of t requests in a metric. In the stochastic setting we are given t independent distributions in advance, and at every time step i a request is drawn from P_i. Designing the optimal online algorithm in such a setting is NP-hard, therefore the emphasis of our work is on designing an approximately optimal online algorithm. We first show a structural characterization for a certain class of non-adaptive online algorithms. We prove that in general metrics, the best of such algorithms has a cost of no worse than three times that of the optimal online algorithm. Next, we present an integer program that finds the optimal algorithm of this class for any arbitrary metric. Finally, by rounding the solution of the linear relaxation of this program, we present an online algorithm for the stochastic k-server problem with an approximation factor of 3 in the line and circle metrics and a factor of O(log n) in general metrics. In this way, we achieve an approximation factor that is independent of k, the number of servers. Moreover, we define the Uber problem, motivated by the extraordinary growth of online network transportation services. In the Uber problem, each demand consists of two points, a source and a destination, in the metric. Serving a demand is to move a server to its source and then to its destination. The objective is again to minimize the total movement of the k given servers. It is not hard to show that given an alpha-approximation algorithm for the k-server problem, we can obtain a max{3,alpha}-approximation algorithm for the Uber problem. Motivated by the fact that demands are usually highly correlated with time (e.g., what day of the week or what time of day the demand arrives), we study the stochastic Uber problem. Using our results for stochastic k-server, we can obtain a 3-approximation algorithm for the stochastic Uber problem in line and circle metrics, and an O(log n)-approximation algorithm for a general metric of size n. Furthermore, we extend our results to the correlated setting where the probability of a request arriving at a certain point depends not only on the time step but also on the previously arrived requests.
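
    For intuition about the model only (this is not the LP-rounding algorithm of the paper and carries no approximation guarantee), here is a naive greedy baseline for the Uber problem on the line metric: serve each demand by sending the nearest server to the source and then on to the destination, summing total movement.

        def greedy_uber_on_line(demands, servers):
            """demands: list of (source, destination) points on the real line;
            servers: initial server positions.  Returns the total movement of
            this simple nearest-server heuristic."""
            positions = list(servers)
            total = 0.0
            for src, dst in demands:
                i = min(range(len(positions)),
                        key=lambda j: abs(positions[j] - src))
                total += abs(positions[i] - src) + abs(src - dst)
                positions[i] = dst             # the chosen server ends at the destination
            return total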

    New Bounds for Randomized List Update in the Paid Exchange Model

    We study the fundamental list update problem in the paid exchange model P^d. This cost model was introduced by Manasse, McGeoch and Sleator [M.S. Manasse et al., 1988] and Reingold, Westbrook and Sleator [N. Reingold et al., 1994]. Here the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps. We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries. No such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i-1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Furthermore, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again, the lower bounds hold for large d.
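
    To make the cost model concrete, a small sketch of serving a request sequence in P^d with Move-To-Front realised entirely through paid exchanges (a classical baseline, not the new randomized algorithm of the paper): in the full cost model an access to the item at position i costs i, and each adjacent swap costs d.

        def mtf_paid_exchange_cost(requests, initial_list, d):
            """Full cost model: accessing the item at 1-based position i costs i;
            moving it to the front takes i-1 adjacent swaps at cost d each."""
            lst = list(initial_list)
            total = 0
            for x in requests:
                i = lst.index(x)               # 0-based position of the request
                total += i + 1                 # access cost
                total += d * i                 # paid exchanges to reach the front
                lst.insert(0, lst.pop(i))      # perform move-to-front
            return total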

    Learning-Augmented Weighted Paging

    We consider a natural semi-online model for weighted paging, where at any time the algorithm is given predictions, possibly with errors, about the next arrival of each page. The model is inspired by Belady's classic optimal offline algorithm for unweighted paging, and extends the recently studied model for learning-augmented paging (Lykouris and Vassilvitskii, 2018) to the weighted setting. For the case of perfect predictions, we provide an ℓ-competitive deterministic algorithm and an O(log ℓ)-competitive randomized algorithm, where ℓ is the number of distinct weight classes. Both these bounds are tight, and imply an O(log W)- and O(log log W)-competitive ratio, respectively, when the page weights lie between 1 and W. Previously, it was not known how to use these predictions in the weighted setting and only bounds of k and O(log k) were known, where k is the cache size. Our results also generalize to the interleaved paging setting and to the case of imperfect predictions, with the competitive ratios degrading smoothly from O(ℓ) and O(log ℓ) to O(k) and O(log k), respectively, as the prediction error increases. Our results are based on several insights on structural properties of Belady's algorithm and the sequence of page arrival predictions, and novel potential functions that incorporate these predictions. For the case of unweighted paging, the results imply a very simple potential function based proof of the optimality of Belady's algorithm, which may be of independent interest.
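
    As a minimal sketch of how such arrival-time predictions can drive eviction in the unweighted case (the paper's weighted algorithms are more involved), one can apply Belady's rule to the predicted next arrival of each cached page; predicted_next below is an assumed callback supplying those possibly erroneous predictions.

        def paging_with_predictions(requests, predicted_next, k):
            """On a fault with a full cache, evict the cached page whose predicted
            next arrival is furthest in the future, i.e. Belady's rule applied to
            the predictions rather than the true future."""
            cache = set()
            faults = 0
            for t, p in enumerate(requests):
                if p in cache:
                    continue
                faults += 1
                if len(cache) >= k:
                    victim = max(cache, key=lambda q: predicted_next(q, t))
                    cache.discard(victim)
                cache.add(p)
            return faults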

    St. Mary’s Episcopal Church: Architectural History and Preservation Possibilities

    This thesis focuses on the architectural and historical significance of St. Mary’s Episcopal Church in Portsmouth, Rhode Island. It comprises two major sections: a historical narrative and a research narrative. Thus, it is meant to illuminate the history of St. Mary’s and to guide future research. The historical narrative contains information regarding the context into which the church was built, the founding of the parish, the construction of the church building, and selected significant changes. The research narrative contains a list of archives consulted, suggestions of uses for the information obtained, and a description of the necessary steps to list St. Mary’s in the National Register of Historic Places.

    VIRTUAL MEMORY ON A MANY-CORE NOC

    Many-core devices are likely to become increasingly common in real-time and embedded systems as computational demands grow and as expectations for higher performance can generally only be met by increasing core numbers rather than relying on higher clock speeds. Network-on-chip (NoC) devices, where multiple cores share a single slice of silicon and employ packetised communications, are a widely-deployed many-core option for system designers. As NoCs are expected to run larger and more complex programs, the small amount of fast, on-chip memory available to each core is unlikely to be sufficient for all but the simplest of tasks, and it is necessary to find an efficient, effective, and time-bounded means of accessing resources stored in off-chip memory, such as DRAM or Flash storage. The abstraction of paged virtual memory is a familiar technique to manage similar tasks in general computing but has often been shunned by real-time developers because of concern about time predictability. We show it can be a poor choice for a many-core NoC system as, unmodified, it typically uses page sizes optimised for interaction with spinning disks rather than solid state media, and transports significant volumes of subsequently unused data across already congested links. In this work we outline and simulate an efficient partial paging algorithm where only those memory resources that are locally accessed are transported between global and local storage. We further show that smaller page sizes add to efficiency. We examine the factors that lead to timing delays in such systems, and show we can predict worst-case execution times at even safety-critical thresholds by using statistical methods from extreme value theory. We also show these results are applicable to systems with a variety of connections to memory.
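
    A toy model of the partial paging idea, fetching only the block of a page that is actually touched rather than the whole page; the page and block sizes and the fetch_block callback are illustrative assumptions, not parameters from the thesis.

        PAGE_SIZE = 4096
        BLOCK_SIZE = 64                        # fetch granularity for partial paging

        class PartialPager:
            """On each access, copy only the touched BLOCK_SIZE-byte block from
            global (off-chip) memory into local (on-chip) storage, and count the
            bytes moved across the interconnect."""
            def __init__(self, fetch_block):
                self.local = {}                 # (page, block) -> bytes held on-chip
                self.fetch_block = fetch_block  # callback into off-chip memory
                self.bytes_moved = 0

            def read(self, addr):
                page, offset = divmod(addr, PAGE_SIZE)
                block, within = divmod(offset, BLOCK_SIZE)
                key = (page, block)
                if key not in self.local:       # partial fault: move one block only
                    self.local[key] = self.fetch_block(page, block)
                    self.bytes_moved += BLOCK_SIZE
                return self.local[key][within]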