
    A cache memory system based on a dynamic/adaptive replacement approach

    In this work, we propose a cache memory system based on an adaptive replacement scheme, designed to form part of an operating system's Virtual Memory Manager. We use a discrete-event simulator to compare our approach against previous work. Our adaptive replacement scheme draws on several properties of the system and of the applications to estimate and choose the best replacement policy. We assign a replacement priority value to each block in the cache, according to this set of system and application properties, in order to select which block to evict. The goal is to provide effective use of the cache memory and good performance for the applications.

    Keywords: Memory Management System, Cache Memory, Performance Evaluation
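    The abstract does not spell out which properties feed the priority value, so the following is a minimal sketch of the general idea: each cached block receives a replacement priority combining recency and frequency signals, and the block with the lowest priority is evicted. The class, the weights, and the scoring formula are illustrative assumptions, not the paper's design.

```python
import time

class Block:
    def __init__(self, key, data):
        self.key = key
        self.data = data
        self.last_access = time.monotonic()
        self.access_count = 0

class AdaptiveCache:
    """Evicts the block with the lowest replacement priority. The weights
    stand in for the paper's system/application properties; an adaptive
    scheme could re-tune them online."""

    def __init__(self, capacity, w_recency=0.5, w_frequency=0.5):
        self.capacity = capacity
        self.w_recency = w_recency
        self.w_frequency = w_frequency
        self.blocks = {}   # key -> Block

    def _priority(self, block):
        # Higher priority means "keep"; the recency score decays with age.
        age = time.monotonic() - block.last_access
        recency = 1.0 / (1.0 + age)
        return self.w_recency * recency + self.w_frequency * block.access_count

    def access(self, key, data=None):
        if key in self.blocks:
            block = self.blocks[key]
            block.last_access = time.monotonic()
            block.access_count += 1
            return block.data
        if len(self.blocks) >= self.capacity:
            victim = min(self.blocks.values(), key=self._priority)
            del self.blocks[victim.key]
        self.blocks[key] = Block(key, data)
        return data
```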

    GreedyDual-Join: Locality-Aware Buffer Management for Approximate Join Processing Over Data Streams

    We investigate adaptive buffer management techniques for the approximate evaluation of sliding window joins over multiple data streams. In many applications, data stream processing systems have limited memory or must handle very high-speed data streams. In both cases, computing the exact results of joins between these streams may not be feasible, mainly because the buffers used to compute the joins hold far fewer tuples than the sliding windows do; a stream buffer management policy is therefore needed. We show that the buffer replacement policy is an important determinant of the quality of the produced results. To that end, we propose GreedyDual-Join (GDJ), an adaptive and locality-aware buffering technique for managing these buffers. GDJ exploits the temporal correlations (at both long and short time scales) that we found to be prevalent in many real data streams. Our algorithm is readily applicable to multiple data streams and multiple joins, and requires almost no additional system resources. We report the results of an experimental study using both synthetic and real-world data sets. Our results demonstrate the superiority and flexibility of our approach when contrasted with other recently proposed techniques.
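    GDJ builds on the classic GreedyDual framework, so a sketch of that underlying mechanism may help: each buffered tuple carries a value H, evictions remove the minimum-H tuple, and a running "inflation" offset ages out tuples that stop producing matches. The `utility` argument below stands in for GDJ's locality-aware benefit estimate, which the abstract does not specify; class and method names are hypothetical.

```python
import heapq

class GreedyDualBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.inflation = 0.0   # rises on each eviction, aging older entries
        self.heap = []         # (H, seq, key); stale entries skipped lazily
        self.entries = {}      # key -> current H value
        self.seq = 0

    def _evict(self):
        # Pop until a live entry is found; stale heap entries are skipped.
        while self.heap:
            h, _, key = heapq.heappop(self.heap)
            if self.entries.get(key) == h:
                del self.entries[key]
                self.inflation = h   # future values start from the evicted H
                return key
        return None

    def insert(self, key, utility):
        # Buffer a tuple whose estimated benefit to the join is `utility`.
        if key not in self.entries and len(self.entries) >= self.capacity:
            self._evict()
        h = self.inflation + utility
        self.seq += 1
        self.entries[key] = h
        heapq.heappush(self.heap, (h, self.seq, key))

    def touch(self, key, utility):
        # On a join match, refresh the tuple's value (the GreedyDual hit rule).
        if key in self.entries:
            self.insert(key, utility)
```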

    Cliffhanger: Scaling Performance Cliffs in Web Memory Caches

    Web-scale applications rely heavily on memory cache systems such as Memcached to improve throughput and reduce user latency. Small performance improvements in these systems can yield large end-to-end gains: for example, a marginal increase in hit rate of 1% can reduce application-layer latency by over 35%. However, existing web cache resource allocation policies are workload-oblivious and first-come-first-serve. By analyzing measurements from Memcachier, a widely used caching service, we demonstrate that existing cache allocation techniques leave significant room for improvement. We develop Cliffhanger, a lightweight iterative algorithm that runs on memory cache servers and incrementally optimizes resource allocations across and within applications based on dynamically changing workloads. Cache allocation algorithms are known to underperform in the presence of performance cliffs, where minor changes in cache allocation cause large changes in hit rate; we design a novel technique for dealing with performance cliffs incrementally and locally. We demonstrate that, for the Memcachier applications, Cliffhanger on average increases the overall hit rate by 1.2%, reduces the total number of cache misses by 36.7%, and achieves the same hit rate with 45% less memory capacity.
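    As a rough illustration of the incremental, local optimization described above, the following hedged sketch performs one hill-climbing step: it compares a marginal hit-rate signal per application (e.g., hits observed in a small shadow region just beyond each application's current allocation) and shifts a fixed amount of memory from the weakest application to the strongest. The function, its inputs, and the fixed step size are assumptions for illustration, not Cliffhanger's actual interface.

```python
def rebalance(allocations, shadow_hits, step=64 * 1024):
    """One incremental step: move `step` bytes of cache memory from the
    application with the weakest marginal signal to the strongest.

    allocations: dict app -> bytes currently allocated
    shadow_hits: dict app -> hits observed in that app's shadow region
                 (just past its current allocation) since the last step
    """
    if len(allocations) < 2:
        return allocations
    winner = max(shadow_hits, key=shadow_hits.get)  # gains the most from memory
    loser = min(shadow_hits, key=shadow_hits.get)   # loses least by shrinking
    if winner != loser and allocations[loser] > step:
        allocations[loser] -= step
        allocations[winner] += step
    return allocations
```

    Repeating such steps as workloads shift lets the allocation track dynamically changing demand without any offline modeling.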

    On the Existence of a Spectrum of Policies that Subsumes the Least Recently Used (LRU) and Least Frequently Used (LFU) Policies

    We show that there exists a spectrum of block replacement policies that subsumes both the Least Recently Used (LRU) and the Least Frequently Used (LFU) policies. The spectrum is formed according to how much more weight we give to recent history than to older history, and is referred to as the LRFU (Least Recently/Frequently Used) policy. Unlike many previous policies that use limited history to make block replacement decisions, the LRFU policy uses the complete reference history of blocks recorded during their cache residency. Nevertheless, the LRFU requires only a few words per block to maintain this history. This paper also describes an implementation of the LRFU that again subsumes the LRU and LFU implementations. The LRFU policy is applied to buffer caching, and results from trace-driven simulations show that the LRFU performs better than previously known policies for the workloads we considered. This point is reinforced by results from our integration of the LRFU into…
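    The "few words per block" claim follows from the multiplicative property of the LRFU weighing function: with F(x) = (1/2)^(λx), F(x+y) = F(x)·F(y), so a block's combined recency-and-frequency (CRF) value can be updated from a single stored value plus a timestamp. A minimal sketch of this bookkeeping, with hypothetical class and method names:

```python
class LRFUCache:
    def __init__(self, capacity, lam=0.5):
        self.capacity = capacity
        self.lam = lam     # lam = 0 reduces to LFU; lam = 1 behaves like LRU
        self.clock = 0
        self.blocks = {}   # key -> (CRF at last reference, last reference time)

    def _decayed_crf(self, key):
        crf, last = self.blocks[key]
        return crf * (0.5 ** (self.lam * (self.clock - last)))

    def reference(self, key):
        self.clock += 1
        if key in self.blocks:
            # F(x+y) = F(x)*F(y) folds the whole reference history into
            # one value, updated incrementally at each hit.
            self.blocks[key] = (1.0 + self._decayed_crf(key), self.clock)
            return True    # hit
        if len(self.blocks) >= self.capacity:
            victim = min(self.blocks, key=self._decayed_crf)
            del self.blocks[victim]
        self.blocks[key] = (1.0, self.clock)   # F(0) = 1 for the new reference
        return False       # miss
```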

    Paging with Dynamic Memory Capacity

    We study a generalization of the classic paging problem that allows the amount of available memory to vary over time, capturing a fundamental property of many modern computing realities, from cloud computing to multi-core and energy-optimized processors. It turns out that good performance in the "classic" case provides no performance guarantees when memory capacity fluctuates: roughly speaking, moving from static to dynamic capacity can mean the difference between optimality within a factor of 2 in space and time, and suboptimality by an arbitrarily large factor. More precisely, adopting the competitive analysis framework, we show that some online paging algorithms, despite having an optimal (h,k)-competitive ratio when capacity remains constant, are not (3,k)-competitive for any arbitrarily large k in the presence of even minimal capacity fluctuations. In this light, it is surprising that several classic paging algorithms perform remarkably well even if memory capacity changes adversarially; in fact, they do so even without taking those changes into explicit account. In particular, we prove that LFD (Longest Forward Distance) still achieves the minimum number of faults, and that several classic online algorithms such as LRU have a "dynamic" (h,k)-competitive ratio that is the best achievable without knowledge of future page requests, even given perfect knowledge of future capacity fluctuations. Thus, with careful management, knowing or predicting future memory resources appears far less crucial to performance than knowing or predicting future data accesses. We characterize the optimal "dynamic" (h,k)-competitive ratio exactly, and show that it has a somewhat complex expression that is almost but not quite equal to the "classic" ratio k/(k-h+1), thus proving a strict if minuscule separation between online paging performance achievable in the presence or absence of capacity fluctuations.
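    LFD evicts the page whose next request lies furthest in the future; the paper's result is that this rule remains fault-optimal even as capacity fluctuates. The sketch below simulates LFD under a per-request capacity schedule. The fault accounting, and the simplifying assumption that the capacity bound is enforced after each request is served, are illustrative choices rather than the paper's exact model.

```python
def lfd_faults(requests, capacities):
    """requests: list of page ids; capacities[i]: memory available at step i."""
    # Precompute, for each position, the index of that page's next request.
    next_use = [float('inf')] * len(requests)
    last_seen = {}
    for i in range(len(requests) - 1, -1, -1):
        next_use[i] = last_seen.get(requests[i], float('inf'))
        last_seen[requests[i]] = i

    cache, faults = {}, 0              # page -> index of its next request
    for i, (page, cap) in enumerate(zip(requests, capacities)):
        if page not in cache:
            faults += 1
        cache[page] = next_use[i]      # serve the request, refresh its next use
        while len(cache) > cap:        # capacity may have shrunk this step
            victim = max(cache, key=cache.get)   # furthest next use goes first
            del cache[victim]
    return faults

# Example: a capacity drop at step 3 forces an extra eviction.
# lfd_faults(['a', 'b', 'a', 'c', 'b'], [2, 2, 2, 1, 2])
```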