
    Exploiting inter-reference time characteristic of Web cache workload for web cache replacement design

    Caching objects in the Internet environment aims to reduce bandwidth consumption and improve the response time of the system as perceived by users. Researchers believe that the performance of a web cache depends on user access behavior; therefore, many studies of Internet web cache workloads have been conducted. However, in the web cache environment, the inter-reference time between successive requests has rarely been studied. This paper explores the workload characteristics of the Internet web cache, especially the inter-reference time (IRT). Based on a correlation test and trend analysis, it can be concluded that IRT is a reasonable metric for a web cache replacement policy.
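The measurement the abstract is built on can be sketched in a few lines, assuming a trace of (timestamp, object) request pairs; the function name and trace format are illustrative, not from the paper:

```python
from collections import defaultdict

def inter_reference_times(trace):
    """Compute the inter-reference time (IRT) between successive
    requests to the same object in a timestamped request trace.

    trace: list of (timestamp, object_id) pairs, sorted by time.
    Returns a dict mapping object_id -> list of IRTs.
    """
    last_seen = {}
    irts = defaultdict(list)
    for t, obj in trace:
        if obj in last_seen:
            irts[obj].append(t - last_seen[obj])
        last_seen[obj] = t
    return dict(irts)

trace = [(0, "a"), (2, "b"), (5, "a"), (6, "a"), (9, "b")]
print(inter_reference_times(trace))  # {'a': [5, 1], 'b': [7]}
```

A correlation or trend analysis like the paper's would then be run on each object's IRT sequence.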

    Regularly expected reference-time as a metric of web cache replacement policy

    The growth of Internet access has been increasing significantly. In fact, more than one user often accesses the same object, so there is an opportunity to reduce this redundancy by placing an intermediate storage called a cache. With this approach, bandwidth consumption and the response time of the system as perceived by users can be improved. When the size of the web cache is limited, the objects in the web cache must be managed so that the hit ratio and byte hit ratio are maximized. Previous research shows that the performance of cache replacement depends on user/program access behavior; therefore, the success of IRT in memory cache replacement does not guarantee the same result in the web cache environment. Researchers have explored the regularity of user access and used this characteristic in a metric for web cache replacement. Other researchers use the regularity to predict the next occurrence and combine it with past occurrence frequency. In the prediction process they use statistical or data-mining approaches; however, computing the prediction takes time. Therefore, this paper proposes a simple approach to predicting the next object reference. This approach is based on the assumption that an object may be accessed by users regularly, such that DA-IRT can be used to calculate the time of the next object reference, called the regularly expected reference time (RERT). The object with the longest RERT is evicted soonest from the web cache. Experimental results show that the performance of RERT depends on user access behavior and is the opposite of the DA-IRT policy.
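The eviction rule can be sketched as follows, under the simplifying assumption that the expected next reference is the last access time plus the mean observed inter-reference time (the paper derives its prediction from DA-IRT; `rert` and `choose_victim` are hypothetical names):

```python
def rert(last_time, irts):
    """Regularly expected reference time: assuming roughly periodic
    access, predict the next reference as the last access time plus
    the mean observed inter-reference time.  This mean-IRT prediction
    is a simplifying assumption, not the paper's exact DA-IRT formula."""
    return last_time + sum(irts) / len(irts)

def choose_victim(cache):
    """cache: dict object_id -> (last_access_time, [observed IRTs]).
    Evict the object whose predicted next reference is farthest away."""
    return max(cache, key=lambda obj: rert(*cache[obj]))

cache = {
    "a": (10, [2, 2, 2]),   # expected again around t = 12
    "b": (8,  [10, 12]),    # expected again around t = 19
    "c": (11, [1, 1]),      # expected again around t = 12
}
print(choose_victim(cache))  # 'b' (longest RERT, evicted first)
```

The point of the sketch is the ordering: unlike recency-based policies, the victim is chosen by predicted future reference time rather than by past access time alone.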


    Modeling strength of locality of reference via notions of positive dependence

    The performance of demand-driven caching depends on the locality of reference exhibited by the stream of requests made to the cache. In spite of numerous efforts, no consensus has been reached on how to formally compare streams of requests on the basis of their locality of reference. We take on this issue by introducing the notion of Temporal Correlations (TC) ordering for comparing the strength of temporal correlations in streams of requests. This notion is based on the supermodular ordering, a concept of positive dependence which has been successfully used for comparing dependence structures in sequences of random variables. We explore how the TC ordering captures the strength of temporal correlations in several Web request models, namely the higher-order Markov chain model (HOMM), the partial Markov chain model (PMM) and the Least-Recently-Used stack model (LRUSM). We establish a folk theorem to the effect that the stronger the temporal correlations, the smaller the miss rate for the PMM. Conjectures and simulations are offered as to when this folk theorem should hold under the HOMM and under the LRUSM. Lastly, we investigate the validity of this folk theorem for general input streams under the Working Set algorithm.
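The intuition behind the folk theorem can be illustrated with a small simulation of the LRU stack model (a sketch, not the paper's formal supermodular-ordering argument): a depth distribution that concentrates more probability mass at shallow stack depths, i.e. stronger temporal correlations, yields a lower LRU miss rate.

```python
import random

def lrusm_trace(depth_probs, length, universe, seed=0):
    """Generate a request stream from the LRU stack model (LRUSM):
    each request references the item at stack depth i with probability
    depth_probs[i]; the referenced item then moves to the stack top."""
    rng = random.Random(seed)
    stack = list(universe)
    trace = []
    for _ in range(length):
        i = rng.choices(range(len(depth_probs)), weights=depth_probs)[0]
        item = stack.pop(i)
        stack.insert(0, item)
        trace.append(item)
    return trace

def lru_miss_rate(trace, capacity):
    """Miss rate of an LRU cache of the given capacity on a trace."""
    cache = []
    misses = 0
    for x in trace:
        if x in cache:
            cache.remove(x)
        else:
            misses += 1
            if len(cache) == capacity:
                cache.pop()       # drop least-recently-used (tail)
        cache.insert(0, x)
    return misses / len(trace)

# Stronger concentration at shallow depths = stronger temporal
# correlations = lower miss rate (the "folk theorem" intuition).
universe = list(range(20))
weak   = lrusm_trace([0.05] * 20, 5000, universe)
strong = lrusm_trace([0.4, 0.3, 0.1] + [0.2 / 17] * 17, 5000, universe, seed=1)
print(lru_miss_rate(weak, 4), lru_miss_rate(strong, 4))
```

With a uniform depth distribution only requests to the top 4 stack positions can hit a size-4 LRU cache, so the skewed (strongly correlated) stream misses far less often.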

    An Inter-Reference Gap Model for Temporal Locality in Program Behavior

    The locality of reference in program behavior has been studied and modeled extensively because of its application to memory design, code optimization, multiprogramming, etc. We propose a k-order Markov chain based scheme to model the sequence of time intervals between successive references to the same address in memory during program execution. Each unique address in a program is modeled separately. To validate our model, which we call the Inter-Reference Gap (IRG) model, we show substantial improvements in three different areas where it is applied. (1) We improve upon the miss ratio of the Least Recently Used (LRU) memory replacement algorithm by up to 37%. (2) We achieve up to 22% space-time product improvement over the Working Set (WS) algorithm for dynamic memory management. (3) A new trace compression technique is proposed which compresses traces to as little as 2.5% of their original size, with zero error in WS simulations and up to 3.7% error in LRU simulations. All these results were obtained experimentally, via trace-driven simulations over a wide range of cache traces, page reference traces, object traces and database traces. Technical report DCS-TR-31.
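The core of the IRG idea can be sketched for a single address: record its gap sequence and fit a Markov chain over it. The sketch below is a first-order illustration with invented helper names; the paper uses k-order chains, one per unique address:

```python
from collections import defaultdict

def irg_markov(gaps, k=1):
    """Build a k-order Markov model over an inter-reference gap (IRG)
    sequence: count transitions from each length-k history of gaps to
    the gap that followed it."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(gaps)):
        history = tuple(gaps[i - k:i])
        counts[history][gaps[i]] += 1
    return counts

def predict_next(counts, history):
    """Most frequently observed successor of the given gap history,
    or None if the history was never seen."""
    successors = counts.get(tuple(history))
    if not successors:
        return None
    return max(successors, key=successors.get)

gaps = [3, 7, 3, 7, 3, 7, 3]   # one address, alternating gap pattern
model = irg_markov(gaps)
print(predict_next(model, [3]))  # 7
```

A replacement or compression scheme can then act on the predicted gap, e.g. preferring to evict addresses whose predicted next gap is largest.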

    Efficient caching algorithms for memory management in computer systems

    As disk performance continues to lag behind that of memory systems and processors, fully utilizing memory to reduce disk accesses is a highly effective way to improve overall system performance. Furthermore, to serve the applications running on a computer in distributed systems, not only the local memory but also the memory on remote servers must be effectively managed to minimize I/O operations. The critical challenges in effective memory cache management include: (1) insightfully understanding and quantifying the locality inherent in memory access requests; (2) effectively utilizing the locality information in replacement algorithms; (3) intelligently placing and replacing data in the multi-level caches of a distributed system; and (4) ensuring that the overheads of the proposed schemes are acceptable. This dissertation provides solutions and makes unique and novel contributions in application locality quantification, general replacement algorithms, low-cost replacement policy, thrashing protection, and multi-level cache management in a distributed system. First, the dissertation proposes a new method to quantify locality strength and accurately identify the data with strong locality. It also provides a new replacement algorithm which significantly outperforms existing algorithms. Second, considering the extremely low cost required of replacement policies in virtual memory management, the dissertation proposes a policy that meets these requirements and considerably exceeds the performance of existing policies. Third, the dissertation provides an effective scheme to protect the system from thrashing when running memory-intensive applications. Finally, the dissertation provides a multi-level block placement and replacement protocol in a distributed client-server environment, exploiting non-uniform locality strengths in the I/O access requests. The methodology used in this study includes careful application behavior characterization, system requirement analysis, algorithm design, trace-driven simulation, and system implementation. A main conclusion of the work is that there is still much room for innovation and significant performance improvement in the seemingly mature and stable policies that have been broadly used in current operating system design.
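One standard way to quantify locality strength of the kind this dissertation studies is the reuse (LRU stack) distance of each access; the sketch below illustrates that metric and is not necessarily the dissertation's own quantification method:

```python
def reuse_distances(trace):
    """Reuse (LRU stack) distance of each access: the number of
    distinct other items referenced since the previous access to the
    same item, or None on a first access.  Small distances indicate
    strong temporal locality; an LRU cache of capacity C hits exactly
    the accesses with distance < C."""
    stack = []   # distinct items, most recently used first
    dists = []
    for x in trace:
        if x in stack:
            d = stack.index(x)
            stack.remove(x)
        else:
            d = None
        dists.append(d)
        stack.insert(0, x)
    return dists

print(reuse_distances(["a", "b", "c", "a", "a", "b"]))
# [None, None, None, 2, 0, 2]
```

The histogram of these distances gives a compact locality profile of an application, which is what replacement policies can then exploit.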

    Optimal caching of large multi-dimensional datasets

    We propose a novel organization for multi-dimensional data based on the concept of macro-voxels. This organization improves computer performance by enhancing spatial and temporal locality. Caching of macro-voxels not only reduces the required storage space but also leads to an efficient organization of the dataset, resulting in faster data access. We have developed a macro-voxel caching theory that predicts the optimal macro-voxel sizes required for minimum cache size and access time. The model also identifies a region of trade-off between time and storage, which can be exploited in making an efficient choice of macro-voxel size for this scheme. Based on the macro-voxel caching model, we have implemented a macro-voxel I/O layer in C, intended to be used as an interface between applications and datasets. It is capable of both scattered access, typical in online applications, and row/column access, typical in batched applications. We integrated this I/O layer in the ALIGN program (an online application), which aligns images based on 3D distance maps; this improved access time by a factor of 3 when accessing local disks and a factor of 20 for remote disks. We also applied the macro-voxel caching scheme to SPEC's Seismic (batched application) benchmark datasets, which improved the read process by a factor of 8. Ph.D., Electrical and Computer Engineering -- Drexel University, 200
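The macro-voxel idea can be illustrated by the coordinate mapping that groups neighbouring voxels into one contiguous block, so nearby accesses hit the same cached block (a sketch with arbitrary block sizes; names are illustrative, and the thesis's actual layer is implemented in C):

```python
def macro_voxel_index(coord, block):
    """Map a 3-D voxel coordinate to its macro-voxel (block) index and
    the offset within that block.  Voxels sharing a block index are
    stored contiguously, which is what improves spatial locality."""
    block_idx = tuple(c // b for c, b in zip(coord, block))
    offset = tuple(c % b for c, b in zip(coord, block))
    return block_idx, offset

# Neighbouring voxels (17, 5, 9) and (18, 6, 8) land in the same
# 8x8x8 macro-voxel, so one block fetch serves both accesses.
print(macro_voxel_index((17, 5, 9), (8, 8, 8)))  # ((2, 0, 1), (1, 5, 1))
print(macro_voxel_index((18, 6, 8), (8, 8, 8)))  # ((2, 0, 1), (2, 6, 0))
```

Choosing the block dimensions is exactly the time/storage trade-off the caching model in the abstract optimizes.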

    Efficient Home-Based protocols for reducing asynchronous communication in shared virtual memory systems

    This thesis presents an exhaustive evaluation of the distributed memory systems known as Shared Virtual Memory systems. These systems have characteristics that make them especially attractive, such as their relatively low cost, high portability, and shared-memory programming paradigm. The evaluation consists of two parts. The first details the design foundations and the state of the art of research on this type of system. The second studies the behavior of a representative set of parallel workloads with respect to three characterization axes closely related to performance in these systems. While the first part suggests the hypothesis that asynchronous communication is one of the main causes of performance loss in Shared Virtual Memory systems, the second not only confirms it, but also offers a detailed analysis of the workloads from which information is obtained about potential asynchronous communication under different system parameters. The result of the evaluation is used to propose two new protocols for the operation of these systems that use a minimum of hardware resources, achieving performance similar to, and in some cases even higher than, systems that use special-purpose hardware circuits to reduce asynchronous communication. In particular, one of the proposed protocols is compared with a well-known hardware technique for reducing asynchronous communication, obtaining satisfactory results that complement the compared technique. All the models and techniques used in this work have been implemented and evaluated using a new simulation environment developed in the context of this work. Petit Martí, SV. (2003). Efficient Home-Based protocols for reducing asynchronous communication in shared virtual memory systems [Unpublished doctoral thesis].
    Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/2908