
    Exploiting inter-reference time characteristic of Web cache workload for web cache replacement design

    Caching objects in the Internet environment aims to reduce bandwidth consumption and improve system response time as perceived by users. Researchers believe that web cache performance depends on user access behavior, and many studies of Internet web cache workloads have therefore been conducted. However, the inter-reference time of successive requests in the web cache environment has rarely been studied. This paper explores the workload characteristics of the Internet web cache, especially the inter-reference time (IRT). Based on correlation tests and trend analysis, it concludes that IRT is a reasonable metric for a web cache replacement policy
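    The inter-reference time the abstract studies can be computed directly from a request trace. The sketch below is illustrative only; the trace format (timestamp, object-id pairs) is an assumption, not taken from the paper.

    ```python
    # Hypothetical sketch: extracting inter-reference times (IRT) from a
    # web cache request trace. The (timestamp, object_id) trace format is
    # an assumption for illustration.
    from collections import defaultdict

    def inter_reference_times(trace):
        """trace: list of (timestamp, object_id) pairs sorted by time.
        Returns object_id -> list of gaps between successive requests."""
        last_seen = {}
        irts = defaultdict(list)
        for t, obj in trace:
            if obj in last_seen:
                irts[obj].append(t - last_seen[obj])
            last_seen[obj] = t
        return dict(irts)

    trace = [(0, "a"), (3, "b"), (5, "a"), (9, "a"), (10, "b")]
    print(inter_reference_times(trace))  # {'a': [5, 4], 'b': [7]}
    ```

    The per-object IRT lists produced this way are what correlation tests and trend analysis would then be run over.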

    Regularly expected reference-time as a metric of web cache replacement policy

    Internet access has grown significantly. In fact, more than one user often accesses the same object, so there is an opportunity to reduce this redundancy by placing an intermediate store, called a cache, between users and servers. This approach improves bandwidth consumption and system response time as perceived by users. When the web cache size is limited, the objects in the cache must be managed so that the hit ratio and byte hit ratio are maximized. Previous research shows that cache replacement performance depends on user/program access behavior; therefore, the success of IRT in memory cache replacement does not guarantee the same result in the web cache environment. Researchers have explored the regularity of user access and included this characteristic in a web cache replacement metric. Others use the regularity, combined with past access frequency, to predict future occurrences, applying statistical or data mining approaches; however, such prediction processes are time-consuming. This paper therefore proposes a simple approach to predicting the next object reference. It assumes that objects may be accessed regularly, so that a measure such as DA-IRT can be used to calculate the time of the next object reference, called the regularly expected reference time (RERT). The object with the longest RERT is evicted soonest from the web cache. Experimental results show that RERT performance depends on user access behavior and is the opposite of the DA-IRT policy
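    The eviction rule the abstract states (the object with the longest RERT is evicted first) can be sketched as below. How RERT is derived from DA-IRT is not specified here, so estimating the next reference as the last access time plus the mean inter-reference time is a labeled assumption.

    ```python
    # Hypothetical sketch of RERT-based eviction. Estimating the next
    # reference time as last access + mean inter-reference gap is an
    # assumption; the paper's DA-IRT-based formula may differ.
    def rert(history):
        """history: sorted access timestamps of one object.
        Returns the regularly expected (next) reference time."""
        if len(history) < 2:
            return float("inf")  # no regularity observed yet
        gaps = [b - a for a, b in zip(history, history[1:])]
        return history[-1] + sum(gaps) / len(gaps)

    def choose_victim(cache):
        """cache: object_id -> access history. The object whose expected
        next reference lies farthest in the future is evicted first."""
        return max(cache, key=lambda obj: rert(cache[obj]))

    cache = {"a": [0, 5, 9], "b": [3, 10], "c": [2, 4, 6]}
    print(choose_victim(cache))  # 'b': expected next at 10 + 7 = 17
    ```

    An object seen only once has no observed regularity, so it is given an infinite RERT here and becomes the first eviction candidate; that tie-breaking choice is also an assumption.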

    Cooperative Interval Caching in Clustered Multimedia Servers

    In this project, we design a cooperative interval caching (CIC) algorithm for clustered video servers and evaluate its performance through simulation. The CIC algorithm describes how the distributed caches in the cluster cooperate to serve a given request. With CIC, a clustered server can accommodate nearly twice as many (95% more) cached streams as a clustered server without cache cooperation. CIC uses two major processes to find available cache space for a given request in the cluster: finding the server that holds the information about the request preceding the given one, and finding another server that may have available cache space if the current server turns out not to have enough. The performance study shows that it is better to direct requests for the same movie to the same server, so that a request can always find the information about its preceding request on that server; the CIC algorithm uses a scoreboard mechanism to achieve this. The results also show that when the current server fails to find cache space for a given request, randomly selecting the next server to probe works well. The combination of a scoreboard for locating preceding-request information and random selection of the next server outperforms other combinations of approaches by 86%. With CIC, the cooperative distributed caches can support as many cached streams as one integrated cache does, and in some cases more. The CIC algorithm makes every server in the cluster perform identical tasks, eliminating any single point of failure and thereby increasing the availability of the server cluster. The algorithm also specifies how to smoothly add a server to, or remove one from, the cluster, making the cluster scalable
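    The two lookup steps the abstract describes can be sketched as follows: a scoreboard routes requests for the same movie to one server, and a randomly chosen server is probed when the current one lacks cache space. The server and cache-space representations below are illustrative assumptions, not the paper's data structures.

    ```python
    # Hypothetical sketch of CIC's two placement steps: scoreboard routing
    # per movie, then random probing for free cache space. The free-space
    # bookkeeping is an illustrative simplification.
    import random

    class Cluster:
        def __init__(self, n_servers, capacity):
            self.free = [capacity] * n_servers  # free cache space per server
            self.scoreboard = {}                # movie -> server holding its state

        def place(self, movie, size):
            # Step 1: route to the server tracking this movie's preceding request.
            s = self.scoreboard.setdefault(movie, hash(movie) % len(self.free))
            # Step 2: if it lacks space, probe randomly chosen servers.
            tried = {s}
            while self.free[s] < size and len(tried) < len(self.free):
                s = random.randrange(len(self.free))
                tried.add(s)
            if self.free[s] < size:
                return None  # no server has space; serve from disk
            self.free[s] -= size
            return s
    ```

    Because every server runs this identical placement logic, no single node is special, which mirrors the availability argument in the abstract.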