269 research outputs found

    Web Proxy Cache Replacement Policies Using Decision Tree (DT) Machine Learning Technique for Enhanced Performance of Web Proxy

    Get PDF
    A web cache is a mechanism for the temporary storage (caching) of web documents, such as HTML pages and images, to reduce bandwidth usage, server load, and perceived lag. A web cache stores copies of documents passing through it, and subsequent requests may be satisfied from the cache if certain conditions are met. In this paper, Decision Tree (DT), a machine learning technique, is used to increase the performance of traditional Web proxy caching policies such as SIZE and Hybrid. DT is integrated with these traditional Web proxy caching techniques to form better caching approaches, known as DT-SIZE and DT-Hybrid. The proposed approaches are evaluated by trace-driven simulation and compared with traditional Web proxy caching techniques. Experimental results reveal that the proposed DT-SIZE and DT-Hybrid significantly increase Pure Hit-Ratio and Byte Hit-Ratio and reduce latency when compared with SIZE and Hybrid.
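    A minimal sketch of what a DT-SIZE-style policy could look like: a decision tree predicts whether a cached object will be re-requested, and eviction prefers large objects that the tree classifies as unlikely to return. The feature layout and the eviction ordering are illustrative assumptions, not the paper's exact formulation.

        # Sketch of a DT-augmented SIZE replacement policy (assumptions noted above).
        from sklearn.tree import DecisionTreeClassifier

        class DTSizeCache:
            def __init__(self, capacity, classifier):
                self.capacity = capacity          # total bytes available
                self.used = 0
                self.objects = {}                 # url -> (size, features)
                self.clf = classifier             # trained DecisionTreeClassifier

            def _evict(self, needed):
                # Rank victims: objects predicted not-re-requested first, largest first.
                ranked = sorted(
                    self.objects.items(),
                    key=lambda kv: (self.clf.predict([kv[1][1]])[0], -kv[1][0]),
                )
                for url, (size, _) in ranked:
                    if self.used + needed <= self.capacity:
                        break
                    del self.objects[url]
                    self.used -= size

            def admit(self, url, size, features):
                if size > self.capacity:
                    return                        # never cache objects larger than the cache
                if self.used + size > self.capacity:
                    self._evict(size)
                if self.used + size <= self.capacity:
                    self.objects[url] = (size, features)
                    self.used += size

        # features per object might be [recency, frequency, size] derived from a trace;
        # clf = DecisionTreeClassifier().fit(X_train, y_train), y = re-requested or not.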

    Web proxy caching: Understanding and technological assimilation

    Get PDF
    Internet Service Providers (ISPs) usually include the concept of internet accelerators to reduce the average time a browser takes to obtain requested files. For system administrators, it is difficult to choose the proxy cache server configuration, since it is necessary to decide the values to be used for several variables. This article sets out the way in which the process of understanding and technological assimilation of the proxy cache service, a service of high organizational impact, was addressed. Moreover, this article is a product of the research project "Configuration Analysis of Proxy Cache Servers", in which relevant aspects of the performance of Squid as a proxy cache server were studied.

    Cooperative web proxy caching for media objects based on peer-to-peer systems

    Get PDF
    Web proxy caches are used to improve the performance of the World Wide Web (WWW). Many advantages can be gained from caching, such as improved hit rates, reduced network traffic, and lighter loads on origin servers. On the other hand, retrieving the same object many times consumes network bandwidth. Thus, to overcome this limitation, this work proposes a cooperative web caching approach for media objects based on peer-to-peer systems. Two performance metrics are used: Hit Ratio (HR) and Byte Hit Ratio (BHR). A simulation is carried out to study the effects of cooperative caching on the performance of web proxy caching policies. The results show that cooperative caching improves the performance of web proxy caching policies in delivering media objects.
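    A small sketch of the two metrics the paper uses, Hit Ratio (HR) and Byte Hit Ratio (BHR), computed over a request trace against a local cache plus a set of cooperating peer caches. The trace format and the peer-lookup rule are illustrative assumptions.

        # HR = hits / requests; BHR = bytes served from cache / total bytes requested.
        def evaluate(trace, local_cache, peer_caches):
            hits = bytes_hit = total = total_bytes = 0
            for url, size in trace:                 # trace: iterable of (url, size)
                total += 1
                total_bytes += size
                # A request is a hit if the local cache or any cooperating peer holds it.
                if url in local_cache or any(url in p for p in peer_caches):
                    hits += 1
                    bytes_hit += size
            return hits / total, bytes_hit / total_bytes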

    Intelligent cooperative web caching policies for media objects based on J48 decision tree and Naive bayes supervised machine learning algorithms in structured peer-to-peer systems

    Get PDF
    Web caching plays a key role in delivering web items to end users in the World Wide Web (WWW). On the other hand, cache size is a limitation of web caching. Furthermore, retrieving the same media object from the origin server many times consumes network bandwidth, while full caching of media objects is not practical: because of the cache's limited capacity, it exhausts cache storage while keeping only a few media objects. Moreover, traditional web caching policies such as Least Recently Used (LRU), Least Frequently Used (LFU), and Greedy Dual Size (GDS) suffer from cache pollution (media objects that are stored in the cache but not frequently visited, which negatively affects the performance of web proxy caching). In this work, intelligent cooperative web caching approaches based on the J48 decision tree and Naïve Bayes (NB) supervised machine learning algorithms are presented. The proposed approaches take advantage of structured peer-to-peer systems, in which the contents of peers' caches are shared using a Distributed Hash Table (DHT), to enhance the performance of the web caching policy. The performance of the proposed approaches is evaluated by running a trace-driven simulation on a dataset collected from the IRCache network. The results demonstrate that the proposed policies improve the performance of the traditional web caching policies LRU, LFU, and GDS in terms of Hit Ratio (HR) and Byte Hit Ratio (BHR). Moreover, the results are compared to the most relevant and state-of-the-art web proxy caching policies.
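    A minimal sketch of the cooperative lookup described above: object keys are mapped to peers through a DHT-style consistent hash, and a trained classifier (Naïve Bayes here) decides whether an object is worth keeping. The peer identifiers, features, and admission rule are assumptions for illustration.

        import hashlib
        from sklearn.naive_bayes import GaussianNB

        def responsible_peer(url, peers):
            # Hash the key onto the ring and pick the first peer at or after it.
            key = int(hashlib.sha1(url.encode()).hexdigest(), 16)
            ring = sorted((int(hashlib.sha1(p.encode()).hexdigest(), 16), p) for p in peers)
            for point, peer in ring:
                if key <= point:
                    return peer
            return ring[0][1]                   # wrap around the ring

        def should_cache(clf, features):
            # clf: a GaussianNB fitted on trace features, e.g. [recency, frequency, size],
            # predicting whether the media object will be requested again.
            return clf.predict([features])[0] == 1

        peers = ["peer-a", "peer-b", "peer-c"]  # hypothetical peer identifiers
        print(responsible_peer("http://example.com/video.mp4", peers))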

    Intelligent Cooperative Adaptive Weight Ranking Policy via dynamic aging based on NB and J48 classifiers

    Get PDF
    The increased usage of the World Wide Web leads to an increase in network traffic and creates a bottleneck in internet performance. For most people, access speed or response time is the most critical factor when using the internet. Response time is reduced by using the web proxy cache technique, which stores copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. But the cache size is limited, so cache replacement algorithms are used to remove pages from the cache when it is full. On the other hand, conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), and Randomized Policy may discard essential pages just before they are used. Furthermore, conventional algorithms cannot be well optimized, since evicting intelligently requires a decision before a page is replaced. Hence, this paper proposes an integration of the Adaptive Weight Ranking Policy (AWRP) with intelligent classifiers (NB-AWRP-DA and J48-AWRP-DA) via a dynamic aging factor. To enhance the classifiers' predictive power before integrating them with AWRP, particle swarm optimization (PSO) automated wrapper feature selection is used to choose the best subset of features that are relevant to and influence the classifiers' prediction accuracy. Experimental results show that NB-AWRP-DA enhances the performance of the web proxy cache across multiple proxy datasets by 4.008%, 4.087%, and 14.022% over LRU, LFU, and FIFO respectively, while J48-AWRP-DA increases HR by 0.483%, 0.563%, and 10.497% over LRU, LFU, and FIFO respectively. Meanwhile, the BHR of NB-AWRP-DA rises by 0.9911%, 1.008%, and 11.5842% over LRU, LFU, and FIFO respectively, and by 0.0204%, 0.0379%, and 10.6136% respectively using J48-AWRP-DA.
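    A rough sketch of a weight-ranking eviction with a dynamic aging factor and a classifier probability mixed in, in the spirit of NB-AWRP-DA: scores are fixed when an object is touched, and each eviction raises the aging floor so stale frequency counts cannot hold the cache. The weight formula itself is an assumption; the paper's formulation may differ.

        class AWRPDACache:
            # A score is computed when an object is inserted or hit, using the aging
            # clock at that moment; eviction removes the lowest-scored object and
            # advances the clock to that score (GreedyDual-style dynamic aging).
            def __init__(self):
                self.clock = 0.0
                self.score = {}                 # url -> stored score
                self.meta = {}                  # url -> (frequency, size)

            def on_access(self, url, size, clf_prob):
                freq = self.meta.get(url, (0, size))[0] + 1
                self.meta[url] = (freq, size)
                # Weight mixes frequency, size, and the classifier's estimated
                # probability of re-request (assumed formula).
                self.score[url] = self.clock + clf_prob * freq / size

            def evict_one(self):
                victim = min(self.score, key=self.score.get)
                self.clock = self.score[victim]   # dynamic aging: raise the floor
                del self.score[victim], self.meta[victim]
                return victim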

    Basis Token Consistency: A Practical Mechanism for Strong Web Cache Consistency

    Full text link
    With web caching and cache-related services like CDNs and edge services playing an increasingly significant role in the modern internet, the problem of the weak consistency and coherence provisions in current web protocols is becoming increasingly significant and drawing the attention of the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then present a brief review of proposed mechanisms which strengthen the consistency of caches in the web, focusing upon their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" or BTC; when implemented at the server, this mechanism allows any client (independent of the presence and conformity of any intermediaries) to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information which allows client caches to recognize the obsolescence of currently cached entities and to identify responses from other caches which are already stale in light of what has already been seen. The mechanism requires no deviation from the existing client-server communication model, and does not require servers to maintain any additional per-client state. We discuss how our mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the use of the Time-To-Live (TTL) heuristic. (National Science Foundation ANI-9986397, ANI-0095988)
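    A minimal sketch of the client-side check the abstract describes: the server annotates each response with version tokens for the resources it depends on, and the client rejects responses whose tokens are older than ones it has already seen. Treating tokens as monotonic per-resource counters is an assumption for illustration.

        class BTCClient:
            def __init__(self):
                self.seen = {}                  # resource id -> highest token observed

            def accept(self, response_tokens):
                # response_tokens: dict of resource id -> token (monotonic counter).
                # A response is stale if any token is older than one already observed.
                for rid, tok in response_tokens.items():
                    if tok < self.seen.get(rid, tok):
                        return False            # obsolete in light of what we've seen
                for rid, tok in response_tokens.items():
                    self.seen[rid] = max(tok, self.seen.get(rid, tok))
                return True

        client = BTCClient()
        assert client.accept({"/news": 5})      # first sight of /news at version 5
        assert not client.accept({"/news": 3})  # a cache serving an older version is stale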

    2 P2P or Not 2 P2P?

    Full text link
    In the hope of stimulating discussion, we present a heuristic decision tree that designers can use to judge the likely suitability of a P2P architecture for their applications. It is based on the characteristics of a wide range of P2P systems from the literature, both proposed and deployed. (Comment: 6 pages, 1 figure)
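    A toy rendering of the kind of heuristic decision tree the paper proposes: a handful of yes/no questions about an application that point toward or away from a P2P architecture. The questions and outcomes here are illustrative, not the paper's actual tree.

        def suggest_architecture(budget_limited, high_churn, needs_central_trust):
            if needs_central_trust:
                return "client-server"          # trust anchored at one party
            if budget_limited and not high_churn:
                return "P2P"                    # spread cost across stable peers
            if high_churn:
                return "hybrid (P2P with stable super-peers)"
            return "either; decide on operational grounds"

        print(suggest_architecture(budget_limited=True, high_churn=False,
                                   needs_central_trust=False))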