62 research outputs found

    Web Proxy Cache Replacement Policies Using Decision Tree (DT) Machine Learning Technique for Enhanced Performance of Web Proxy

    A web cache is a mechanism for the temporary storage (caching) of web documents, such as HTML pages and images, to reduce bandwidth usage, server load, and perceived lag. A web cache stores copies of documents passing through it, and subsequent requests may be satisfied from the cache if certain conditions are met. In this paper, Decision Tree (DT), a machine learning technique, is used to improve the performance of traditional web proxy caching policies such as SIZE and Hybrid. DT is integrated with these traditional techniques to form better caching approaches, known as DT-SIZE and DT-Hybrid. The proposed approaches are evaluated by trace-driven simulation and compared with the traditional web proxy caching techniques. Experimental results reveal that the proposed DT-SIZE and DT-Hybrid significantly increase the pure hit ratio and byte hit ratio and reduce latency when compared with SIZE and Hybrid.
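
    As a rough illustration of the integration the abstract describes, the sketch below lets a trained decision tree predict whether an object will be re-requested and uses that prediction to bias a SIZE-style eviction (largest objects go first, predicted-revisit objects are protected). The features, labels, and tie-breaking rule are assumptions for illustration, not the paper's exact design.

        # Illustrative sketch: a decision tree biasing SIZE-style eviction.
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical per-object features: [size_bytes, seconds_since_access, hits]
        X_train = [[500, 10, 8], [90000, 3600, 1], [2000, 60, 5], [150000, 7200, 1]]
        y_train = [1, 0, 1, 0]  # 1 = likely to be re-requested (assumed labels)
        clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

        def evict_victim(cache):
            # SIZE evicts the largest object; objects the tree expects to be
            # re-requested are considered last (assumed protection rule).
            def key(item):
                url, (size, age, hits) = item
                revisit = clf.predict([[size, age, hits]])[0]
                return (revisit, -size)  # non-revisit first, then largest first
            return min(cache.items(), key=key)[0]

        cache = {"/a.html": (500, 10, 8), "/big.iso": (150000, 7200, 1)}
        print(evict_victim(cache))  # -> /big.iso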

    A Web Cache Replacement Strategy for Safety-Critical Systems

    A Safety-Critical System (SCS), such as a spacecraft, is usually a complex system that produces a large amount of test data during a comprehensive testing process. This data is typically managed by a comprehensive test data query system, and the primary factor affecting the management experience is the performance of querying the test data. Managing and maintaining such huge and complex testing data is a big challenge. To address it, a web cache replacement algorithm is needed that can effectively improve query performance and reduce network latency. However, a general-purpose web cache replacement algorithm usually cannot be applied directly to this type of system because of its low hit rate and low byte hit rate. To improve both rates, a data stream mining technology is introduced and a new web cache algorithm, GDSF-DST (Greedy Dual-Size Frequency with Data Stream Technology), is proposed for the SCS based on the original GDSF algorithm. The experimental results show that, compared with state-of-the-art traditional algorithms, GDSF-DST achieves competitive performance and improves the hit rate and byte hit rate by about 20%.
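
    For context, the GDSF policy that GDSF-DST extends assigns each object p a priority K(p) = L + F(p) * C(p) / S(p), where L is an inflation value raised to the evicted object's priority on each eviction, F is access frequency, C is fetch cost, and S is size. The sketch below implements that base rule; the data-stream-mining component of GDSF-DST is only hinted at by the plain frequency counter, which a stream sketch (e.g., a frequent-items counter) would replace.

        class GDSFCache:
            # Base GDSF rule: K(p) = L + F(p) * C(p) / S(p); the data-stream
            # component of GDSF-DST would replace the plain counter below.
            def __init__(self, capacity_bytes):
                self.capacity, self.used, self.L = capacity_bytes, 0, 0.0
                self.entries = {}  # url -> [priority, size, freq]

            def access(self, url, size, cost=1.0):
                if size > self.capacity:
                    return  # object can never fit
                freq = self.entries[url][2] + 1 if url in self.entries else 1
                if url not in self.entries:
                    while self.used + size > self.capacity:
                        victim = min(self.entries, key=lambda u: self.entries[u][0])
                        self.L = self.entries[victim][0]  # inflate L on eviction
                        self.used -= self.entries[victim][1]
                        del self.entries[victim]
                    self.used += size
                self.entries[url] = [self.L + freq * cost / size, size, freq]

        cache = GDSFCache(100)
        for url, size in [("/a", 40), ("/b", 50), ("/a", 40), ("/c", 30)]:
            cache.access(url, size)
        print(sorted(cache.entries))  # "/a" survives thanks to its frequency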

    Intelligent cooperative web caching policies for media objects based on J48 decision tree and Naive bayes supervised machine learning algorithms in structured peer-to-peer systems

    Web caching plays a key role in delivering web items to end users in the World Wide Web (WWW). However, cache size is a fundamental limitation of web caching, and retrieving the same media object from the origin server many times wastes network bandwidth. Full caching of media objects is not a practical solution either: because of the cache's limited capacity, it consumes the storage while keeping only a few objects. Moreover, traditional web caching policies such as Least Recently Used (LRU), Least Frequently Used (LFU), and Greedy Dual Size (GDS) suffer from cache pollution (media objects that are stored in the cache but not frequently visited, which negatively affects the performance of web proxy caching). In this work, intelligent cooperative web caching approaches based on the J48 decision tree and Naïve Bayes (NB) supervised machine learning algorithms are presented. The proposed approaches take advantage of structured peer-to-peer systems, where the contents of peers' caches are shared using a Distributed Hash Table (DHT), to enhance the performance of the web caching policy. The performance of the proposed approaches is evaluated by running a trace-driven simulation on a dataset collected from the IRCache network. The results demonstrate that the proposed policies improve the performance of the traditional LRU, LFU, and GDS policies in terms of Hit Ratio (HR) and Byte Hit Ratio (BHR). Moreover, the results are compared to the most relevant and state-of-the-art web proxy caching policies.
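
    The cooperative scheme can be pictured as two pieces: a DHT that maps each URL to the peer responsible for indexing it, and a supervised classifier that gates admission into the local cache. The sketch below shows that control flow only; the peer table, hashing scheme, and classifier interface are illustrative assumptions, not the paper's protocol.

        # Sketch: DHT-style cooperative lookup plus a classifier admission gate.
        import hashlib

        PEERS = ["peer-0", "peer-1", "peer-2"]

        def responsible_peer(url):
            # Map the URL's key onto a peer (Chord-style, simplified to modulo).
            h = int(hashlib.sha1(url.encode()).hexdigest(), 16)
            return PEERS[h % len(PEERS)]

        def fetch(url, local_cache, dht_index, admit):
            if url in local_cache:
                return local_cache[url]                  # local hit
            obj = dht_index.get(responsible_peer(url), {}).get(url)  # peer hit
            if obj is None:
                obj = "<body of %s>" % url               # origin-server fetch (stub)
            if admit(url, obj):                          # J48/NB-style admission gate
                local_cache[url] = obj
            return obj

        # Toy run: the gate caches only small objects (stand-in for the classifier).
        local, index = {}, {"peer-0": {}, "peer-1": {}, "peer-2": {}}
        print(fetch("/video.mp4", local, index, lambda u, o: len(o) < 64))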

    QoE for over-the-top multimedia in wireless networks

    One of the goals of an operator is to improve the Quality of Experience (QoE) of a client in networks where over-the-top (OTT) content is being delivered. The appearance of services like YouTube, Netflix, or Twitch, where in the first case more than 300 hours of video are uploaded to the platform every minute, brings problems to the managed data networks that already exist, as well as challenges in solving them. Video traffic corresponds to 75% of all data transmitted on the Internet. Thus, not only has the Internet become the de facto video transmission path, but general data traffic also continues to grow exponentially, driven by the desire to consume more content. This thesis presents two model proposals and an architecture that aim to improve the users' quality of experience by predicting, in advance, the amount of video that can be prefetched, as a way to optimize delivery efficiency where the quality of service cannot be guaranteed. The prefetching is done at the cache server closest to the client. For that, an Analytic Hierarchy Process (AHP) is used: through a subjective method of attribute comparison, and by applying a weighted function to the measured quality-of-service metrics, the amount to prefetch is obtained. Besides this method, artificial intelligence techniques are also considered. With neural networks, the behavior of OTT networks is self-learned from more than 14,000 hours of video consumption under different quality conditions, in an attempt to estimate the experience perceived by the user and maximize it without degrading normal service delivery. Finally, both methods are evaluated and a proof of concept is carried out with users on a high-speed train.
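
    A minimal worked example of the AHP step: a pairwise comparison matrix over quality-of-service metrics is reduced to weights (here via the common geometric-mean approximation of the principal eigenvector), and a weighted score of the measured metrics scales the amount to prefetch. The matrix entries, metric names, and 30-second prefetch window are assumed for illustration, not taken from the thesis.

        # Sketch of an AHP-weighted prefetch decision (values are assumptions).
        import math

        # Pairwise comparison matrix over (throughput, packet loss, RTT):
        A = [[1,   3,   5],
             [1/3, 1,   2],
             [1/5, 1/2, 1]]

        # Geometric-mean approximation of the principal eigenvector.
        gm = [math.prod(row) ** (1 / len(row)) for row in A]
        weights = [g / sum(gm) for g in gm]

        # Normalized QoS measurements in [0, 1] (higher = better), assumed values.
        qos = [0.8, 0.6, 0.4]
        score = sum(w * m for w, m in zip(weights, qos))
        prefetch_seconds = round(score * 30)  # scale into a 0-30 s prefetch window
        print(weights, prefetch_seconds)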

    Web pre-fetching schemes using Machine Learning for Mobile Cloud Computing

    Pre-fetching is one of the techniques used to reduce latency in network traffic on the Internet. We propose applying it in a Mobile Cloud Computing (MCC) environment to handle latency issues in the context of data management. However, overaggressive use of pre-fetching causes overhead and slows down the system, since pre-fetching the wrong object data wastes the storage capacity of a mobile device. Many studies have used Machine Learning (ML) to solve such issues, but in the MCC environment pre-fetching with ML is not widely applied. Therefore, this research implements ML techniques to classify the web objects that require decision rules. These decision rules are generated using several ML algorithms, namely J48, Random Tree (RT), Naive Bayes (NB), and Rough Set (RS), and represent the characteristics of the input data. The experimental results reveal that J48 performs well in classifying the web objects for all three datasets, with testing accuracies of 95.49%, 98.28%, and 97.9% for the UTM blog data, IRCache, and Proxy Cloud Computing (CC) datasets respectively. This shows that the J48 algorithm can handle cloud data management well and give good recommendations to users with or without cloud storage.
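
    The classification step might look like the following sketch, which uses scikit-learn's CART decision tree as a stand-in for Weka's J48 (J48 is a Java implementation of C4.5, so the split criteria differ). Feature names, labels, and the train/test split are assumptions for illustration.

        # Sketch: classifying web objects as worth pre-fetching with a decision tree.
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical per-object features: [size_kB, hits_last_hour, is_image]
        X = [[12, 9, 1], [900, 0, 0], [45, 4, 1], [2048, 1, 0], [8, 15, 1], [512, 0, 0]]
        y = [1, 0, 1, 0, 1, 0]  # 1 = worth pre-fetching/caching (assumed labels)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
        model = DecisionTreeClassifier().fit(X_tr, y_tr)
        print("test accuracy:", model.score(X_te, y_te))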

    Cacheability study for web content delivery

    Master's thesis (Master of Science).

    Evaluation, Analysis and adaptation of web prefetching techniques in current web

    This dissertation is focused on the study of the prefetching technique applied to the World Wide Web. This technique lies in processing (e.g., downloading) a web request before the user actually makes it. By doing so, the waiting time perceived by the user can be reduced, which is the main goal of web prefetching techniques. The study of the state of the art about web prefetching showed the heterogeneity that exists in its performance evaluation. This heterogeneity mainly concerns four issues: i) there was no open framework to simulate and evaluate the already proposed prefetching techniques; ii) there was no uniform selection of the performance indexes to be maximized, or even of their definitions; iii) there were no comparative studies of prediction algorithms taking into account the costs and benefits of web prefetching at the same time; and iv) techniques were evaluated under very different or hardly significant workloads. During the research work, we have contributed to homogenizing the evaluation of prefetching performance by developing an open simulation framework that reproduces in detail all the aspects that impact prefetching performance. In addition, prefetching performance metrics have been analyzed in order to clarify their definitions and to detect the most meaningful ones from the user's point of view. We also proposed an evaluation methodology that considers the cost and the benefit of prefetching at the same time. Finally, the importance of using current workloads to evaluate prefetching techniques has been highlighted; otherwise, wrong conclusions could be reached. The potential benefits of each web prefetching architecture were analyzed, finding that collaborative predictors could reduce almost all the latency perceived by users. The first step toward developing a collaborative predictor is to make predictions at the server, so this thesis focuses on an architecture with a server-located predictor. The environment conditions that can be found in the web are als…
    Doménech i de Soria, J. (2007). Evaluation, analysis and adaptation of web prefetching techniques in current web [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1841
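
    The cost/benefit methodology the dissertation argues for needs both benefit metrics (how many prefetched objects the user actually requested) and cost metrics (extra traffic). A minimal sketch of that bookkeeping follows; the function and field names, and the example figures, are assumptions for illustration rather than the thesis's exact index definitions.

        # Sketch: cost/benefit indexes for a prefetching run (assumed definitions).
        def prefetch_metrics(prefetched, requested, bytes_prefetched, bytes_requested):
            useful = prefetched & requested
            precision = len(useful) / len(prefetched) if prefetched else 0.0
            recall = len(useful) / len(requested) if requested else 0.0  # coverage
            overhead = bytes_prefetched / bytes_requested if bytes_requested else 0.0
            return precision, recall, overhead

        p, r, o = prefetch_metrics({"/a", "/b", "/c"}, {"/a", "/d"}, 300000, 120000)
        print(f"precision={p:.2f} recall={r:.2f} traffic x{o:.2f}")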

    Long Short Term Based Memory Hardware Prefetcher

    Hardware prefetching is an efficient way to hide the cache miss penalty caused by long memory access latency. Accuracy, coverage, and timeliness are the three primary metrics for evaluating a hardware prefetcher design, and highly accurate prefetchers are required to predict the complex memory access patterns of multicore systems. In this paper, we propose a long short-term memory (LSTM) prefetcher, a neural-network-based hardware prefetcher that achieves high prefetch accuracy and coverage while improving prefetch timeliness. The proposed LSTM prefetcher achieves higher accuracy and coverage by training neural networks to predict long memory access patterns. It improves timeliness in two ways. First, multiple prefetches can be issued on a single cache access. Second, a simple Next-N-Line prefetcher is integrated with the LSTM prefetcher to accelerate predictions when good spatial locality exists. The proposed LSTM prefetcher is the first prefetcher design that uses a recurrent neural network. Three case studies are presented, showing that the proposed LSTM prefetcher achieves 98.6%, 83.5%, and 61% accuracy respectively, while the state-of-the-art variable length delta prefetcher (VLDP) achieves 0%, 75%, and 26.6% accuracy in predicting the same sequences.
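
    The core idea, predicting the next access from a learned model of the access history, can be sketched in software as an LSTM classifying the next address delta from a short window of past deltas. The vocabulary, model size, and training loop below are illustrative choices; a real hardware prefetcher would be a fixed-point, pipelined design, which this sketch ignores.

        # Sketch: an LSTM predicting the next address delta (illustrative only).
        import torch
        import torch.nn as nn

        DELTAS = [1, 2, 4, 8]            # toy vocabulary of address deltas
        seq = [0, 1] * 32                # indices into DELTAS: +1, +2, +1, +2, ...

        class DeltaLSTM(nn.Module):
            def __init__(self, vocab=4, hidden=16):
                super().__init__()
                self.emb = nn.Embedding(vocab, hidden)
                self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
                self.out = nn.Linear(hidden, vocab)

            def forward(self, x):
                h, _ = self.lstm(self.emb(x))
                return self.out(h[:, -1])  # logits for the next delta

        model = DeltaLSTM()
        opt = torch.optim.Adam(model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()
        x = torch.tensor([seq[i:i + 4] for i in range(len(seq) - 4)])
        y = torch.tensor([seq[i + 4] for i in range(len(seq) - 4)])
        for _ in range(200):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        nxt = model(x[:1]).argmax(1).item()
        print("predicted next delta:", DELTAS[nxt])  # prefetch addr + delta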