80 research outputs found

    Rethinking Distributed Caching Systems Design and Implementation

    Distributed caching systems based on in-memory key-value stores have become a crucial component of fast and efficient content delivery in modern web applications. However, due to the dynamic and skewed execution environments and workloads under which such systems typically operate, several problems arise in the form of load imbalance. This thesis addresses the sources of load imbalance in caching systems, mainly: i) data placement, which relates to the distribution of data items across servers, and ii) data item access frequency, which describes the number of requests each server has to process and how well each server copes with that load. It then provides several strategies to overcome each source of imbalance in isolation. As a use case, we analyse Memcached and its variants, and propose a novel solution for distributed caching systems. Our solution revolves around increasing parallelism through load segregation, together with techniques for overcoming load discrepancies in high-saturation scenarios, mostly through access re-arrangement and internal replication.
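
    The abstract does not name a specific placement scheme, but the data-placement imbalance it describes arises in the key-to-server mapping of Memcached-style clusters. Below is a purely illustrative sketch of the common virtual-node consistent-hashing placement mechanism whose residual skew motivates such work; the `ConsistentHashRing` class, its parameters, and the server names are assumptions for the example, not the thesis's design.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps keys to cache servers; virtual nodes smooth out placement skew."""

    def __init__(self, servers, vnodes=100):
        self.ring = []  # sorted list of (hash, server) points on the ring
        for server in servers:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        # First ring point clockwise from the key's hash (wrapping at the end).
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["cache-0", "cache-1", "cache-2"])
print(ring.server_for("user:42"))  # key always lands on the same server
```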

    Quality-driven management of video streaming services in segment-based cache networks


    Cache Memory: An Analysis on Replacement Algorithms and Optimization Techniques

    Caching strategies can improve the overall performance of a system by allowing the fast processor and the slow memory to operate at the same pace. One important factor in caching is the replacement policy. Advances in technology have resulted in a huge number of techniques and algorithms implemented to improve cache performance. In this paper, analysis is done on different cache optimization techniques as well as replacement algorithms. Furthermore, this paper presents a comprehensive statistical comparison of cache optimization techniques. To the best of our knowledge there is no numerical measure which can rate a specific cache optimization technique, so we tried to come up with such a figure. By statistical comparison we find out which technique is the most consistent among all. For this purpose we calculated the mean and the coefficient of variation (CV), where the CV indicates how consistent a technique is. Comparative analysis of the different techniques shows that the victim cache is the most consistent technique among all.
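
    As a concrete illustration of the paper's statistical method, this sketch computes the mean and the coefficient of variation (CV = standard deviation / mean) over hit-rate samples. The technique names and numbers are invented for the example, not taken from the paper.

```python
import statistics

# Hypothetical hit-rate samples (%) per optimization technique across workloads.
samples = {
    "victim_cache":   [91.2, 90.8, 91.5, 90.9],
    "way_prediction": [88.0, 93.5, 85.2, 95.1],
}

for technique, rates in samples.items():
    mean = statistics.mean(rates)
    cv = statistics.stdev(rates) / mean  # coefficient of variation
    print(f"{technique}: mean={mean:.2f}, CV={cv:.4f}")

# A lower CV means a more consistent technique, which is the criterion by
# which the paper identifies the victim cache as the most consistent.
```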

    Boustrophedonic Frames: Quasi-Optimal L2 Caching for Textures in GPUs

    © 2023 Copyright held by the owner/author(s). This document is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). It is the accepted version of a published work that appeared in final form in the 32nd International Conference on Parallel Architectures and Compilation Techniques (PACT), Vienna, Austria, October 2023. To access the final edited and published work see https://doi.org/10.1109/PACT58117.2023.00019

    The literature is plentiful in works exploiting cache locality for GPUs, the majority of which explore replacement or bypassing policies. In this paper, however, we go beyond this exploration by constructing a formal proof for a no-overhead, quasi-optimal technique for caching textures in graphics workloads. Textures make up a significant part of main memory traffic in mobile GPUs, which contributes to the total GPU energy consumption. Since texture accesses use a shared L2 cache, improving L2 texture caching efficiency decreases main memory traffic, thus improving energy efficiency, which is crucial for mobile GPUs. Our proposal reaches quasi-optimality by exploiting the frame-to-frame reuse of textures in graphics. We do this by traversing frames in a boustrophedonic manner w.r.t. the frame-to-frame tile order. We first approximate the texture access trace by a circular trace and then forge a formal proof that our proposal is optimal for such traces. We also complement the proof with empirical data demonstrating the quasi-optimality of our no-cost proposal.
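
    A minimal sketch of the traversal idea, under the simplifying assumption that a frame's tiles form one linear sequence (the paper works w.r.t. the frame-to-frame tile order of a real GPU pipeline): even frames walk the sequence forward and odd frames walk it backward, so each frame begins with the tiles, and hence the textures, most recently touched by the previous frame.

```python
def tile_order(num_tiles, frame_index):
    """Boustrophedonic frame-to-frame tile order: alternate the walk
    direction each frame so consecutive frames meet at the same end and
    recently cached textures are reused first."""
    order = list(range(num_tiles))
    return order if frame_index % 2 == 0 else order[::-1]

for f in range(3):
    print(f"frame {f}:", tile_order(6, f))
# frame 0: [0, 1, 2, 3, 4, 5]
# frame 1: [5, 4, 3, 2, 1, 0]
# frame 2: [0, 1, 2, 3, 4, 5]
```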

    Resource Management in Multi-Access Edge Computing (MEC)

    This PhD thesis investigates effective ways of managing the resources of a Multi-Access Edge Computing (MEC) platform in 5th Generation (5G) mobile networks. The main characteristics of MEC include its distributed nature, proximity to users, and high availability, and solutions for effective resource management have been proposed on the basis of these key features. This research addresses two aspects of resource management in MEC: the computational resource and the caching resource, which corresponds to the services provided by the MEC. MEC is a new 5G-enabling technology proposed to reduce latency by bringing cloud computing capability closer to end-user Internet of Things (IoT) and mobile devices. MEC would support latency-critical user applications such as driverless cars and e-health, which will depend on resources and services provided by the MEC. However, MEC has limited computational and storage resources compared to the cloud, so it is important to ensure reliable MEC network communication during resource provisioning by eradicating the chance of deadlock. Deadlock may occur when a huge number of devices contend for a limited amount of resources if adequate measures are not put in place, and eradicating it while scheduling and provisioning resources on MEC is crucial to achieving the highly reliable and readily available system that latency-critical applications require. In this research, a deadlock-avoidance resource provisioning algorithm has been proposed for industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates Banker's resource-request algorithm and uses Software Defined Networking (SDN) to reduce communication overhead. Simulation and experimental results show that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to more reliable network interaction between mobile stations and MEC platforms.

    Additionally, this research explores the use of MEC as a caching platform, as it is proclaimed a key technology for reducing service processing delays in 5G networks. Caching on MEC decreases service latency and improves data access by allowing direct content delivery through the edge without fetching data from the remote server; it is also deemed an effective approach that guarantees more reachability owing to its proximity to end-users. In this regard, a novel hybrid content caching algorithm has been proposed for MEC platforms to increase their caching efficiency. The proposed algorithm unifies a modified Belady's algorithm with a distributed cooperative caching algorithm to improve data access while reducing latency, and a polynomial-fit algorithm with Lagrange interpolation is employed to predict the future request references that Belady's algorithm needs. Experimental results show that the proposed algorithm obtains 4% more cache hits than the case-study algorithms thanks to its selective caching approach, and that the use of the cooperative algorithm can improve total cache hits by up to 80%.

    Furthermore, this thesis explores another predictive caching scheme to further improve caching efficiency, the motivation being to investigate a predictive approach that improves on the former. The result is a Predictive Collaborative Replacement (PCR) caching framework consisting of three schemes, each addressing a particular problem: a proactive predictive scheme for the continuous change in cache popularity trends, a collaborative scheme for cache redundancy in the collaborative space, and a replacement scheme that evicts cold cache blocks to increase the hit ratio. Simulation experiments show that the replacement scheme achieves 3% more cache hits than existing replacement algorithms such as Least Recently Used, Multi Queue and Frequency-Based Replacement. The PCR algorithm has been tested on a real dataset (the MovieLens 20M dataset) and compared with an existing contemporary predictive algorithm; results show that PCR performs better, with a 25% increase in hit ratio at a 10% CPU utilization overhead.
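
    The deadlock-avoidance scheme incorporates Banker's resource-request algorithm; below is a minimal sketch of the classic Banker's safety check it builds on, with hypothetical stations and numbers (the thesis's SDN integration and request handling are not shown).

```python
def is_safe_state(available, allocation, maximum):
    """Banker's safety check: grant resources only if some completion order
    lets every station finish. available: free units per resource type;
    allocation/maximum: per-station current holdings and maximum claims."""
    need = [[m - a for m, a in zip(mx, al)]
            for mx, al in zip(maximum, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Station i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Three stations contending for one resource type on a MEC host
# (illustrative numbers): safe, because order B -> A -> C completes.
print(is_safe_state([3], [[1], [2], [2]], [[4], [3], [5]]))  # True
```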

    TCOR: a tile cache with optimal replacement

    Cache replacement policies are known to have an important impact on hit rates. The OPT replacement policy [27] has been formally proven optimal for minimizing misses. Because it needs to look far ahead at future memory accesses, it is usually reduced to a yardstick for measuring the efficacy of practical caches. In this paper, we bring OPT to life in architectures for mobile GPUs, for which energy efficiency is of great consequence. We also mold other factors in the memory hierarchy to enhance its impact. The end results are a 13.8% decrease in memory hierarchy energy consumption and increased throughput in the Tiling Engine. We also observe a 5.5% decrease in total GPU energy and a 3.7% increase in frames per second (FPS).

    This work has been supported by the CoCoUnit ERC Advanced Grant of the EU's Horizon 2020 program (grant No 833057), the Spanish State Research Agency (MCIN/AEI) under grant PID2020-113172RB-I00, the ICREA Academia program and the AGAUR grant 2020-FISDU-00287. We would also like to thank the anonymous reviewers for their valuable comments.
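
    For reference, a minimal sketch of the OPT policy [27] on an explicit access trace: on a miss with a full cache, evict the resident block whose next use lies furthest in the future. The dependence on the full future trace is what normally makes OPT impractical; the trace and capacity below are illustrative, not TCOR's actual tile workload.

```python
def opt_misses(trace, capacity):
    """Count misses under Belady's OPT replacement for a given access trace."""
    cache, misses = set(), 0
    for i, block in enumerate(trace):
        if block in cache:
            continue
        misses += 1
        if len(cache) >= capacity:
            future = trace[i + 1:]
            # Victim: the cached block reused furthest away (or never again).
            victim = max(cache, key=lambda b: future.index(b)
                         if b in future else float("inf"))
            cache.remove(victim)
        cache.add(block)
    return misses

print(opt_misses(["A", "B", "C", "A", "B", "D", "A"], capacity=2))  # 5
```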

    Building Internet caching systems for streaming media delivery

    The proxy has been widely and successfully used to cache static Web objects fetched by a client, so that subsequent clients requesting the same objects can be served directly from the proxy instead of from other, faraway sources, reducing the server's load, the network traffic and the client response time. However, with the dramatic increase of streaming media objects on the Internet, the existing proxy cannot deliver them efficiently because of their large sizes and clients' real-time requirements.

    In this dissertation, we design, implement, and evaluate cost-effective, high-performance proxy-based Internet caching systems for streaming media delivery. Addressing the conflicting performance objectives of streaming media delivery, we first propose an efficient segment-based streaming media proxy system model. This model has guided us in designing a practical streaming proxy, called Hyper-Proxy, that aims to deliver streaming media data to clients with minimum playback jitter and a small startup latency while achieving high caching performance. Second, we have implemented Hyper-Proxy by leveraging the existing Internet infrastructure; it enables streaming service on common Web servers. Evaluation of Hyper-Proxy in both global Internet and local network environments shows that it can provide satisfactory streaming performance to clients while maintaining good cache performance. Finally, to further improve streaming delivery efficiency, we propose a group of Shared Running Buffer (SRB) based proxy caching techniques to effectively utilize the proxy's memory. The SRB algorithms can significantly reduce the media server's and proxy's load and network traffic, and relieve the bottlenecks of disk and network bandwidth.

    The contributions of this dissertation are threefold: (1) we have studied several critical performance trade-offs and provided insights into Internet media content caching and delivery, an understanding that further leads us to an effective streaming system optimization model; (2) we have designed and evaluated several efficient algorithms to support Internet streaming content delivery, including segment caching, segment prefetching, and memory locality exploitation for streaming; and (3) having addressed several system challenges, we have successfully implemented a real streaming proxy system and deployed it in a large industrial enterprise.
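
    A toy sketch in the spirit of the segment-based proxy model: media objects are split into fixed-length segments, the first segments are pinned to cut startup latency, and the next segment is prefetched while the current one streams. The pinning and prefetch policy here is a hypothetical illustration, not Hyper-Proxy's or SRB's actual algorithm.

```python
from collections import OrderedDict

class SegmentCache:
    """Minimal segment-based proxy cache sketch (illustrative policy)."""

    def __init__(self, capacity_segments, pinned_prefix=2):
        self.capacity = capacity_segments
        self.pinned_prefix = pinned_prefix
        self.cache = OrderedDict()  # (media_id, seg_no) -> payload, LRU order

    def get(self, media_id, seg_no, fetch_from_origin):
        key = (media_id, seg_no)
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh LRU position
            payload = self.cache[key]
        else:
            payload = self.cache[key] = fetch_from_origin(media_id, seg_no)
            self._evict()
        # Prefetch the next segment so it arrives while this one plays.
        nxt = (media_id, seg_no + 1)
        if nxt not in self.cache:
            self.cache[nxt] = fetch_from_origin(media_id, seg_no + 1)
            self._evict()
        return payload

    def _evict(self):
        # Evict least-recently-used segments, but never the startup prefix.
        for key in list(self.cache):
            if len(self.cache) <= self.capacity:
                break
            if key[1] >= self.pinned_prefix:
                del self.cache[key]
```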

    Improving the SLLC Efficiency by exploiting reuse locality and adjusting prefetch

    From smartphones to laptops, electronic systems containing chip multiprocessors (CMPs) are overwhelmingly present in our everyday lives. CMPs contain several cores, or CPUs, that must be fed with data from memory, but the speed at which the cores need data is much higher than the speed at which memory can deliver it. In fact, this gap has been growing practically since the day both devices were conceived, and the resulting difference in performance has come to be called "the memory gap". While the gap grew, programming languages offered programmers memory models with a practically infinite address space accessed, moreover, instantaneously. But the size of any hardware structure is intimately related to its access time: the larger the structure, the longer the access. To resolve this apparent contradiction, computer architects placed intermediate memories between the CPUs and the large yet slow main memory. These intermediate memories are called cache memories, or simply caches. Because of the large difference between processor and main-memory speed, today's CMPs are equipped with a cache hierarchy of two or three levels. The caches closest to the processor contain only a few kilobytes (between 4 and 64) accessible in one or a few clock cycles, whereas those furthest away may contain several megabytes and take several tens of cycles to access. Running programs exhibit a property called locality, expressed along the spatial and temporal axes. Temporal locality says that a program will reuse data it used recently, and the more recently it used them, the more likely it is to use them again; spatial locality says that a program tends to use data that are close in the memory space to data it used recently. Caches have traditionally been designed to exploit locality: temporal locality through a suitable replacement policy, and spatial locality by having each cache block hold several data words. An additional way to exploit more spatial locality is the technique called prefetching.

    The replacement policy has a critical influence on the cache hit rate. In a CMP with a cache hierarchy, temporal locality is exploited in the levels closest to the cores, so many of the blocks inserted into the shared last-level cache (SLLC) are used only once: they experience no further hits for as long as they stay in the SLLC. However, blocks that do experience a hit in the SLLC will usually experience many more. Basing replacement decisions on the possible exploitation of temporal locality is therefore an invalid assumption for the SLLC; this behaviour indicates that the SLLC replacement policy should instead be based on reuse. Hardware prefetching aims to load data into the cache before the processor requests them, and its effectiveness at reducing the average memory access latency has been widely demonstrated. Prefetching works especially well in the memory hierarchies of uniprocessor systems, where there is a single data flow between processor and memory. However, when prefetching is used in a multiprocessor system where different applications run at the same time, the prefetches associated with one core may interfere with the data another core has loaded into the cache, evicting another application's contents and harming its performance. A mechanism is therefore needed to regulate the prefetching associated with each core, with the goal of improving overall system performance. Every SLLC miss causes an access to main memory, which sits off-chip and is built from DRAM chips; both factors increase the access latency added to every request that misses in the SLLC, penalizing the average memory access latency. The SLLC hit rate is thus a critical factor in achieving an optimal average memory access latency.

    This thesis focuses on the efficiency of the two aspects discussed above: the efficiency of prefetching and the efficiency of the replacement policy. Its main contributions are the following: 1) We state a property called reuse locality: i) cache blocks that have been used more than once have a high probability of being used many more times in the future, and ii) recently reused blocks are more useful than blocks reused longer ago. We argue in this thesis that the SLLC access pattern exhibits reuse locality. 2) We propose two replacement algorithms able to exploit reuse locality: Least-Recently Reused (LRR) and Not-Recently Reused (NRR). They are modifications of two well-known algorithms, Least-Recently Used (LRU) and Not-Recently Used (NRU), which were designed to exploit temporal locality, whereas ours exploit reuse locality. The proposed modifications add no hardware overhead over the base algorithms, and we show throughout the thesis that our algorithms consistently outperform the originals. 3) We propose a novel SLLC design called the Reuse Cache, in which the tag and data arrays are decoupled and only blocks that have shown reuse are stored in the data array; the tag array is used to detect reuse and to maintain coherence. This structure drastically reduces the size of the data array. As an example, a Reuse Cache with a tag array equivalent to that of a conventional 4 MB cache and a 1 MB data array has the same average performance as a conventional 8 MB cache, with a storage saving of around 84%. 4) We propose a low-cost controller called ABS that adjusts the prefetch aggressiveness associated with each core of a CMP with the aim of improving overall system performance. The controller operates in isolation in each bank of the SLLC and collects local metrics; to optimize global system performance it searches for the optimal combination of per-core prefetch aggressiveness values, inferring that combination with a hill-climbing search strategy.
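
    A minimal sketch of the NRR idea from contribution 2, for a single cache set: like NRU it keeps one status bit per block, but the bit is set on a hit (reuse) rather than on insertion, so never-reused blocks remain the preferred victims. The class and method names are assumptions for illustration, not the thesis's hardware description.

```python
class NRRSet:
    """Not-Recently Reused (NRR) replacement sketch for one SLLC set."""

    def __init__(self, ways):
        self.blocks = [None] * ways   # tag per way
        self.reused = [False] * ways  # NRR status bit per way

    def access(self, tag):
        if tag in self.blocks:                  # hit: the block shows reuse
            self.reused[self.blocks.index(tag)] = True
            return "hit"
        way = self._victim()
        self.blocks[way] = tag                  # insert with reuse bit clear,
        self.reused[way] = False                # unlike NRU, which sets it
        return "miss"

    def _victim(self):
        for way, (blk, r) in enumerate(zip(self.blocks, self.reused)):
            if blk is None or not r:            # prefer a never-reused block
                return way
        # Every block was reused: clear all bits (as NRU does) and retry.
        self.reused = [False] * len(self.reused)
        return 0
```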

    Queuing Modelling and Performance Analysis of Content Transfer in Information Centric Networks

    With the rapid development of multimedia services and wireless technology, new generations of network traffic such as short-form video and live streaming have put tremendous pressure on the current network infrastructure. To meet the high-bandwidth, low-latency needs of this traffic, the focus of Internet architecture has moved from host-centric end-to-end communication to requester-driven content retrieval. This shift has motivated the development of Information-Centric Networking (ICN), a promising new paradigm for the future Internet. ICN aims to improve information retrieval on the Internet by identifying and routing data using unified names. In-network caching and the pending interest table (PIT) are two key features of ICN, designed to handle bulk data dissemination and retrieval efficiently and to reduce bandwidth consumption. Performance analysis has been, and continues to be, a key research interest in ICN.

    This thesis starts with the evaluation of content delivery delay in ICN, whose main components are propagation delay, transmission delay, processing delay and queueing delay. To characterize these components, queueing network theory is combined with the cache miss rate to model the content delivery time in ICN, and different topologies and network conditions are taken into account to evaluate content transfer performance. ICN is intrinsically compatible with wireless networks; to evaluate content transfer there, an analytical model of the mean service time under consumer and provider mobility has been proposed. The accuracy of the analytical model is validated through extensive simulation experiments. Finally, the model is used to evaluate the impact of key metrics, such as cache size, content size and content popularity, on the performance of the PIT and of content transfer in ICN.

    The PIT is an essential component of the ICN forwarding plane, responsible for stateful routing; it also aggregates identical interests to alleviate request flooding and network congestion, an aggregation feature that improves content delivery performance. An analytical model characterizing the impact of the PIT on content delivery time therefore allows a more precise evaluation of content transfer performance. At the same time, if the PIT is not properly sized, the interest drop rate may be too high, reducing the quality of service for consumers as their requests have to be retransmitted; and the PIT is a costly resource, since it must operate at wire speed in the forwarding plane. Hence, to ensure that the interest drop rate stays below the requirement, an analytical model of PIT occupancy has been developed to determine the minimum PIT size.

    In this thesis, the proposed analytical models are used to efficiently and accurately evaluate the performance of ICN content transfer and to investigate the key components of the ICN forwarding plane. Leveraging the insights discovered by these models, the minimal PIT size and a proper interest timeout can be determined to enhance ICN performance. To extend the outcomes achieved in the thesis, several interesting yet challenging research directions are also pointed out.
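
    As a toy illustration of how queueing theory coordinates with the cache miss rate in such a model, the sketch below treats the edge router and the content server as M/M/1 queues with mean sojourn time 1/(mu - lambda): every request pays the router's delay, and only the fraction p_miss that misses the in-network cache also pays the propagation round trip and the server's delay. All rates and delays are invented for the example; the thesis's actual queueing-network model is more elaborate.

```python
def mm1_sojourn(arrival_rate, service_rate):
    """Mean sojourn time (queueing + service) of an M/M/1 queue: 1/(mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def mean_delivery_time(p_miss, lam, mu_router, mu_server, prop_delay):
    """Mean content delivery time with an in-network cache of miss rate p_miss."""
    t_router = mm1_sojourn(lam, mu_router)
    t_server = mm1_sojourn(p_miss * lam, mu_server)  # only misses reach the server
    return t_router + p_miss * (2 * prop_delay + t_server)

# Illustrative numbers: 80 req/s arriving, 30% cache misses, 10 ms propagation.
print(mean_delivery_time(p_miss=0.3, lam=80, mu_router=100,
                         mu_server=50, prop_delay=0.010))
```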