The K-Server Dual and Loose Competitiveness for Paging
This paper has two results. The first is based on the surprising observation
that the well-known ``least-recently-used'' paging algorithm and the
``balance'' algorithm for weighted caching are linear-programming primal-dual
algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that
generalizes them both and has an optimal performance guarantee for weighted
caching.
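The abstract gives no pseudocode; a minimal sketch of the Greedy-Dual rule for weighted caching (assuming uniform page sizes; the class and variable names are mine, not the paper's) could look like:

```python
class GreedyDualCache:
    """Greedy-Dual for weighted caching: every cached page keeps a credit.
    On a fault, all credits drop by the minimum remaining credit and some
    zero-credit page is evicted; the new page enters with credit equal to
    its retrieval cost.  On a hit, the page's credit is restored."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.credit = {}  # page -> remaining credit

    def request(self, page, cost):
        if page in self.credit:
            self.credit[page] = cost  # hit: restore full credit
            return "hit"
        if len(self.credit) >= self.capacity:
            delta = min(self.credit.values())
            for p in self.credit:
                self.credit[p] -= delta  # charge every page uniformly
            victim = next(p for p, c in self.credit.items() if c == 0)
            del self.credit[victim]
        self.credit[page] = cost  # fault: fetch at the given cost
        return "fault"
```

With unit costs and suitable tie-breaking, the credit mechanism behaves like LRU, which is the sense in which Greedy-Dual generalizes it; with general costs it plays the role of the Balance algorithm for weighted caching.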
For the second result, the paper presents empirical studies of paging
algorithms, documenting that in practice, on ``typical'' cache sizes and
sequences, the performance of paging strategies is much better than their
worst-case analyses in the standard model suggest. The paper then presents
theoretical results that support and explain this. For example: on any input
sequence, for almost all cache sizes, either the performance guarantee of
least-recently-used is O(log k) or the fault rate (in an absolute sense) is
insignificant.
Both of these results are strengthened and generalized in ``On-line File
Caching'' (1998).
Comment: conference version: "On-Line Caching as Cache Size Varies", SODA
(1991).
A Deep Reinforcement Learning-Based Framework for Content Caching
Content caching at the edge nodes is a promising technique to reduce the data
traffic in next-generation wireless networks. Inspired by the success of Deep
Reinforcement Learning (DRL) in solving complicated control problems, this work
presents a DRL-based framework with Wolpertinger architecture for content
caching at the base station. The proposed framework is aimed at maximizing the
long-term cache hit rate, and it requires no knowledge of the content
popularity distribution. To evaluate the proposed framework, we compare the
performance with other caching algorithms, including Least Recently Used (LRU),
Least Frequently Used (LFU), and First-In First-Out (FIFO) caching strategies.
Meanwhile, since the Wolpertinger architecture can effectively limit the action
space size, we also compare the performance with Deep Q-Network to identify the
impact of dropping a portion of the actions. Our results show that the proposed
framework can achieve an improved short-term cache hit rate and an improved and
stable long-term cache hit rate in comparison with the LRU, LFU, and FIFO schemes.
Additionally, the performance is shown to be competitive in comparison to Deep
Q-learning, while the proposed framework can provide significant savings in
runtime.
Comment: 6 pages, 3 figures.
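The three baselines the framework is compared against are standard; a compact, illustrative simulator for all of them (my own sketch, not the authors' evaluation code) might look like:

```python
from collections import Counter, OrderedDict, deque

def simulate(policy, capacity, requests):
    """Return the cache hit rate of LRU, LFU, or FIFO on a request trace."""
    cache, hits = set(), 0
    order = deque()            # FIFO arrival order
    recency = OrderedDict()    # LRU recency order
    freq = Counter()           # LFU frequency counts
    for item in requests:
        freq[item] += 1
        if item in cache:
            hits += 1
            if policy == "LRU":
                recency.move_to_end(item)  # mark as most recently used
            continue
        if len(cache) >= capacity:
            if policy == "LRU":
                victim, _ = recency.popitem(last=False)  # least recently used
            elif policy == "FIFO":
                victim = order.popleft()                 # oldest arrival
            else:  # LFU: evict the least frequently requested item
                victim = min(cache, key=lambda x: freq[x])
            cache.discard(victim)
        cache.add(item)
        order.append(item)
        recency[item] = True
    return hits / len(requests)
```

On the same trace the three policies generally yield different hit rates, which is exactly the gap a learned policy tries to close.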
On-Line File Caching
In the on-line file-caching problem, the input is a sequence of
requests for files, given on-line (one at a time). Each file has a non-negative
size and a non-negative retrieval cost. The problem is to decide which files to
keep in a fixed-size cache so as to minimize the sum of the retrieval costs for
files that are not in the cache when requested. The problem arises in web
caching by browsers and by proxies. This paper describes a natural
generalization of LRU called Landlord and gives an analysis showing that it has
an optimal performance guarantee (among deterministic on-line algorithms).
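The Landlord algorithm can be sketched as follows, assuming the usual formulation with per-file sizes and retrieval costs (the names and the zero-credit tolerance are mine):

```python
class Landlord:
    """Landlord for file caching: each cached file holds credit.  To make
    room, all files are charged 'rent' proportional to their size until
    some file's credit reaches zero and it can be evicted.  Assumes every
    requested file fits in the cache on its own."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.files = {}  # name -> [size, credit]
        self.used = 0

    def request(self, name, size, cost):
        if name in self.files:
            self.files[name][1] = cost  # hit: renew credit (up to cost)
            return 0.0
        while self.used + size > self.capacity:
            delta = min(cr / sz for sz, cr in self.files.values())
            for entry in self.files.values():
                entry[1] -= delta * entry[0]  # rent proportional to size
            victim = next(n for n, (sz, cr) in self.files.items()
                          if cr <= 1e-12)
            self.used -= self.files[victim][0]
            del self.files[victim]
        self.files[name] = [size, cost]
        self.used += size
        return cost  # fault: pay the retrieval cost
```

With unit sizes this collapses to the Greedy-Dual rule for weighted caching, matching the abstract's claim that Landlord generalizes LRU-style strategies.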
The paper also gives an analysis of the algorithm in a so-called ``loosely''
competitive model, showing that on a ``typical'' cache size, either the
performance guarantee is O(1) or the total retrieval cost is insignificant.
Comment: ACM-SIAM Symposium on Discrete Algorithms (1998).
A Targeted Denial of Service Attack on Data Caching Networks
With the rise of data exchange over the Internet, information-centric networks have become a popular research topic in computing. One major research topic on Information Centric Networks (ICN) is the use of data caching to increase network performance. However, research in the security concerns of data caching networks is lacking. One example of a data caching network can be seen using a Mobile Ad Hoc Network (MANET).
Recently, a study has shown that it is possible to infer military activity through cache behavior, and this observation serves as the basis for a formulated denial-of-service (DoS) attack against networks that use data caching. Current security issues with data caching networks are discussed, including possible prevention techniques and methods. A targeted data-cache DoS attack is developed and tested in an ICN simulator. The attacker's goal is to fill node caches with unpopular content, thus rendering the caches useless. The attack consists of a malicious node that requests unpopular content at intervals timed so that the content has just been purged from the existing cache, corrupting as many nodes as possible without increasing the chance of detection. The decreased network throughput and increased delay also lead to higher power consumption on the mobile nodes, amplifying the effects of the DoS attack.
Various caching policies are evaluated in an ICN simulator program designed to show network performance using three common caching policies and various cache sizes. The ICN simulator is developed in Java and tested on a simulated network. Baseline data are collected and then compared to data collected after the attack. Other possible security concerns with data caching networks are also discussed, including possible smarter attack techniques and methods.
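The cache-pollution strategy described above can be illustrated on a toy single-node LRU cache; the traffic model and all numbers below are illustrative assumptions, not the paper's simulator:

```python
import random
from collections import OrderedDict

def run(requests, capacity=8):
    """Serve a request trace through one LRU-caching node; return hit rate."""
    cache, hits = OrderedDict(), 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[item] = True
    return hits / len(requests)

random.seed(1)
popular = [f"p{random.randint(0, 9)}" for _ in range(2000)]

# Attacker interleaves requests for distinct unpopular items, evicting
# the popular working set and making the cache useless to real users.
attack = []
for i, item in enumerate(popular):
    attack.append(item)
    attack.append(f"junk{i % 100}")  # cycle of 100 unpopular names

baseline = run(popular)
polluted = run(attack)
print(baseline, polluted)  # hit rate drops sharply under pollution
```

The same effect scales to a network of caching nodes: every polluted node forwards more requests upstream, increasing delay and power consumption exactly as the abstract argues.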
An approximate analysis of heterogeneous and general cache networks
In this paper, we propose approximate models to assess the performance of a cache network with arbitrary topology where nodes run the Least Recently Used (LRU), First-In First-Out (FIFO), or Random (RND) replacement policies on arbitrary size caches. Our model takes advantage of the notions of cache characteristic time and Time-To-Live (TTL)-based cache to develop a unified framework for approximating metrics of interest of interconnected caches. Our approach is validated through event-driven simulations and, when possible, compared to the existing a-NET model [23].
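The "cache characteristic time" this model builds on is commonly obtained via the Che approximation: solve sum_i (1 - exp(-lambda_i * T)) = C for T, then take 1 - exp(-lambda_i * T) as item i's hit probability. A sketch under independent-reference (IRM) traffic, with names and rates as illustrative assumptions:

```python
import math

def characteristic_time(rates, capacity, tol=1e-9):
    """Solve sum_i (1 - exp(-rate_i * T)) = capacity for T by bisection
    (Che's approximation for an LRU cache under IRM traffic)."""
    def occupancy(T):
        return sum(1 - math.exp(-r * T) for r in rates)
    lo, hi = 0.0, 1.0
    while occupancy(hi) < capacity:
        hi *= 2  # grow the bracket until it contains the root
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if occupancy(mid) < capacity:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def hit_probabilities(rates, capacity):
    """Per-item hit probability under the Che approximation."""
    T = characteristic_time(rates, capacity)
    return [1 - math.exp(-r * T) for r in rates]

# Example: Zipf-like request rates over 100 items, cache of 10 slots.
rates = [1.0 / (i + 1) for i in range(100)]
probs = hit_probabilities(rates, 10)
```

Treating each LRU cache as a TTL cache with timeout T is what lets the paper chain such per-node approximations across an arbitrary topology.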
Distribuição de conteúdos over-the-top multimédia em redes sem fios (Over-the-top multimedia content distribution in wireless networks)
Master's in Electronics and Telecommunications Engineering
Nowadays, the Internet is considered an essential good, due to the fact that
there is a need to communicate, but also to access and share information.
With the increasing use of the Internet, allied with the increased bandwidth
provided by telecommunication operators, conditions have been created for the
growth of Over-the-Top (OTT) Multimedia Services, demonstrated by the
huge success of Netflix and Youtube.
The OTT service encompasses the delivery of video and audio through the
Internet without direct control of telecommunication operators, presenting
an attractive low-cost and profitable proposal.
Although the OTT delivery is captivating, it has some limitations. In order
to increase the number of clients and keep the high Quality of Experience
(QoE) standards, an enhanced architecture for the content distribution network
is needed. Thus, the enhanced architecture needs to provide a good quality
for the user, in an efficient and scalable way, supporting the requirements
imposed by future mobile networks.
This dissertation aims to approach content distribution in wireless networks
through a distributed cache model among the several access points,
thus increasing the cache size and decreasing the load on the upstream
servers. The proposed architecture was tested in three different scenarios:
the consumer is at home with a fixed access; the consumer is mobile between
several access points in the street; and the consumer is on a high-speed train.
Several solutions were evaluated, such as Redis2, Cachelot and Memcached,
to serve as caches, along with the evaluation of several proxy servers in order
to fulfill the required features. Also, two distributed algorithms were tested,
namely Consistent and Rendezvous Hashing.
Moreover, a prefetching mechanism was integrated, which consists of
inserting content into the caches before it is requested by the consumers.
In the end, it was verified that the distributed model with prefetching
improved the consumers' QoE, as well as reducing the load on the upstream
servers.
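Of the two placement algorithms evaluated, Rendezvous (highest-random-weight) hashing is the easier to sketch; the node names below are illustrative and the hash choice is an assumption:

```python
import hashlib

def rendezvous_node(key, nodes):
    """Rendezvous (highest-random-weight) hashing: every node scores the
    key independently and the highest score wins, so removing one node
    only remaps the keys that node owned."""
    def score(node):
        digest = hashlib.sha256(f"{node}|{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(nodes, key=score)

access_points = ["ap-1", "ap-2", "ap-3"]
owner = rendezvous_node("video-chunk-42", access_points)
# Removing any *other* access point leaves this key's placement unchanged,
# which is the stability property that matters when caches join and leave.
```

Consistent hashing achieves the same minimal-disruption property by placing nodes and keys on a hash ring; rendezvous hashing trades the ring bookkeeping for one score computation per node.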