937 research outputs found
Distribuição de conteúdos over-the-top multimédia em redes sem fios
Master's in Electronics and Telecommunications Engineering

Nowadays, the Internet is considered an essential good, as there is a constant need not only to communicate but also to access and share content.
With the increasing use of the Internet, combined with the increased bandwidth provided by telecommunication operators, excellent conditions have been created for the growth of Over-the-Top (OTT) multimedia services, as demonstrated by the huge success of Netflix and YouTube.
The OTT service encompasses the delivery of video and audio over the Internet without direct control by telecommunication operators, presenting an attractive low-cost and profitable proposition.
Although OTT delivery is captivating, it has some limitations. In order to increase the number of clients and keep high Quality of Experience (QoE) standards, an enhanced architecture for the content distribution network is needed. This architecture must provide good quality to the user, in an efficient and scalable way, supporting the requirements imposed by current and future mobile networks.
This dissertation addresses content distribution in wireless networks through a cache model distributed among several access points, thus increasing the overall cache size and decreasing the load on the upstream servers. The proposed architecture was tested in three different scenarios: the consumer is at home with a fixed access, the consumer moves between several access points in the street, and the consumer is in a high-speed train.
Several solutions were evaluated to serve as caches, such as Redis2, Cachelot and Memcached, along with several proxy servers, in order to fulfill the required features. Two content distribution algorithms were also tested, namely Consistent Hashing and Rendezvous Hashing.
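Both hashing schemes map a content name to the access-point cache responsible for it without any central directory. As an illustration, here is a minimal rendezvous (highest-random-weight) hashing sketch; the cache names and key format are hypothetical, not taken from the dissertation:

```python
import hashlib

def rendezvous_pick(key: str, caches: list[str]) -> str:
    """Pick the cache for `key`: every cache scores hash(cache, key); highest wins."""
    def score(cache: str) -> int:
        digest = hashlib.sha256(f"{cache}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(caches, key=score)

caches = ["ap-1", "ap-2", "ap-3"]           # hypothetical access-point caches
owner = rendezvous_pick("/videos/clip42/segment-007", caches)
# If the owning cache fails, only its keys move: every other key keeps its
# highest-scoring node, which is why rendezvous hashing suits churning APs.
survivors = [c for c in caches if c != owner]
assert rendezvous_pick("/videos/clip42/segment-007", survivors) in survivors
```

The same property holds for consistent hashing; rendezvous hashing simply trades the ring data structure for one hash evaluation per node.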
Moreover, a prefetching mechanism was integrated, which consists of placing content in the caches before it is requested by the consumers.
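The prefetching idea can be sketched in a few lines: after a segment is served, the next segment of the stream is fetched into the cache before any consumer asks for it. This is a minimal illustration, not the dissertation's implementation; the sequential segment naming and the small LRU cache are assumptions:

```python
from collections import OrderedDict

class SegmentCache:
    """Tiny LRU cache of media segments (illustrative only)."""
    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def put(self, key: str, data: bytes) -> None:
        self.store[key] = data
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)      # evict least recently used

    def get(self, key: str):
        if key in self.store:
            self.store.move_to_end(key)
            return self.store[key]
        return None

def prefetch_next(cache: SegmentCache, current: str, fetch) -> None:
    """Guess the next segment of a sequentially named stream ("clip-001",
    "clip-002", ...) and cache it ahead of demand via the `fetch` callback."""
    name, num = current.rsplit("-", 1)
    nxt = f"{name}-{int(num) + 1:03d}"
    if cache.get(nxt) is None:
        cache.put(nxt, fetch(nxt))              # upstream fetch happens early
```

After serving `clip-001`, calling `prefetch_next` turns the consumer's request for `clip-002` into a cache hit instead of an upstream fetch.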
In the end, it was verified that the distributed model with prefetching improved the consumers' QoE and reduced the load on the upstream servers.
Building Internet caching systems for streaming media delivery
The proxy has been widely and successfully used to cache the static Web objects fetched by a client so that subsequent clients requesting the same Web objects can be served directly from the proxy instead of other sources far away, thus reducing the server's load, the network traffic and the client response time. However, with the dramatic increase of streaming media objects emerging on the Internet, the existing proxy cannot efficiently deliver them due to their large sizes and client real-time requirements.
In this dissertation, we design, implement, and evaluate cost-effective and high-performance proxy-based Internet caching systems for streaming media delivery. Addressing the conflicting performance objectives for streaming media delivery, we first propose an efficient segment-based streaming media proxy system model. This model has guided us to design a practical streaming proxy, called Hyper-Proxy, aiming at delivering the streaming media data to clients with minimum playback jitter and a small startup latency, while achieving high caching performance. Second, we have implemented Hyper-Proxy by leveraging the existing Internet infrastructure. Hyper-Proxy enables the streaming service on common Web servers. The evaluation of Hyper-Proxy on the global Internet environment and the local network environment shows it can provide satisfying streaming performance to clients while maintaining good cache performance. Finally, to further improve streaming delivery efficiency, we propose a group of Shared Running Buffers (SRB) based proxy caching techniques to effectively utilize the proxy's memory. SRB algorithms can significantly reduce the media server/proxy load and network traffic and relieve the bottlenecks of the disk bandwidth and the network bandwidth.
The contributions of this dissertation are threefold: (1) we have studied several critical performance trade-offs and provided insights into Internet media content caching and delivery; our understanding further leads us to establish an effective streaming system optimization model; (2) we have designed and evaluated several efficient algorithms to support Internet streaming content delivery, including segment caching, segment prefetching, and memory locality exploitation for streaming; (3) having addressed several system challenges, we have successfully implemented a real streaming proxy system and deployed it in a large industrial enterprise.
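Segment-based caching stores a media object as independently cacheable pieces, so the proxy can keep the early segments that govern startup latency without committing space to whole objects. One common layout uses exponentially growing segments; the sketch below illustrates that general idea under assumed parameters, not Hyper-Proxy's actual scheme:

```python
def split_into_segments(size: int, base: int = 4) -> list[int]:
    """Split an object of `size` units into segments of sizes base, 2*base,
    4*base, ... so the startup-critical prefix is made of small, cheap-to-keep
    segments while the tail uses few large ones (base=4 is an assumption)."""
    segments, length = [], base
    while size > 0:
        take = min(length, size)    # last segment holds whatever remains
        segments.append(take)
        size -= take
        length *= 2                 # each segment doubles the previous one
    return segments

print(split_into_segments(30))      # e.g. a 30-unit object -> [4, 8, 16, 2]
```

A proxy can then apply a different caching priority per segment, e.g. always keep segment 0 of popular objects to hide startup latency, while evicting tail segments first.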
Hadoop-Oriented SVM-LRU (H-SVM-LRU): An Intelligent Cache Replacement Algorithm to Improve MapReduce Performance
Modern applications can generate a large amount of data from different
sources with high velocity, a combination that is difficult to store and
process via traditional tools. Hadoop is one framework that is used for the
parallel processing of a large amount of data in a distributed environment,
however, various challenges can lead to poor performance. Two particular issues
that can limit performance are the high access time for I/O operations and the
recomputation of intermediate data. The combination of these two issues can
result in resource wastage. In recent years, there have been attempts to
overcome these problems by using caching mechanisms. Due to cache space
limitations, it is crucial to use this space efficiently and avoid cache
pollution (the cache contains data that is not used in the future). We propose
Hadoop-oriented SVM-LRU (H-SVM-LRU) to improve Hadoop performance. For this
purpose, we use an intelligent cache replacement algorithm, SVM-LRU, that
combines the well-known LRU mechanism with a machine learning algorithm, SVM,
to classify cached data into two groups based on their future usage.
Experimental results show a significant decrease in execution time as a result
of an increased cache hit ratio, leading to a positive impact on Hadoop
performance.
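The idea behind SVM-LRU can be sketched as an LRU queue whose eviction step skips over entries a classifier predicts will be reused. In the paper the classifier is a trained SVM; the sketch below substitutes a pluggable predicate so it stays self-contained, and is an illustration of the idea rather than the authors' implementation:

```python
from collections import OrderedDict
from typing import Callable

class ClassifierLRU:
    """LRU cache whose eviction skips entries predicted to be reused.
    `will_reuse` stands in for the trained SVM classifier of H-SVM-LRU."""
    def __init__(self, capacity: int, will_reuse: Callable[[str], bool]):
        self.capacity = capacity
        self.will_reuse = will_reuse
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str):
        if key in self.store:
            self.store.move_to_end(key)     # refresh recency on a hit
            return self.store[key]
        return None

    def put(self, key: str, value: bytes) -> None:
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            # Evict the least-recent entry NOT predicted for future use,
            # so "hot" intermediate data survives plain-LRU eviction.
            for k in self.store:            # iterates oldest -> newest
                if k != key and not self.will_reuse(k):
                    del self.store[k]
                    return
            self.store.popitem(last=False)  # all predicted useful: plain LRU
```

This avoids the cache-pollution failure mode the abstract describes: data with no future use is evicted first, regardless of how recently it was touched.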
Study and analysis of mobility, security, and caching issues in CCN
The existing architecture of the Internet is IP-centric and has so far coped with the needs of Internet users. With recent advancements and emerging technologies, ubiquitous connectivity has become a primary focus. Increasing demand for location-independent content raised the requirement for a new architecture, and hence it became a research challenge. The Content Centric Networking (CCN) paradigm emerges as an alternative to the IP-centric model and is based on name-based forwarding and in-network data caching. It is likely to address certain challenges that have not been solved by IP-based protocols in wireless networks. Three important factors that require significant research in CCN are mobility, security, and caching. While a number of studies have been conducted on CCN and its proposed technologies, none of them, to the best of our knowledge, targets all three significant research directions in a single article. This paper is an attempt to discuss the three factors together, each within the context of the others. We discuss and analyze the basics of CCN principles with the distributed properties of caching, mobility, and secure access control. Different comparisons are made to examine the strengths and weaknesses of each aforementioned aspect in detail. The final discussion aims to identify open research challenges and some future trends for large-scale CCN deployment.
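The in-network caching principle of CCN can be sketched simply: a node answers an Interest from its own Content Store when possible, otherwise forwards it upstream and caches the returned Data on the reverse path. The minimal sketch below illustrates only that principle; the names and two-node topology are made up, and PIT/FIB handling is omitted:

```python
class CCNNode:
    """Minimal CCN-style node: Content Store lookup, then upstream forwarding,
    caching returned Data on the way back (illustrative sketch only)."""
    def __init__(self, upstream=None):
        self.content_store: dict[str, bytes] = {}
        self.upstream = upstream

    def publish(self, name: str, data: bytes) -> None:
        self.content_store[name] = data

    def request(self, name: str):
        if name in self.content_store:       # cache hit: served in-network
            return self.content_store[name]
        if self.upstream is None:            # no route: Interest unsatisfied
            return None
        data = self.upstream.request(name)   # forward the Interest upstream
        if data is not None:
            self.content_store[name] = data  # cache Data on the return path
        return data

producer = CCNNode()
producer.publish("/videos/a/1", b"payload")
edge = CCNNode(upstream=producer)
edge.request("/videos/a/1")                  # first request travels upstream
assert "/videos/a/1" in edge.content_store   # subsequent requests hit locally
```

Because content is addressed by name rather than by host, any node holding a copy may answer, which is exactly why caching, mobility, and secure access control become intertwined design concerns.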