661 research outputs found
Adaptive Prefetching for Device-Independent File I/O
Device-independent I/O has been a holy grail for operating system designers since the early days of UNIX. Unfortunately, existing operating systems fall short of this goal for multimedia applications. Techniques such as caching and sequential read-ahead can help mask I/O latency in some cases, but in others they increase latency and add substantial jitter. Multimedia applications, such as video players, are sensitive to vagaries in performance, since I/O latency and jitter affect the quality of presentation. Our solution uses adaptive prefetching to reduce both latency and jitter. Applications submit file access plans to the prefetcher, which then generates I/O requests to the operating system and manages the buffer cache to isolate the application from variations in device performance. Our experiments show that device independence can be achieved: an MPEG video player sees the same latency when reading from a local disk or an NFS server. Moreover, our approach reduces jitter substantially.
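The plan-driven prefetching the abstract describes can be sketched as follows. All names and the latency thresholds are illustrative assumptions, not the paper's implementation: the application hands over an ordered access plan, and the prefetcher keeps a lookahead window of reads warm, deepening the window when observed fetch latency rises and shrinking it when the device is fast.

```python
from collections import deque

class PlanPrefetcher:
    """Minimal sketch of plan-driven adaptive prefetching (hypothetical API)."""

    def __init__(self, fetch, depth=2, max_depth=16):
        self.fetch = fetch          # callable: block_id -> data (assumed)
        self.depth = depth          # current lookahead depth
        self.max_depth = max_depth
        self.plan = deque()         # application-submitted access plan
        self.cache = {}             # prefetched blocks

    def submit_plan(self, block_ids):
        self.plan.extend(block_ids)

    def read(self, block_id, observed_latency_ms=0.0):
        # Adapt the lookahead: a slow device warrants a deeper pipeline.
        if observed_latency_ms > 20 and self.depth < self.max_depth:
            self.depth += 1
        elif observed_latency_ms < 5 and self.depth > 1:
            self.depth -= 1
        # Prefetch the next `depth` plan entries that are not yet cached.
        for b in list(self.plan)[: self.depth]:
            if b not in self.cache:
                self.cache[b] = self.fetch(b)
        if self.plan and self.plan[0] == block_id:
            self.plan.popleft()
        data = self.cache.pop(block_id, None)
        if data is None:            # plan miss: fall back to a synchronous read
            data = self.fetch(block_id)
        return data
```

Because the consumer only ever touches the cache, variations in device latency are absorbed by the lookahead window rather than surfacing at the application.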
Hierarchical clustered register file organization for VLIW processors
Technology projections indicate that wire delays will become one of the biggest constraints in future microprocessor designs. To avoid long wire delays, and therefore long cycle times, processor cores must be partitioned into components so that most of the communication is done locally. In this paper, we propose a novel register file organization for VLIW cores that combines clustering with a hierarchical register file organization. Functional units are organized in clusters, each with a local first-level register file. The local register files are connected to a global second-level register file, which provides access to memory. All intercluster communication is done through the second-level register file. This paper also proposes MIRS-HC, a novel modulo scheduling technique that simultaneously performs instruction scheduling, cluster selection, insertion of communication operations, register allocation and spill insertion for the proposed organization. The results show that although more cycles are required to execute applications, the execution time is reduced due to a shorter cycle time. In addition, the combination of clustering and hierarchy provides a larger design exploration space that trades off performance and technology requirements.
QoE of Over-the-Top Multimedia in Wireless Networks
One of the goals of an operator is to improve the Quality of Experience (QoE) of clients in networks where Over-the-top (OTT) content is being delivered. The appearance of services such as YouTube, Netflix and Twitch (YouTube alone receives more than 300 hours of video uploads per minute) brings problems to the existing managed data networks, as well as challenges in solving them. Video traffic corresponds to 75% of all data transmitted on the Internet. Thus, not only has the Internet become the de facto video transmission medium, but overall data traffic also continues to grow exponentially, driven by the desire to consume more content. This thesis presents two model proposals and an architecture that aim to improve the users' quality of experience by predicting, in advance, the amount of video that can be prefetched, so as to optimize delivery efficiency in networks where quality of service cannot be guaranteed. Prefetching is performed at the cache server closest to the client. For that, an Analytic Hierarchy Process (AHP) is used: through a subjective method of attribute comparison, and by applying a weighted function to the measured quality-of-service metrics, the amount to prefetch is obtained. Besides this method, artificial intelligence techniques are also considered. With neural networks, the behavior of OTT networks is learned from more than 14,000 hours of video consumption under different quality conditions, in order to estimate the experience felt and maximize it without degrading normal service delivery. Finally, both methods are evaluated and a proof of concept is carried out with users on a high-speed train.
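The AHP step, as the abstract describes it, reduces to applying a weighted function to normalized QoS metrics to size the prefetch. A minimal sketch, where the metric names, weights and scaling are illustrative assumptions rather than the thesis's actual values:

```python
# AHP-derived priority weights (hypothetical values; in AHP these come from
# a pairwise comparison matrix of the attributes).
AHP_WEIGHTS = {"throughput": 0.5, "latency": 0.3, "loss": 0.2}

def prefetch_seconds(metrics, max_prefetch_s=60):
    """Size the prefetch from QoS metrics normalized to [0, 1].

    A value of 1 means worst network conditions for that metric, so a
    higher weighted score means more video should be staged at the edge
    cache ahead of playback.
    """
    score = sum(AHP_WEIGHTS[k] * metrics[k] for k in AHP_WEIGHTS)
    return round(score * max_prefetch_s, 1)
```

For example, uniformly bad conditions (all metrics at 1.0) would request the full 60 seconds of lookahead, while a healthy network requests little or none.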
Performance Improvement Studies in Accessing Web Documents
As the World Wide Web has become the standard interface for interactive information services over the Internet, the perceived latency of WWW interaction is becoming an important and crucial issue. Currently, Web users often experience response delays of several seconds or even longer to non-local Web sites, especially when the pages they attempt to access are very popular. For the WWW to be acceptable for general daily use, this response delay must be reduced.

The potential solutions to the problem lie in the extensive use of (disk-based) caching and prefetching in the WWW. Both caching and prefetching exploit the patterns and knowledge in Web accesses.

This thesis describes and tests the efficiency of a batch prefetching update (refreshing) in accessing HTTP and FTP documents on the global Internet. The update is scheduled to run at idle time, when traffic is less congested and server activity is low. The batch refreshing effort is fruitful when the refreshed documents are actually requested before they turn stale again. The effectiveness of batch refreshing is verified by running a statistical analysis of the access log files.

In the first part of the study, a proxy server on the LAN of FTMSK, ITM was set up, configured and monitored for the use of 400 users. Access log files were collected and analysed over a period of six months. The analysis results serve as a benchmark for the caching proxy with batch refreshing in the second part of the work.
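The selection step of such a batch refresh can be sketched in a few lines. The entry fields and the popularity threshold below are illustrative assumptions, not the thesis's design: at idle time, the proxy re-fetches only documents that have both expired and proven popular, since refreshing a document nobody re-requests before it turns stale again is wasted work.

```python
def select_for_refresh(cache_entries, now, hit_threshold=2):
    """Pick stale but popular documents for the idle-time batch refresh.

    cache_entries: list of dicts with 'url', 'expires' (timestamp) and
    'hits' (request count) keys; field names are hypothetical. Returns
    the URLs worth re-fetching during the off-peak window.
    """
    return [e["url"] for e in cache_entries
            if e["expires"] <= now and e["hits"] >= hit_threshold]
```

The returned list would then be fetched in one batch while server activity is low, so that the next daytime request hits a fresh cached copy.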
Distribution of Over-the-Top Multimedia Content in Wireless Networks
Nowadays, the Internet is considered an essential good, due to the fact that
there is a constant need to communicate, but also to access and share information. The increasing use of the Internet, together with the increased bandwidth provided by telecommunication operators, has created the conditions for the growth of Over-the-Top (OTT) multimedia services, demonstrated by the huge success of Netflix and Youtube.

OTT services encompass the delivery of video and audio over the Internet without direct control by the telecommunication operators, presenting an attractive low-cost and profitable proposition.

Although OTT delivery is captivating, it has some limitations. In order to increase the number of clients and keep high Quality of Experience (QoE) standards, an enhanced architecture for the content distribution network is needed. This architecture must provide good quality for the user in an efficient and scalable way, supporting the requirements imposed by current and future mobile networks.

This dissertation approaches content distribution in wireless networks through a cache model distributed among several access points, thus increasing the total cache size and decreasing the load on the upstream servers. The proposed architecture was tested in three different scenarios: the consumer is at home with fixed access; the consumer moves between several access points in the street; and the consumer is on a high-speed train.

Several solutions were evaluated to serve as caches, such as Redis2, Cachelot and Memcached, along with several proxy servers, in order to fulfill the required features. Two distribution algorithms were also tested, namely Consistent and Rendezvous Hashing.

Moreover, a prefetching mechanism was integrated, which consists of inserting content into the caches before it is requested by the consumers.

In the end, it was verified that the distributed model with prefetching improved the consumers' QoE and reduced the load on the upstream servers.
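Of the two distribution algorithms the abstract names, Rendezvous (highest-random-weight) Hashing is the simpler to illustrate. Each cache node scores every key and the highest score wins, so adding or removing an access point only relocates the objects that node owned. The node names and hash choice below are illustrative, not taken from the dissertation:

```python
import hashlib

def rendezvous_owner(key, nodes):
    """Return the cache node responsible for `key` under rendezvous hashing.

    Every node computes a deterministic score for the key; the maximum
    wins. Removing a non-owner node cannot change any key's owner, which
    keeps churn low when access points join or leave the distributed cache.
    """
    def score(node):
        digest = hashlib.sha256(f"{node}:{key}".encode()).hexdigest()
        return int(digest, 16)
    return max(nodes, key=score)
```

Consistent Hashing achieves a similar minimal-disruption property by placing nodes and keys on a ring; rendezvous hashing avoids the ring and virtual-node bookkeeping at the cost of scoring every node per lookup.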
Prefetching techniques for client-server object-oriented database systems
The performance of many object-oriented database applications suffers from page fetch latency, which is determined by the expense of disk access. In this work we suggest several prefetching techniques to avoid, or at least reduce, page fetch latency. In practice no prediction technique is perfect and no prefetching technique can entirely eliminate delay due to page fetch latency. Therefore we are interested in the trade-off between the level of accuracy required for obtaining good results in terms of elapsed time reduction and the processing overhead needed to achieve this level of accuracy. If prefetching accuracy is high, the total elapsed time of an application can be reduced significantly; if it is low, many incorrect pages are prefetched and the extra load on the client, network, server and disks decreases overall system performance. Access patterns of object-oriented databases are often complex and usually hard to predict accurately. The …
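The accuracy-versus-overhead trade-off described above can be made concrete with a back-of-envelope model. All parameters and the linear cost model are illustrative assumptions, not the paper's analysis: correct prefetches each hide one page-fetch latency, incorrect ones add load, and prediction itself costs processing time.

```python
def prefetch_net_benefit_ms(pages, accuracy, fetch_latency_ms,
                            wrong_page_cost_ms, prediction_overhead_ms):
    """Rough model of when prefetching pays off (hypothetical parameters).

    pages: number of pages the predictor chooses to prefetch
    accuracy: fraction of prefetched pages actually used, in [0, 1]
    Returns the expected elapsed-time change in ms; positive means
    prefetching helps, negative means it hurts overall performance.
    """
    saved = accuracy * pages * fetch_latency_ms          # latency hidden
    wasted = (1 - accuracy) * pages * wrong_page_cost_ms  # useless I/O load
    return saved - wasted - prediction_overhead_ms
```

Such a model makes the abstract's point visible: at low accuracy the wasted-page term dominates and the net benefit goes negative even before the prediction overhead is counted.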