
    MapReduce analysis for cloud-archived data

    Public storage clouds have become a popular choice for archiving certain classes of enterprise data, such as application and infrastructure logs. These logs contain sensitive information like IP addresses or user logins, so regulatory and security requirements often demand that the data be encrypted before it is moved to the cloud. To extract any business value from such data, analytics systems (e.g. Hadoop/MapReduce) must first download it from the public cloud, decrypt it, and then process it at the secure enterprise site. We propose VNCache: an efficient solution for MapReduce analysis of such cloud-archived log data that does not require transferring and loading the data into the local Hadoop cluster beforehand. VNCache dynamically integrates cloud-archived data into a virtual namespace at the enterprise Hadoop cluster. Through a seamless data streaming and prefetching model, Hadoop jobs can begin execution as soon as they are launched, without any prior downloading. With VNCache's accurate prefetching and caching, jobs often run on locally cached copies of data blocks, significantly improving performance. When no longer needed, data is safely evicted from the enterprise cluster, reducing the total storage footprint. Uniquely, VNCache is implemented with no changes to the Hadoop application stack. © 2014 IEEE
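    As a rough illustration of the streaming-plus-caching flow described above, the sketch below shows a block cache that streams and decrypts a block from cloud storage on first access, serves later reads locally, and evicts blocks once a job no longer needs them. All names and the fetch/decrypt callables are hypothetical assumptions; the actual system operates at the filesystem/block layer with no Hadoop changes.

```python
# Minimal sketch of a virtual-namespace block cache (hypothetical names;
# not VNCache's actual implementation).
import os

class VirtualNamespaceCache:
    def __init__(self, cloud_fetch, decrypt, cache_dir):
        self.cloud_fetch = cloud_fetch  # callable: block_id -> encrypted bytes
        self.decrypt = decrypt          # callable: bytes -> plaintext bytes
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, block_id):
        return os.path.join(self.cache_dir, block_id)

    def read_block(self, block_id):
        """Serve from the local cache if present; otherwise stream from the cloud."""
        path = self._path(block_id)
        if os.path.exists(path):        # cache hit: plain local disk read
            with open(path, "rb") as f:
                return f.read()
        data = self.decrypt(self.cloud_fetch(block_id))  # miss: stream + decrypt
        with open(path, "wb") as f:     # keep a plaintext copy on-site
            f.write(data)
        return data

    def prefetch(self, block_ids):
        """Warm the cache ahead of the job's expected access order."""
        for block_id in block_ids:
            self.read_block(block_id)

    def evict(self, block_id):
        """Drop a block once the job no longer needs it."""
        try:
            os.remove(self._path(block_id))
        except FileNotFoundError:
            pass
```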

    QoE for over-the-top multimedia in wireless networks

    One of the goals of an operator is to improve the Quality of Experience (QoE) of clients in networks where over-the-top (OTT) content is delivered. The appearance of services like YouTube, Netflix or Twitch, where YouTube alone receives more than 300 hours of uploaded video per minute, creates problems for existing managed data networks, as well as challenges in solving them. Video traffic accounts for 75% of all data transmitted on the Internet. Thus, not only has the Internet become the de facto video transmission path, but overall data traffic also continues to grow exponentially, driven by the desire to consume more content. This thesis presents two model proposals and an architecture that aim to improve users' quality of experience by predicting, in advance, the amount of video that can be prefetched, so as to optimize delivery efficiency in networks where quality of service cannot be guaranteed. Prefetching is performed at the cache server closest to the client. To this end, an Analytic Hierarchy Process (AHP) is used: through a subjective method of attribute comparison, and by applying a weighted function to the measured quality-of-service metrics, the amount to prefetch is obtained. Beyond this method, artificial intelligence techniques are also considered: with neural networks, the system attempts to learn the behavior of OTT networks from more than 14,000 hours of video consumption under different quality conditions, in order to estimate and maximize the perceived experience without degrading normal service delivery. Finally, both methods are evaluated and a proof of concept is carried out with users on a high-speed train.
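    As a loose sketch of the AHP step described above, the following derives attribute weights from a pairwise comparison matrix (using the common geometric-mean approximation of the principal eigenvector) and applies a weighted function to normalized QoS metrics to size the prefetch. The chosen metrics, comparison values, and the mapping to seconds of video are illustrative assumptions, not the thesis's actual parameters.

```python
# AHP weight derivation plus a weighted prefetch-sizing function
# (illustrative values; not the thesis's actual model).
import math

def ahp_weights(pairwise):
    """pairwise[i][j] = how much more important metric i is than metric j."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]  # geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Example: throughput vs. latency vs. loss on a Saaty-style 1-9 scale.
pairwise = [
    [1.0, 3.0, 5.0],   # throughput moderately/strongly dominates
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
weights = ahp_weights(pairwise)

def prefetch_seconds(metrics, weights, max_prefetch=30.0):
    """metrics: QoS values normalized to [0, 1]; returns seconds of video
    to prefetch (the direction of the mapping is an assumption here)."""
    score = sum(w * m for w, m in zip(weights, metrics))
    return max_prefetch * score

print(prefetch_seconds([0.8, 0.6, 0.9], weights))
```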

    Adaptive streaming of massive 3D models

    With advances in 3D model editing and 3D reconstruction techniques, more and more 3D models are available and their quality keeps increasing. Moreover, support for 3D visualization on the web has become standardized in recent years. A major challenge is therefore to stream massive models remotely and to let users visualize and navigate these virtual environments. This thesis focuses on the streaming of, and interaction with, 3D content, and makes three major contributions. First, we develop a navigation interface for 3D scenes based on bookmarks: small virtual objects added to the scene that the user can click on to easily reach a recommended location. We describe a user study in which participants navigate 3D scenes with or without bookmarks. We show that users navigate (and complete a given task) faster when using bookmarks. However, this faster navigation has a drawback for streaming performance: a user who moves faster through a scene needs higher transmission capacity to enjoy the same quality of service. This drawback can be mitigated by the fact that bookmark positions are known in advance: by ordering the faces of the 3D model according to their visibility from a bookmark, we optimize transmission and thus reduce latency when users click on bookmarks. Second, we propose an adaptation of the DASH (Dynamic Adaptive Streaming over HTTP) standard, widely used for video, to the streaming of textured 3D meshes. To do so, we partition the scene with a k-d tree in which each cell corresponds to a DASH adaptation set. Each cell is further divided into DASH segments of a fixed number of faces, grouping faces of comparable surface area. Each texture is indexed in its own adaptation set at different resolutions. All metadata (the k-d tree cells, texture resolutions, etc.) is referenced in an XML file that DASH uses to index the content: the MPD (Media Presentation Description). Our framework thus inherits the scalability offered by DASH. We then propose algorithms that evaluate the utility of each data segment as a function of the client's viewpoint, and streaming policies that decide which segments to download. Finally, we study 3D streaming and navigation on mobile devices. We integrate bookmarks into our 3D version of DASH and propose an improved version of our DASH client that benefits from bookmarks. A user study shows that with our bookmark-aware loading policy, bookmarks are more likely to be clicked, improving both quality of service and users' quality of experience.
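    The segment-selection idea, in which a client ranks DASH segments by a viewpoint-dependent utility and spends a bandwidth budget greedily, could look roughly like the sketch below. The utility formula (surface area over distance to the camera) and all field names are illustrative assumptions; the thesis defines its own utility measures and streaming policies.

```python
# Toy viewpoint-driven segment selection (illustrative, not the thesis's
# actual utility metric or policy).
from dataclasses import dataclass

@dataclass
class Segment:
    cell_center: tuple     # center of the k-d tree cell holding its faces
    surface_area: float    # total area of the faces in the segment
    size_bytes: int
    downloaded: bool = False

def utility(seg, viewpoint):
    """Favor nearby cells containing large geometry."""
    dx = [c - v for c, v in zip(seg.cell_center, viewpoint)]
    dist = max(sum(d * d for d in dx) ** 0.5, 1e-6)
    return seg.surface_area / dist

def choose_segments(segments, viewpoint, budget_bytes):
    """Greedily fill the per-round bandwidth budget with the best segments."""
    candidates = [s for s in segments if not s.downloaded]
    candidates.sort(key=lambda s: utility(s, viewpoint), reverse=True)
    plan, spent = [], 0
    for seg in candidates:
        if spent + seg.size_bytes <= budget_bytes:
            plan.append(seg)
            spent += seg.size_bytes
    return plan
```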

    Quality of experience-centric management of adaptive video streaming services : status and challenges

    Video streaming applications currently dominate Internet traffic. In particular, HTTP Adaptive Streaming (HAS) has emerged as the dominant standard for streaming video over the best-effort Internet, thanks to its capability of matching the video quality to the available network resources. In HAS, the video client is equipped with a heuristic that dynamically decides the most suitable quality at which to stream the content, based on information such as the perceived network bandwidth or the video player's buffer status. The goal of this heuristic is to optimize the quality as perceived by the user, the so-called Quality of Experience (QoE). Despite the many advantages brought by the adaptive streaming principle, optimizing users' QoE is far from trivial. Current heuristics are still suboptimal when sudden bandwidth drops occur, especially in wireless environments, leading to freezes in the video playout, the main factor influencing users' QoE. This issue is aggravated for live events, where the player buffer has to be kept as small as possible to reduce the playout delay between the user and the live signal. In light of the above, several works have been proposed in recent years that extend the classical, purely client-based structure of adaptive video streaming in order to fully optimize users' QoE. This article surveys research on this topic and classifies it according to where the optimization takes place. This classification goes beyond client-based heuristics to investigate the usage of server- and network-assisted architectures and of new application- and transport-layer protocols. In addition, we outline the major challenges currently arising in the field of multimedia delivery, which will be of extreme relevance in the coming years.
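    A minimal sketch of the kind of client-side heuristic surveyed here: smooth the measured throughput, pick the highest sustainable bitrate, and step down when the buffer runs low to avoid freezes. The thresholds and smoothing factor are illustrative assumptions, not taken from any specific work in the survey.

```python
# Simple bandwidth- and buffer-aware bitrate selection (illustrative).
def estimate_bandwidth(prev_estimate, last_throughput, alpha=0.8):
    """Exponentially smoothed throughput estimate, in bits per second."""
    return alpha * prev_estimate + (1 - alpha) * last_throughput

def select_bitrate(bitrates, bandwidth, buffer_s,
                   safety=0.8, low_buffer_s=5.0):
    """bitrates: ascending list of available representations (bps)."""
    candidates = [b for b in bitrates if b <= safety * bandwidth]
    choice = candidates[-1] if candidates else bitrates[0]
    if buffer_s < low_buffer_s and choice != bitrates[0]:
        # Buffer is draining: back off one level to avoid a playout freeze.
        choice = bitrates[bitrates.index(choice) - 1]
    return choice

bitrates = [500_000, 1_000_000, 2_500_000, 5_000_000]
print(select_bitrate(bitrates, bandwidth=3_000_000, buffer_s=3.0))
```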

    Optimizing Hypervideo Navigation Using a Markov Decision Process Approach

    Interaction with hypermedia documents is a required feature of sophisticated yet flexible new multimedia applications. This paper presents an innovative adaptive technique for streaming hypervideo that takes user behaviour into account. The objective is to optimize hypervideo prefetching in order to reduce network-induced latency. The technique is based on a model provided by a Markov Decision Process approach. The problem is solved using two methods: classical stochastic dynamic programming algorithms and reinforcement learning. Experimental results under stochastic network conditions are very promising.
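    To make the MDP formulation concrete, a compact value-iteration sketch (one of the classical dynamic programming methods mentioned) is shown below. One could take states to be the currently watched video node, actions to be which linked node to prefetch, and rewards to penalize startup latency; that encoding and all inputs are assumptions for illustration, not the paper's actual model.

```python
# Textbook value iteration and greedy policy extraction for a finite MDP
# (illustrative framing of the prefetch problem, not the paper's model).
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """P[s][a] = list of (next_state, prob); R[s][a] = expected reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

def greedy_policy(states, actions, P, R, V, gamma=0.9):
    """Pick, in each state, the prefetch action with the best expected value."""
    return {
        s: max(actions[s],
               key=lambda a: R[s][a]
               + gamma * sum(p * V[s2] for s2, p in P[s][a]))
        for s in states
    }
```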

    Mitigating Interference During Virtual Machine Live Migration through Storage Offloading

    Today's cloud landscape has evolved computing infrastructure into a dynamic, high-utilization, service-oriented paradigm. This shift has enabled the commoditization of large-scale storage and distributed computation, allowing engineers to tackle previously untenable problems without large upfront investment. A key enabler of flexibility in the cloud is the ability to transfer running virtual machines across subnets or even datacenters using live migration. However, live migration can be a costly process, one that has the potential to interfere with other applications not involved in the migration. This work investigates storage interference through experimentation with real-world systems and well-established benchmarks. To address migration interference in general, a buffering technique is presented that offloads the migration's reads, eliminating interference in the majority of scenarios.
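    A conceptual sketch of the offloading idea, assuming the buffer sits between the migration process and primary storage: the VM image is staged into a dedicated buffer in the background, and the migration's reads hit the buffer instead of contending with co-located workloads. All names are hypothetical and this is not the paper's implementation.

```python
# Hypothetical read-offloading buffer for live migration traffic.
import io

class MigrationReadBuffer:
    def __init__(self, storage_read, image_size, chunk=4 << 20):
        self.storage_read = storage_read  # callable: (offset, length) -> bytes
        self.image_size = image_size
        self.chunk = chunk
        self.buffer = io.BytesIO()
        self.staged = 0                   # bytes copied into the buffer so far

    def stage_some(self):
        """Copy one chunk at a time (e.g. during idle I/O) to spread the load."""
        if self.staged < self.image_size:
            n = min(self.chunk, self.image_size - self.staged)
            self.buffer.seek(self.staged)
            self.buffer.write(self.storage_read(self.staged, n))
            self.staged += n

    def read(self, offset, length):
        """Migration reads hit the buffer when possible, storage otherwise."""
        if offset + length <= self.staged:
            self.buffer.seek(offset)
            return self.buffer.read(length)
        return self.storage_read(offset, length)
```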

    Improved Designs for Application Virtualization

    We propose solutions for application virtualization that mitigate the performance loss in streaming and browser-based applications. For application streaming, we propose a solution that keeps operating system components and application software at the server and streams them to the client side for execution. This architecture minimizes the components managed at the clients and mitigates platform-level incompatibility. The runtime performance of application streaming degrades significantly when the required code is not available on the client side in time. To mitigate this issue and boost runtime performance, we propose prefetching, i.e., speculatively delivering code blocks to the clients in advance. The probability model on which our prefetch method is based may be very large. To manage such a probability model and the associated hardware resources, we perform an information-gain analysis. We derive two lower bounds on the information gain that an attribute set must provide in order to achieve a given prefetch hit rate. We organize the probability model as a look-up table (LUT). Similar to the memory hierarchy widely used in computing, we split the single LUT into two-level, hierarchical LUTs. To separate the entries without sorting all of them, we propose a fast, entropy-based LUT separation algorithm that uses entropy as an indicator. Since the domain of an attribute can be much larger than the addressable space of a virtual memory system, we need an efficient way to allocate each LUT's entries in a limited memory address space. Instead of using expensive CAM, we use a hash function to convert attribute values into addresses. We propose an improved version of Pearson hashing that reduces the collision rate with little extra complexity. Long interactive delays due to network latency are a significant drawback of browser-based application virtualization. To address this, we propose a distributed infrastructure arrangement for browser-based application virtualization that reduces the average communication distance between servers and clients. We also investigate a hand-off protocol to handle user mobility in browser-based application virtualization. Analyses and simulations for information-based prefetching and for mobile applications quantify the benefits of the proposed solutions.
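    For context, the classic 8-bit Pearson hash that the proposed variant builds on is sketched below: one table lookup and one XOR per input byte, with no multiplications. The permutation table is seeded arbitrarily here, and the dissertation's improved variant (not shown) further reduces the collision rate.

```python
# Classic Pearson hashing: table-driven byte mixing into an 8-bit digest.
import random

rng = random.Random(42)
T = list(range(256))
rng.shuffle(T)                # a fixed pseudorandom permutation of 0..255

def pearson_hash(data: bytes) -> int:
    h = 0
    for b in data:
        h = T[h ^ b]          # one lookup and one XOR per input byte
    return h

# Map an attribute value to a slot in a 256-entry LUT level:
print(pearson_hash(b"block_1234"))
```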

    GPGPU microbenchmarking for irregular application optimization

    Irregular applications, such as unstructured mesh operations, do not map easily onto the typical GPU programming paradigms endorsed by GPU manufacturers, which mostly focus on maximizing concurrency for latency hiding. In this work, we show how alternative techniques focused on latency amortization can be used to control overall latency while requiring less concurrency. We used a custom-built microbenchmarking framework to test several GPU kernels and show how the GPU behaves under relevant workloads. We demonstrate that coalescing is not required for good performance; an uncoalesced access pattern can achieve high bandwidth, even over 80% of the theoretical global memory bandwidth in certain circumstances. We also report further observations on specific relevant GPU behaviors. We hope that this study opens the door to further investigation of techniques that exploit latency amortization when latency hiding does not achieve sufficient performance.
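    A back-of-envelope helper for interpreting results like the 80%-of-peak figure above: compute a device's theoretical global-memory bandwidth from its memory clock, bus width, and data rate, then express a kernel's achieved throughput as a fraction of that peak. The device numbers and kernel timing below are placeholders, not measurements from the paper.

```python
# Effective-bandwidth bookkeeping for a memory microbenchmark
# (placeholder device parameters and timing).
def theoretical_bandwidth_gbps(mem_clock_mhz, bus_width_bits, ddr_factor=2):
    """Peak bandwidth in GB/s from memory clock, bus width, and DDR rate."""
    return mem_clock_mhz * 1e6 * (bus_width_bits / 8) * ddr_factor / 1e9

def achieved_fraction(bytes_moved, elapsed_s, peak_gbps):
    """Kernel throughput as a fraction of the theoretical peak."""
    return (bytes_moved / elapsed_s / 1e9) / peak_gbps

peak = theoretical_bandwidth_gbps(mem_clock_mhz=1750, bus_width_bits=384)
# e.g. a kernel that moved 4 GiB of data in 32 ms:
print(f"{achieved_fraction(4 * 2**30, 0.032, peak):.0%} of peak")
```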