12 research outputs found

    Managing video objects in large peer-to-peer systems

    In peer-to-peer video systems, most hosts retain only a small portion of a video after its playback. This presents two challenges in managing video data in such systems: (1) how a host can find enough video pieces, which may be scattered across the whole system, to assemble a complete video, and (2) given a limited buffer size, which part of a video a host should cache. In this thesis, we address these problems with a new distributed file management technique. In our scheme, we organize hosts into many cells, each of which is a distinct set of hosts that together can supply a video in its entirety. Because each cell is dynamically created and individually managed as an independent video supplier, our technique addresses the two problems, video lookup and caching, simultaneously. First, a client looking for a video can stop its search as soon as it finds a host that caches any part of the video, which dramatically reduces the search scope of a video lookup. Second, caching operations can be coordinated within each cell to balance data redundancy in the system. We have implemented a Gnutella-like simulation network and used it as a testbed to evaluate the proposed technique. Our extensive study convincingly shows the performance advantage of the new scheme.
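    The following minimal sketch, in Python, illustrates the cell idea described above under stated assumptions: a cell groups hosts whose cached segments together cover one video, so a lookup can stop at the first member it finds, and caching within the cell can be steered toward the least-replicated segment. The class and method names (Cell, Host, assign_segment) are illustrative, not taken from the thesis.

```python
# Minimal sketch of the cell idea: a cell is a set of hosts whose cached
# segments together cover one whole video, so finding any member is enough
# to locate the full video. Names are illustrative, not from the thesis.

from dataclasses import dataclass, field

@dataclass
class Host:
    host_id: str
    cached_segments: set = field(default_factory=set)  # segment indices held

@dataclass
class Cell:
    video_id: str
    total_segments: int
    members: list = field(default_factory=list)

    def covers_video(self) -> bool:
        """True if the members' caches together hold every segment."""
        held = set()
        for h in self.members:
            held |= h.cached_segments
        return len(held) == self.total_segments

    def assign_segment(self, host: Host) -> int:
        """Coordinated caching: ask a joining host to cache the rarest
        (least-replicated) segment so redundancy stays balanced."""
        counts = {s: 0 for s in range(self.total_segments)}
        for h in self.members:
            for s in h.cached_segments:
                counts[s] += 1
        rarest = min(counts, key=counts.get)
        host.cached_segments.add(rarest)
        self.members.append(host)
        return rarest
```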

    FAST rate allocation for JPEG2000 video transmission over time-varying channels

    This work introduces a rate allocation method for the transmission of pre-encoded JPEG2000 video over time-varying channels, whose capacity changes during video transmission due to network congestion, hardware failures, or router saturation. Such variations occur often in networks and are commonly unpredictable in practice. The optimization problem is posed for such networks, and a rate allocation method is formulated to handle these variations. The main insight of the proposed method is to extend the complexity-scalability features of the FAst rate allocation through STeepest descent (FAST) algorithm. Extensive experimental results suggest that the proposed transmission scheme achieves near-optimal performance while expending few computational resources.
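    As a rough illustration of slope-based rate allocation in the general spirit of FAST, the sketch below greedily selects quality-layer increments with the steepest distortion-reduction-per-byte until the current channel budget is exhausted. For simplicity it treats increments as independent, whereas real JPEG2000 codestreams impose an ordering on layers; the data layout and function names are assumptions for illustration only, not the paper's interface.

```python
# Hedged sketch of slope-based rate allocation: pick the steepest
# distortion-per-byte increments until the current channel budget is spent.

def allocate(layers, budget_bytes):
    """layers: list of (frame_id, rate_bytes, distortion_reduction) increments.
    Returns the (frame_id, index) increments selected for transmission."""
    # Sort increments by distortion reduced per byte (steepest slope first).
    indexed = list(enumerate(layers))
    indexed.sort(key=lambda x: x[1][2] / x[1][1], reverse=True)

    chosen, spent = [], 0
    for idx, (frame_id, rate, _dist) in indexed:
        if spent + rate <= budget_bytes:
            chosen.append((frame_id, idx))
            spent += rate
    return chosen

# When the channel capacity changes mid-transmission, a practical scheme would
# re-run allocate() only over the not-yet-sent increments with the remaining
# budget, rather than re-optimizing the whole sequence from scratch.
```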

    Video delivery technologies for large-scale deployment of multimedia applications


    A collaborative caching system for video streaming on the Internet

    Undergraduate monograph, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015. Currently, a significant portion of IP traffic comes from video streaming. A study by Cisco estimates that, by 2018, 79% of all IP traffic will carry video streaming applications [13]. Since many users inside a local network may be interested in the same videos, requests that leave the network are repeated unnecessarily. Caching is an approach that reduces the amount of data transferred repeatedly, lowers the use of the network's outbound bandwidth, makes better use of the internal network infrastructure, and reduces video retrieval latency. This work presents the basic concepts of a caching system and gives a brief review of the related literature. Unlike the referenced models, which propose a centralized approach, this work proposes a distributed collaborative caching model for video streaming whose goal is to reduce outbound network bandwidth consumption. The model uses collaborative policies that allow each user's cache to be shared with the other members of the network. The results show that the proposed model is a viable and effective alternative for saving outbound network bandwidth and improving the use of network resources.
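    A minimal sketch of the collaborative lookup described in the abstract, assuming a simple in-memory cache per user: a client checks its own cache first, then its LAN peers' shared caches, and only falls back to the external origin on a miss everywhere. All class and parameter names are hypothetical.

```python
# Illustrative collaborative cache: local hit, then peer hit inside the LAN,
# then a single outbound fetch that is cached for the other members.

class CollaborativeCache:
    def __init__(self, local_cache, peers, origin_fetch):
        self.local_cache = local_cache      # dict: video_id -> bytes
        self.peers = peers                  # other CollaborativeCache nodes
        self.origin_fetch = origin_fetch    # callable hitting the external network

    def get(self, video_id):
        # 1. Local hit: no traffic leaves the machine.
        if video_id in self.local_cache:
            return self.local_cache[video_id]
        # 2. Peer hit: traffic stays inside the local network.
        for peer in self.peers:
            data = peer.local_cache.get(video_id)
            if data is not None:
                self.local_cache[video_id] = data
                return data
        # 3. Miss everywhere: use outbound bandwidth once, then share.
        data = self.origin_fetch(video_id)
        self.local_cache[video_id] = data
        return data
```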

    Building Internet caching systems for streaming media delivery

    The proxy has been widely and successfully used to cache static Web objects fetched by a client, so that subsequent clients requesting the same objects can be served directly from the proxy instead of from faraway sources, thus reducing the server's load, the network traffic, and the client response time. However, with the dramatic increase of streaming media objects on the Internet, the existing proxy cannot deliver them efficiently because of their large sizes and clients' real-time requirements. In this dissertation, we design, implement, and evaluate cost-effective and high-performance proxy-based Internet caching systems for streaming media delivery. Addressing the conflicting performance objectives of streaming media delivery, we first propose an efficient segment-based streaming media proxy system model. This model has guided us in designing a practical streaming proxy, called Hyper-Proxy, aimed at delivering streaming media data to clients with minimum playback jitter and a small startup latency, while achieving high caching performance. Second, we have implemented Hyper-Proxy by leveraging the existing Internet infrastructure; Hyper-Proxy enables streaming service on common Web servers. The evaluation of Hyper-Proxy in both a global Internet environment and a local network environment shows that it can provide satisfactory streaming performance to clients while maintaining good cache performance. Finally, to further improve streaming delivery efficiency, we propose a group of Shared Running Buffers (SRB) based proxy caching techniques to effectively utilize the proxy's memory. SRB algorithms can significantly reduce the media server's and proxy's load and network traffic, and relieve the bottlenecks of the disk bandwidth and the network bandwidth. The contributions of this dissertation are threefold: (1) we have studied several critical performance trade-offs and provided insights into Internet media content caching and delivery, and this understanding further leads us to establish an effective streaming system optimization model; (2) we have designed and evaluated several efficient algorithms to support Internet streaming content delivery, including segment caching, segment prefetching, and memory locality exploitation for streaming; (3) having addressed several system challenges, we have successfully implemented a real streaming proxy system and deployed it in a large industrial enterprise.
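    The sketch below illustrates segment-based prefix caching in the general spirit of the segment-based proxy model described above; it is not Hyper-Proxy's actual design. The proxy always admits the first few segments of a stream so playback can start with low latency, while later segments are fetched on demand. Segment size, prefix length, and all names are assumed for illustration.

```python
# Rough sketch of segment-based prefix caching at a streaming proxy.

SEGMENT_BYTES = 512 * 1024   # assumed fixed segment size
PREFIX_SEGMENTS = 4          # segments always kept to hide startup latency

class SegmentProxy:
    def __init__(self, origin_fetch):
        self.cache = {}                    # (url, segment_index) -> bytes
        self.origin_fetch = origin_fetch   # callable(url, start, end) -> bytes

    def serve_segment(self, url, index):
        key = (url, index)
        if key in self.cache:
            return self.cache[key]
        start = index * SEGMENT_BYTES
        data = self.origin_fetch(url, start, start + SEGMENT_BYTES)
        # Admission policy: always cache prefix segments; later segments could
        # be admitted by popularity, which is omitted here for brevity.
        if index < PREFIX_SEGMENTS:
            self.cache[key] = data
        return data
```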