
    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, continuing to grow both in terms of the number of users and the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, either through explicit resource reservation or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (DiffServ). This use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to enable a QoS-optimised experience for every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as peer-to-peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real ISP network infrastructures. The results show that for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional DiffServ and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
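    The following is a minimal sketch, not the CAPS algorithm itself, of the user-centric idea the abstract describes: deriving per-service scheduling weights from an individual user's observed traffic mix at the aggregation point, and capping unresponsive traffic (e.g. peer-to-peer) under congestion instead of applying one static policy to all users. The class name, the weighting rule and the cap value are illustrative assumptions.

from collections import defaultdict

class UserCentricScheduler:
    """Toy user-centric weighting, not the thesis's CAPS implementation."""

    def __init__(self, unresponsive_cap=0.3):
        # per-user, per-service byte counters observed at the aggregation point
        self.bytes_seen = defaultdict(lambda: defaultdict(int))
        self.unresponsive_cap = unresponsive_cap  # assumed ceiling for P2P under congestion

    def observe(self, user, service, nbytes):
        """Update the user's traffic profile from observed packets."""
        self.bytes_seen[user][service] += nbytes

    def weights(self, user, congested=False):
        """Derive per-service weights from the user's own traffic mix."""
        profile = self.bytes_seen[user]
        total = sum(profile.values()) or 1
        w = {s: b / total for s, b in profile.items()}
        if congested and w.get("p2p", 0) > self.unresponsive_cap:
            # cap unresponsive traffic and hand the excess to the other services
            excess = w["p2p"] - self.unresponsive_cap
            w["p2p"] = self.unresponsive_cap
            others = [s for s in w if s != "p2p"]
            if others:
                for s in others:
                    w[s] += excess / len(others)
        return w

sched = UserCentricScheduler()
sched.observe("user1", "voip", 2_000)
sched.observe("user1", "video", 30_000)
sched.observe("user1", "p2p", 68_000)
print(sched.weights("user1", congested=True))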

    Network coding meets multimedia: a review

    While every network node only relays messages in a traditional communication system, the recent network coding (NC) paradigm proposes to implement simple in-network processing with packet combinations in the nodes. NC extends the concept of "encoding" a message beyond source coding (for compression) and channel coding (for protection against errors and losses). It has been shown to increase network throughput compared to traditional network implementations, to reduce delay, and to provide robustness to transmission errors and network dynamics. These features are so appealing for multimedia applications that they have spurred a large research effort towards the development of multimedia-specific NC techniques. This paper reviews recent work in NC for multimedia applications and focuses on the techniques that fill the gap between NC theory and practical applications. It outlines the benefits of NC and presents the open challenges in this area. The paper initially focuses on multimedia-specific aspects of network coding, in particular delay, in-network error control, and media-specific error control. These aspects make it possible to handle varying network conditions as well as client heterogeneity, which are critical to the design and deployment of multimedia systems. After introducing these general concepts, the paper reviews in detail two applications that lend themselves naturally to NC via the cooperation and broadcast models, namely peer-to-peer multimedia streaming and wireless networking.
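    As a concrete illustration of "packet combinations in the nodes", the sketch below shows the simplest form of network coding: an intermediate node XORs two packets (coding over GF(2)), and a receiver that already holds one of them recovers the other from the coded packet. Practical NC systems generally use random linear coding over larger fields; this toy example only conveys the principle.

def xor_combine(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """Combine two equal-length packets into one coded packet (GF(2) coding)."""
    assert len(pkt_a) == len(pkt_b)
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

a = b"frame-A..."
b = b"frame-B..."

coded = xor_combine(a, b)             # what the relay node forwards
recovered_b = xor_combine(coded, a)   # a receiver that knows A recovers B
assert recovered_b == b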

    Experimental comparison of neighborhood filtering strategies in unstructured P2P-TV systems

    The performance of P2P-TV systems is driven by the overlay topology that peers form. Several proposals have been made in the past to optimize it, yet few experimental studies have corroborated the results. The aim of this work is to provide a comprehensive experimental comparison of different strategies for the construction and maintenance of the overlay topology in P2P-TV systems. To this goal, we have implemented different fully distributed strategies in a P2P-TV application, called PeerStreamer, which we use to run extensive experimental campaigns in a completely controlled set-up involving thousands of peers and spanning very different networking scenarios. Results show that the topological properties of the overlay have a deep impact on both user quality of experience and network load. Strategies based solely on random peer selection are greatly outperformed by smart, yet simple strategies that can be implemented with negligible overhead. Even in different and complex scenarios, the best-performing neighborhood filtering strategy we devised delivers almost all chunks to all peers with a play-out delay as low as 6 s, even with system loads close to 1.0. Results are confirmed by running experiments on PlanetLab. PeerStreamer is open source, to make the results reproducible and to allow further research by the community.
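    The sketch below is an illustrative contrast, not PeerStreamer code, between purely random peer selection and a simple neighbourhood filter that keeps the lowest-RTT candidates; the RTT criterion and the peer naming are assumptions used only to show how cheap such filtering can be.

import random

def random_neighbourhood(candidates, k):
    """Baseline: pick k neighbours uniformly at random."""
    return random.sample(candidates, k)

def filtered_neighbourhood(candidates, k, rtt):
    """Smart filter: keep the k candidates with the lowest measured RTT."""
    return sorted(candidates, key=lambda peer: rtt[peer])[:k]

peers = [f"peer{i}" for i in range(20)]
rtt = {p: random.uniform(5, 300) for p in peers}  # simulated RTTs in milliseconds
print(random_neighbourhood(peers, 5))
print(filtered_neighbourhood(peers, 5, rtt))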

    Measuring And Improving Internet Video Quality Of Experience

    Streaming multimedia content over the IP network is poised to be the dominant Internet traffic for the coming decade, predicted to account for more than 91% of all consumer traffic in the coming years. Streaming multimedia content ranges from Internet television (IPTV), video on demand (VoD), and peer-to-peer streaming to 3D television over IP, to name a few. Widespread acceptance, growth, and subscriber retention are contingent upon network providers assuring superior Quality of Experience (QoE) on top of today's Internet. This work presents the first empirical understanding of the Internet's video-QoE capabilities, and tools and protocols to efficiently infer and improve them. To infer video-QoE at arbitrary nodes in the Internet, we design and implement MintMOS: a lightweight, real-time, no-reference framework for capturing perceptual quality. We demonstrate that MintMOS's projections closely match subjective surveys in assessing perceptual quality. We use MintMOS to characterize Internet video-QoE both at the link level and at the end-to-end path level. As an input to our study, we use extensive measurements from a large number of Internet paths obtained from various measurement overlays deployed using PlanetLab. Link-level degradations of intra- and inter-ISP Internet links are studied to create an empirical understanding of their shortcomings and ways to overcome them. Our studies show that intra-ISP links are often poorly engineered compared to peering links, and that degradations are induced by transient network load imbalance within an ISP. Initial results also indicate that overlay networks could be a promising way to avoid such ISPs in times of degradation. A large number of end-to-end Internet paths are probed and we measure delay, jitter, and loss rates. The measurement data is analyzed offline to identify ways to enable a source to select alternate paths in an overlay network to improve video-QoE, without the need for background monitoring or a priori knowledge of path characteristics. We establish that for any unstructured overlay of N nodes, it is sufficient to reroute key frames using a random subset of k nodes in the overlay, where k is bounded by O(ln N). We analyze various properties of such random subsets to derive a simple, scalable, and efficient path selection strategy that results in a k-fold increase in path options for any source-destination pair; options that consistently outperform Internet path selection. Finally, we design a prototype called source-initiated frame restoration (SIFR) that employs random subsets to derive alternate paths and demonstrate its effectiveness in improving Internet video-QoE.
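    A minimal sketch of the random-subset idea above: for an overlay of N nodes, the source draws roughly ln(N) candidate relays at random and reroutes key frames through the candidate whose two-hop path currently looks best. The path_quality cost function is a stand-in assumption; the work derives its actual selection strategy from measured delay, jitter, and loss.

import math
import random

def random_relay_subset(overlay_nodes, source, destination):
    """Pick k = ceil(ln N) candidate relays, excluding the endpoints."""
    candidates = [n for n in overlay_nodes if n not in (source, destination)]
    k = max(1, math.ceil(math.log(len(overlay_nodes))))
    return random.sample(candidates, min(k, len(candidates)))

def best_alternate_path(source, destination, relays, path_quality):
    """Choose the relay whose two-hop path has the lowest cost."""
    return min(relays, key=lambda r: path_quality(source, r) + path_quality(r, destination))

nodes = [f"n{i}" for i in range(50)]
relays = random_relay_subset(nodes, "n0", "n1")
print(len(relays), relays)  # roughly ln(50), i.e. about 4 candidate relays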

    Video streaming with quality adaption using collaborative active grid networks

    Due to the services and demands of end users, Distributed Computing (Grid Technology, Web Services, and Peer-to-Peer) has developed rapidly in the last years. The convergence of these architectures has been made possible by mechanisms such as collaborative work and resource sharing. Grid computing is a platform that enables flexible, secure, controlled, scalable, ubiquitous and heterogeneous services. On the other hand, video streaming applications demand greater deployment over connected Internet users. The present work uses Active Grid technology as the fundamental platform for a solution to multimedia content recovery. This solution takes into account the following key concepts: collaborative work, multi-source recovery and adaptive quality. A new architecture is designed to deliver video content over a Grid network. The active and passive roles of the nodes are important to guarantee high quality and efficiency for the video streaming system. The active sender nodes are the content suppliers, while the passive sender nodes perform backup functions based on global resource-control policies. The aim of the backup node is to minimise the time needed to restore the system in case of failures. In this way, all participant peers work in a collaborative manner following a multi-source recovery scheme. Furthermore, layered video encoding is used to manage the video data in a highly scalable way, dividing the video into multiple layers. This video codification scheme enables quality adaptation according to the availability of system resources. In addition, a buffer per sender peer and per layer is needed for effective control of the video retrieval. The QoS is adjusted considering the state of each buffer and the measurement tools provided by the Active Grid on the network nodes. Keywords: Peer-to-Peer Grid Architecture, Services for Active Grids, Streaming Media, Layered Coding, Quality Adaptation, Collaborative Work.
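    The sketch below illustrates the buffer-driven quality adaptation described above, under assumed thresholds: the receiver keeps one buffer per video layer and keeps requesting a layer only while its buffer stays above a minimum, so enhancement layers are dropped first and the base layer is never dropped.

def layers_to_request(buffer_seconds, min_buffer=4.0):
    """Return how many layers (base + enhancements) to keep requesting.

    buffer_seconds: buffered playback seconds per layer, index 0 = base layer.
    A layer keeps being requested only while its own buffer is healthy;
    the base layer is never dropped.
    """
    n = 0
    for buffered in buffer_seconds:
        if buffered < min_buffer:
            break
        n += 1
    return max(n, 1)

print(layers_to_request([9.0, 6.5, 2.1, 0.0]))  # -> 2: base layer plus one enhancement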

    QoS monitoring in real-time streaming overlays based on lock-free data structures

    Peer-to-peer streaming is a well-known technology for the large-scale distribution of real-time audio/video content. Delay requirements are very strict in interactive real-time scenarios (such as synchronous distance learning), where playback lag should be of the order of seconds. Playback continuity is another key aspect in these cases: in the presence of peer churning and network congestion, a peer-to-peer overlay should quickly rearrange connections among receiving nodes to avoid freezing phenomena that may compromise audio/video understanding. For this reason, we designed a QoS monitoring algorithm that quickly detects broken or congested links: each receiving node is able to independently decide whether it should switch to a secondary sending node, called the "fallback node". The architecture takes advantage of a multithreaded design based on lock-free data structures, which improves performance by avoiding synchronization among threads. We show the responsiveness of the proposed approach on machines with different computational capabilities: the measured times prove that both node departures and QoS degradations are promptly detected and that clients can quickly restore stream reception. According to PSNR and SSIM, two well-known full-reference video quality metrics, QoE remains acceptable on the receiving nodes of our resilient overlay even in the presence of swap procedures.
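    A simplified, single-threaded sketch of the receiver-side decision described above: a monitor records when chunks last arrived from the current sender and, if the link looks silent or too lossy, switches to the preconfigured fallback node. The timeout and loss threshold are assumptions; the actual system performs this bookkeeping with a multithreaded, lock-free design.

import time

class LinkMonitor:
    """Toy per-link monitor deciding when to swap to the fallback sender."""

    def __init__(self, primary, fallback, timeout=2.0, max_loss=0.15):
        self.sender = primary
        self.fallback = fallback
        self.timeout = timeout      # seconds without chunks -> link considered broken
        self.max_loss = max_loss    # tolerated chunk-loss ratio before swapping
        self.last_chunk = time.monotonic()
        self.received = 0
        self.expected = 0

    def on_chunk(self, n_expected=1):
        """Record a received chunk and how many were expected since the last one."""
        self.expected += n_expected
        self.received += 1
        self.last_chunk = time.monotonic()

    def check(self):
        """Swap to the fallback sender if the current link is silent or too lossy."""
        silent = time.monotonic() - self.last_chunk > self.timeout
        lossy = self.expected > 0 and 1 - self.received / self.expected > self.max_loss
        if silent or lossy:
            self.sender, self.fallback = self.fallback, self.sender
            self.received = self.expected = 0
            self.last_chunk = time.monotonic()
        return self.sender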