
    Network coding meets multimedia: a review

    While every network node only relays messages in a traditional communication system, the recent network coding (NC) paradigm proposes to implement simple in-network processing through packet combinations at the nodes. NC extends the concept of "encoding" a message beyond source coding (for compression) and channel coding (for protection against errors and losses). It has been shown to increase network throughput compared to traditional network implementations, to reduce delay, and to provide robustness to transmission errors and network dynamics. These features are so appealing for multimedia applications that they have spurred a large research effort towards the development of multimedia-specific NC techniques. This paper reviews recent work on NC for multimedia applications and focuses on the techniques that fill the gap between NC theory and practical applications. It outlines the benefits of NC and presents the open challenges in this area. The paper initially focuses on multimedia-specific aspects of network coding, in particular delay, in-network error control, and media-specific error control. These aspects make it possible to handle varying network conditions as well as client heterogeneity, both of which are critical to the design and deployment of multimedia systems. After introducing these general concepts, the paper reviews in detail two applications that lend themselves naturally to NC via the cooperation and broadcast models, namely peer-to-peer multimedia streaming and wireless networking.
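    To make the "packet combination" idea concrete, here is a minimal sketch of random linear network coding over GF(2), where combining packets reduces to byte-wise XOR; practical systems typically work over GF(2^8), and all names and data here are illustrative rather than taken from the paper.

        import random

        def encode_packet(generation, rng=random):
            """Combine a generation of equal-size packets into one coded packet."""
            # Random 0/1 coefficients pick which source packets enter the mix.
            coeffs = [rng.randint(0, 1) for _ in generation]
            payload = bytearray(len(generation[0]))
            for c, pkt in zip(coeffs, generation):
                if c:  # over GF(2), "adding" a packet is a byte-wise XOR
                    for i, b in enumerate(pkt):
                        payload[i] ^= b
            return coeffs, bytes(payload)

        generation = [b"ALPHA--1", b"BRAVO--2", b"CHARL--3"]
        coeffs, coded = encode_packet(generation)
        # Any receiver holding 3 coded packets with linearly independent
        # coefficient vectors recovers the generation by Gaussian elimination.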

    EVEREST IST-2002-001858: D23: final report

    Public deliverable of the European project EVEREST. This deliverable constitutes the final report of the project IST-2002-001858 EVEREST. Following the project's successful completion, this document first summarizes its context, goals, and approach. It then presents a concise summary of the major goals and results, and highlights the most valuable lessons derived from the project work. A list of deliverables and publications is included in the annex. Postprint (published version).

    Effective Delay Control in Online Network Coding

    Motivated by streaming applications with stringent delay constraints, we consider the design of online network coding algorithms with timely delivery guarantees. Assuming that the sender is providing the same data to multiple receivers over independent packet erasure channels, we focus on the case of perfect feedback and heterogeneous erasure probabilities. Based on a general analytical framework for evaluating the decoding delay, we show that existing ARQ schemes fail to ensure that receivers with weak channels are able to recover from packet losses within reasonable time. To overcome this problem, we redefine the encoding rules in order to break the chains of linear combinations that cannot be decoded after one of the packets is lost. Our results show that sending uncoded packets at key times ensures that all the receivers are able to meet specific delay requirements with very high probability. Comment: 9 pages, IEEE Infocom 2009.
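    The key rule, falling back to an uncoded packet when a weak receiver's oldest gap ages past its delay budget, can be sketched as follows. The threshold test and all names are simplifications invented here; the paper redefines the linear combinations themselves rather than applying a single cut-off.

        DELAY_BUDGET = 4                     # assumed budget, in packet slots

        # Per-receiver sets of sequence numbers still missing, kept up to
        # date by the (assumed perfect) feedback channel.
        missing = {"strong_rx": set(), "weak_rx": {7, 9}}

        def pick_transmission(now, next_seq):
            oldest_gaps = [min(m) for m in missing.values() if m]
            if oldest_gaps and now - min(oldest_gaps) >= DELAY_BUDGET:
                # Break the chain: resend the oldest missing packet uncoded
                # so the weakest receiver can decode immediately.
                return ("uncoded", min(oldest_gaps))
            # Otherwise keep coding over the in-flight window.
            return ("coded", next_seq)

        print(pick_transmission(now=12, next_seq=13))   # -> ('uncoded', 7)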

    On reducing mesh delay for peer-to-peer live streaming

    Peer-to-peer (P2P) technology has emerged as a promising scalable solution for live streaming to large groups. In this paper, we address the design of an overlay that achieves low source-to-peer delay, is robust to user churn, accommodates asymmetric and diverse uplink bandwidths, and continuously improves based on the existing user pool. A natural choice is a mesh, where each peer is served by multiple parents. Since a peer's delay in a mesh depends on its longest path through its parents, we study how to optimize this delay while meeting a given streaming rate requirement. We first formulate the minimum delay mesh problem and show that it is NP-hard. We then propose a centralized heuristic based on complete knowledge, which serves as the benchmark for all the other schemes under comparison. Our heuristic makes use of the concept of network power, given by the ratio of throughput to delay. By maximizing the network power, our heuristic achieves very low delay. We then propose a simple distributed algorithm in which peers select their parents based on the power concept. The algorithm continuously improves delay until some minimum delay is reached. Simulation results show that our distributed protocol performs close to the centralized one, and substantially outperforms traditional and state-of-the-art approaches.
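    As an illustration of the power heuristic (throughput over delay), the sketch below ranks candidate parents by that ratio and greedily picks enough of them to cover the streaming rate. The field names, candidate list, and greedy cut-off are assumptions made for the example, not the paper's protocol.

        def power(c):
            # "Power" of a candidate parent: offered throughput over delay.
            return c["spare_bw"] / c["delay"]

        def select_parents(candidates, stream_rate, max_parents=4):
            chosen, acquired = [], 0.0
            for c in sorted(candidates, key=power, reverse=True):
                if acquired >= stream_rate or len(chosen) == max_parents:
                    break
                chosen.append(c)
                acquired += min(c["spare_bw"], stream_rate - acquired)
            return chosen       # parents that jointly meet the stream rate

        candidates = [
            {"id": "p1", "spare_bw": 600, "delay": 90},   # kbps, ms
            {"id": "p2", "spare_bw": 300, "delay": 30},
            {"id": "p3", "spare_bw": 500, "delay": 210},
        ]
        # Highest-power parents first: prints ['p2', 'p1']
        print([c["id"] for c in select_parents(candidates, stream_rate=800)])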

    Network coding for transport protocols

    With the proliferation of smart devices that require Internet connectivity anytime, anywhere, and the recent technological advances that make it possible, current networked systems have to provide a broad range of services, such as content distribution, in a wide range of settings, including wireless environments. Wireless links may experience temporary losses; however, TCP, the de facto protocol for robust unicast communications, reacts by drastically reducing the congestion window and injecting less traffic into the network. Consequently, the wireless links are underutilized and the overall performance of TCP in wireless environments is poor. As content delivery (i.e., multicasting) services such as BBC iPlayer become popular, the network needs to support the reliable transport of data at high rates and with specific delay constraints. A typical approach to delivering content in a scalable way is to rely on peer-to-peer technology (used by BitTorrent, Spotify, and PPLive), where users share their resources, including bandwidth, storage space, and processing power. Still, these systems suffer from a lack of incentives for resource sharing and cooperation, and this problem is exacerbated in the presence of heterogeneous users, where a tit-for-tat scheme is difficult to implement.

    Due to the issues highlighted above, current network architectures need to be changed in order to accommodate users' demands for reliable, high-quality communications. In other words, the emergent need for advanced modes of information transport requires revisiting and improving network components at various levels of the network stack. The innovative paradigm of network coding has been shown to be a promising technique for changing the design of networked systems, by providing a shift from how data flows traditionally move through the network. This shift implies that data flows are no longer kept separate, according to the "store-and-forward" model, but are also processed and mixed within the network. By appropriately combining data by means of network coding, significant benefits are expected in several areas of network design and architecture.

    In this thesis, we set out to show the benefits of including network coding in three communication paradigms, namely point-to-point communications (e.g., unicast), point-to-multipoint communications (e.g., multicast), and multipoint-to-multipoint communications (e.g., peer-to-peer networks). For the first direction, we propose a network coding-based multipath scheme and show that TCP unicast sessions are feasible in highly volatile wireless environments. For point-to-multipoint communications, we give an algorithm to optimally achieve all the rate pairs from the rate region in the case of degraded multicast over the combination network. We also propose a system for live streaming that ensures reliability and quality of service for heterogeneous users, even when data transmissions occur over lossy wireless links. Finally, for multipoint-to-multipoint communications, we design a system that provides incentives for live streaming in a peer-to-peer setting where users have subscribed to different levels of quality. Our work shows that network coding enables reliable transport of data, even in highly volatile environments or in delay-sensitive scenarios such as live streaming, and facilitates the implementation of an efficient incentive system, even in the presence of heterogeneous users. Thus, network coding can solve the challenges faced by next-generation networks in order to support advanced information transport. Postprint (published version).
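    The "mixing instead of store-and-forward" shift is easiest to see in the textbook two-way relay example below, which is standard network-coding folklore rather than a scheme from the thesis itself: one XOR-ed broadcast replaces two separate forwarding transmissions.

        def xor_bytes(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        pkt_a = b"from-A: hello!!!"   # packet travelling A -> relay -> B
        pkt_b = b"from-B: world???"   # packet travelling B -> relay -> A

        # Store-and-forward: the relay would transmit pkt_a and pkt_b
        # separately. Network coding: one broadcast of the mixture suffices,
        # since each endpoint already holds its own packet as side info.
        mixed = xor_bytes(pkt_a, pkt_b)

        assert xor_bytes(mixed, pkt_a) == pkt_b   # A recovers B's packet
        assert xor_bytes(mixed, pkt_b) == pkt_a   # B recovers A's packet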

    A Review of MAC Scheduling Algorithms in LTE System

    Recent wireless communication networks rely on Long Term Evolution (LTE) to offer high-data-rate real-time (RT) traffic with better Quality of Service (QoS), in response to growing customer demand. LTE provides low latency and high throughput for real-time services with the help of two-level packet retransmission: Hybrid Automatic Repeat Request (HARQ) retransmission at the Medium Access Control (MAC) layer of LTE networks achieves error-free data transmission. The performance of LTE networks largely depends on how effectively HARQ, adopted from the Universal Mobile Telecommunication System (UMTS), is employed in this latest communication standard. The major challenge in LTE is to balance QoS and fairness among users. Hence, it is essential to design a downlink scheduling scheme that delivers the expected service quality to customers and utilizes system resources efficiently. This paper provides a comprehensive literature review of the LTE MAC layer and of six QoS/channel-aware downlink scheduling algorithms designed for this purpose. The contributions of this paper are to identify gaps in knowledge in the downlink scheduling procedure and to point out future research directions. Based on a comparative study of the reviewed algorithms, the paper concludes that the EXP Rule scheduler is best suited for LTE networks owing to its low Packet Loss Ratio (PLR), low Packet Delay (PD), high throughput, fairness, and spectral efficiency.
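    For concreteness, here is a hedged sketch of the EXP Rule metric the review favours, in its commonly cited form: a proportional-fair channel term multiplied by an exponential urgency term driven by head-of-line delay. The user set, parameter values, and variable names are illustrative only, not taken from the paper.

        import math

        def exp_rule_metric(inst_rate, avg_rate, hol_delay, a, mean_aw):
            spectral = inst_rate / avg_rate                  # PF channel term
            urgency = math.exp((a * hol_delay - mean_aw) /
                               (1.0 + math.sqrt(mean_aw)))   # delay boost
            return spectral * urgency

        # (user, instantaneous rate, long-term rate, HOL delay ms, a_i)
        users = [("u1", 12.0, 10.0, 5.0, 0.2),
                 ("u2", 8.0, 10.0, 40.0, 0.2)]
        mean_aw = sum(a * w for _, _, _, w, a in users) / len(users)
        best = max(users, key=lambda u: exp_rule_metric(u[1], u[2], u[3],
                                                        u[4], mean_aw))
        print("schedule:", best[0])   # the delayed user u2 wins the resource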

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, and it continues to grow both in the number of users and in the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and thus an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, together with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or through traffic classification using static policies, as in the current approach to QoS provisioning, Differentiated Services (Diffserv). This use of static resource allocation and traffic-shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services.

    The aim of this thesis is to investigate and propose a novel QoS architecture that considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic poses to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness.

    This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to offer a QoS-optimised experience to every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services.

    The CAPS architecture has been validated through extensive simulations, with topologies replicating the complexity and scale of real ISP network infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with Best-Effort Internet, traditional Diffserv, and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, by avoiding static resource allocation, can also adapt with the Internet user as their use of services changes.
    France Telecom
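    A hypothetical sketch of the user-centric idea: derive scheduling weights from each user's own current service mix rather than from static per-class priorities. The service classes, sensitivity values, and function names below are invented for illustration; CAPS itself is more elaborate and congestion-aware.

        SENSITIVITY = {"voip": 3.0, "video": 2.0, "web": 1.0, "p2p": 0.3}

        def user_weights(active_flows):
            """Map one user's active flows to weights that sum to 1."""
            raw = {f: SENSITIVITY.get(kind, 1.0)
                   for f, kind in active_flows.items()}
            total = sum(raw.values())
            return {f: w / total for f, w in raw.items()}

        # A VoIP call keeps priority over the same user's P2P seeding, while
        # another user's lone web flow is untouched by either.
        print(user_weights({"call": "voip", "seed": "p2p"}))   # ~0.91 / 0.09
        print(user_weights({"browse": "web"}))                 # 1.0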