
    Layered Wyner-Ziv video coding: a new approach to video compression and delivery

    Following recent theoretical works on successive Wyner-Ziv coding, we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered Wyner-Ziv coding for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic Wyner-Ziv coding when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that Wyner-Ziv coding gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding is more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks. For scalable video transmission over the Internet and 3G wireless networks, we propose a system for receiver-driven layered multicast based on layered Wyner-Ziv video coding and digital fountain coding. Digital fountain codes are near-capacity erasure codes that are ideally suited for multicast applications because of their rateless property. By combining an error-resilient Wyner-Ziv video coder and rateless fountain codes, our system allows reliable multicast of high-quality video to an arbitrary number of heterogeneous receivers without the requirement of feedback channels. Extending this work on separate source-channel coding, we consider distributed joint source-channel coding by using a single channel code for both video compression (via Slepian-Wolf coding) and packet loss protection. We choose Raptor codes - the best approximation to a digital fountain - and address in detail both encoder and decoder designs. Simulation results show that, compared to one separate design using Slepian-Wolf compression plus erasure protection and another based on FGS coding plus erasure protection, the proposed joint design provides better video quality at the same number of transmitted packets.
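    The core mechanism behind this kind of coder is nested scalar quantization with decoder side information: the encoder transmits only a coset (bin) index of each quantized DCT coefficient, and the decoder resolves the remaining ambiguity using the base-layer reconstruction. The sketch below is a minimal Python illustration of that idea, not the coder described in the abstract; the step size, number of cosets, and function names are assumptions chosen for clarity.

        # Illustrative sketch of Wyner-Ziv-style nested scalar quantization.
        # The encoder sends only a coset index; the decoder picks the coset
        # member closest to its side information (here, the base-layer value).

        def encode_coset(x, step=4.0, num_cosets=8):
            """Quantize x and keep only the coset index that is transmitted."""
            q = round(x / step)        # fine quantization index
            return q % num_cosets      # coset index: far fewer bits than q itself

        def decode_with_side_info(coset, side_info, step=4.0, num_cosets=8):
            """Choose the reconstruction in the signalled coset nearest the side information."""
            q_side = round(side_info / step)
            base = q_side - (q_side % num_cosets) + coset
            candidates = (base - num_cosets, base, base + num_cosets)
            q_hat = min(candidates, key=lambda q: abs(q * step - side_info))
            return q_hat * step

        # Example: true enhancement-layer value 17.3, base-layer side information 15.9.
        idx = encode_coset(17.3)
        print(decode_with_side_info(idx, 15.9))   # ~16.0, recovered from a 3-bit coset index

    Decoding succeeds as long as the side information lies within roughly half a coset spacing of the true value, which is why the quality of the base layer constrains how coarsely the quantization indices can be binned.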

    MASCOT : metadata for advanced scalable video coding tools : final report

    The goal of the MASCOT project was to develop new video coding schemes and tools that provide both increased coding efficiency and extended scalability features compared to technology that was available at the beginning of the project. Towards that goal, the following tools were to be used: metadata-based coding tools; new spatiotemporal decompositions; and new prediction schemes. Although the initial goal was to develop one single codec architecture able to combine all the new coding tools foreseen when the project was formulated, it became clear that this would limit the selection of the new tools. Therefore the consortium decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all tools developed during the course of the project.

    Real-time video streaming using peer-to-peer for video distribution

    The growth of the Internet has led to research and development of several new and useful applications including video streaming. Commercial experiments are underway to determine the feasibility of multimedia broadcasting using packet based data networks alongside traditional over-the-air broadcasting. Broadcasting companies are offering low cost or free versions of video content online to both gauge and at the same time generate popularity. In addition to television broadcasting, video streaming is used in a number of application areas including video conferencing, telecommuting and long distance education. Large scale video streaming has not become as widespread or widely deployed as could be expected. The reason for this is the high bandwidth requirement (and thus high cost) associated with video data. Provision of a constant stream of video data on a medium to large scale typically consumes a significant amount of bandwidth. An effect of this is that encoding bit rates are lowered and consequently video quality is degraded, resulting in even slower uptake rates for video streaming services. The aim of this dissertation is to investigate peer-to-peer streaming as a potential solution to this bandwidth problem. The proposed peer-to-peer based solution relies on end user co-operation for video data distribution. This approach is highly effective in reducing the outgoing bandwidth requirement for the video streaming server. End users redistribute received video chunks amongst their respective peers and in so doing increase the potential capacity of the entire network for supporting more clients. A secondary effect of such a system is that encoding capabilities (including higher encoding bit rates or encoding of additional sub-channels) can be enhanced. Peer-to-peer distribution enables any regular user to stream video to large streaming networks with many viewers. This research includes a detailed overview of the fields of video streaming and peer-to-peer networking. Techniques for optimal video preparation and data distribution were investigated. A variety of academic and commercial peer-to-peer based multimedia broadcasting systems were analysed as a means to further define and place the proposed implementation in context with respect to other peercasting implementations. A proof-of-concept of the proposed implementation was developed, mathematically analysed and simulated in a typical deployment scenario. Analysis was carried out to predict simulation performance and as a form of design evaluation and verification. The analysis highlighted some critical areas which resulted in adaptations to the initial design as well as conditions under which performance can be guaranteed. A simulation of the proof-of-concept system was used to determine the extent of bandwidth savings for the video server. The aim of the simulations was to show that it is possible to encode and deliver video data in real time over a peer-to-peer network. The proposed system met expectations and showed significant bandwidth savings for a substantially large video streaming audience. The implementation was able to encode video in real time and continually stream video packets on time to connected peers while continually supporting network growth by connecting additional peers (or stream viewers).
    The system performed well under typical real world restrictions on available bandwidth capacity. Dissertation (MEng)--University of Pretoria, 2009. Electrical, Electronic and Computer Engineering.
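    A simple way to see the server-side bandwidth saving described above is to compare plain unicast delivery with a scheme in which the server uploads each chunk to only a handful of seed peers that then forward it onwards. The Python fragment below is a back-of-the-envelope illustration under assumed numbers (viewer count, chunk size, seeds per chunk); it is not the dissertation's analytical model or simulation.

        # Back-of-the-envelope comparison of server upload per video chunk:
        # plain unicast vs. P2P forwarding, where only a few seed peers are
        # served directly and the swarm redistributes the chunk to the rest.

        def server_upload_per_chunk(num_viewers, chunk_kb, seeds_per_chunk=4):
            unicast = num_viewers * chunk_kb                       # serve every viewer directly
            p2p = min(seeds_per_chunk, num_viewers) * chunk_kb     # serve only the seed peers
            return unicast, p2p

        unicast, p2p = server_upload_per_chunk(num_viewers=1000, chunk_kb=64)
        saving = 100.0 * (1.0 - p2p / unicast)
        print(f"unicast: {unicast} kB, p2p: {p2p} kB, server saving: {saving:.1f}%")

    In a real swarm the achievable saving is limited by peer upload capacity and churn, which is presumably why the design analysis above had to establish conditions under which performance can be guaranteed.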

    Parallelism and the software-hardware interface in embedded systems

    This thesis by publications addresses issues in the architecture and microarchitecture of next generation, high performance streaming Systems-on-Chip through quantifying the most important forms of parallelism in current and emerging embedded system workloads. The work consists of three major research tracks, relating to data level parallelism, thread level parallelism and the software-hardware interface, which together reflect the research interests of the author as they have been formed in the last nine years. Published works confirm that parallelism at the data level is widely accepted as the most important performance leverage for the efficient execution of embedded media and telecom applications and has been exploited via a number of approaches, the most efficient being vector/SIMD architectures. A further, complementary and substantial form of parallelism exists at the thread level but this has not been researched to the same extent in the context of embedded workloads. For the efficient execution of such applications, exploitation of both forms of parallelism is of paramount importance. This calls for a new architectural approach in the software-hardware interface as its rigidity, manifested in all desktop-based and the majority of embedded CPUs, directly affects the performance of vectorized, threaded codes. The author advocates a holistic, mature approach where parallelism is extracted via automatic means while, at the same time, the traditionally rigid hardware-software interface is optimized to match the temporal and spatial behaviour of the embedded workload. This ultimate goal calls for the precise study of these forms of parallelism for a number of applications executing on theoretical models such as instruction set simulators and parallel RAM machines, as well as the development of highly parametric microarchitectural frameworks to encapsulate that functionality. EThOS - Electronic Theses Online Service, United Kingdom.
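    As a rough illustration of the two forms of parallelism the thesis quantifies, the Python/NumPy sketch below applies one operation to every pixel of a frame (data-level parallelism, the pattern vector/SIMD units execute in wide lanes) while independent frames are processed concurrently by a thread pool (thread-level parallelism). It is a conceptual analogy only; the thesis targets embedded hardware, not Python, and the frame sizes and thread count are arbitrary.

        # Conceptual analogy for the two forms of parallelism discussed above.
        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        frames = [np.random.rand(64, 64).astype(np.float32) for _ in range(8)]

        def filter_frame(frame):
            # Data-level parallelism: one expression touches every pixel,
            # the kind of work a vector/SIMD unit performs across parallel lanes.
            return np.clip(frame * 1.2 - 0.1, 0.0, 1.0)

        # Thread-level parallelism: independent frames are filtered concurrently.
        with ThreadPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(filter_frame, frames))

        print(len(results), results[0].shape)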