
    Joint Source Channel Rate-Distortion Analysis for Adaptive Mode Selection and Rate Control in Wireless Video Coding

    DOI: 10.1109/TCSVT.2002.800313
    In this paper, we first develop a rate-distortion (R-D) model for DCT-based video coding that incorporates the macroblock (MB) intra refreshing rate. For any given bit rate and intra refreshing rate, this model can estimate the corresponding coding distortion even before a video frame is coded. We then present a theoretical analysis of the picture distortion caused by channel errors and the subsequent inter-frame propagation. Based on this analysis, we develop a statistical model to estimate the channel-error-induced distortion for different channel conditions and encoder settings. The proposed analytic model mathematically describes the complex behavior of channel errors in a video coding and transmission system. Unlike other experimental approaches to distortion estimation reported in the literature, this analytic model has very low computational complexity and implementation cost, which is highly desirable in wireless video applications. Simulation results show that the model accurately estimates the channel-error-induced distortion with minimal processing delay. Based on the proposed source-coding R-D model and the analytic channel-distortion estimate, we derive an analytic solution for adaptive intra mode selection and joint source-channel rate control under time-varying wireless channel conditions. Extensive experimental results demonstrate that this scheme significantly improves end-to-end video quality in wireless video coding and transmission.
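    The abstract does not give the model's closed form; as a rough illustration of the kind of estimate it describes, the sketch below combines a toy source R-D curve with a toy error-propagation term. The parametric forms and constants (theta, leak, d_error, the rate overhead) are assumptions for demonstration, not the paper's model.

```python
# Illustrative sketch (not the paper's model): end-to-end distortion as
# source distortion plus channel-error-induced distortion, as a function of
# bit rate, macroblock intra refreshing rate, and packet loss rate.

def source_distortion(rate_bpp: float, intra_rate: float,
                      theta: float = 1.0, overhead: float = 0.02) -> float:
    """Toy R-D curve: distortion falls roughly as theta / effective rate.
    A higher intra refreshing rate spends bits on intra MBs, leaving less
    for residual coding (modelled here as a small rate penalty)."""
    effective_rate = max(rate_bpp - overhead * intra_rate * rate_bpp, 1e-6)
    return theta / effective_rate

def channel_distortion(packet_loss: float, intra_rate: float,
                       leak: float = 0.9, d_error: float = 50.0) -> float:
    """Toy error-propagation model: each lost packet injects d_error, which
    decays geometrically across frames; intra refreshing shortens the
    propagation (the attenuation factor shrinks as intra_rate grows)."""
    attenuation = leak * (1.0 - intra_rate)
    return packet_loss * d_error / (1.0 - attenuation)

def end_to_end_distortion(rate_bpp, intra_rate, packet_loss):
    return (source_distortion(rate_bpp, intra_rate)
            + channel_distortion(packet_loss, intra_rate))

if __name__ == "__main__":
    # Sweep the intra refreshing rate for a fixed bit rate and loss rate.
    for beta in (0.0, 0.1, 0.2, 0.3):
        d = end_to_end_distortion(rate_bpp=0.5, intra_rate=beta, packet_loss=0.05)
        print(f"intra_rate={beta:.1f}  estimated distortion={d:.2f}")
```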

    Optimal Control of Wireless Computing Networks

    Augmented information (AgI) services allow users to consume information that results from the execution of a chain of service functions that process source information to create real-time augmented value. Applications include real-time analysis of remote sensing data, real-time computer vision, personalized video streaming, and augmented reality, among others. We consider the problem of optimally distributing AgI services over a wireless computing network in which nodes are equipped with both communication and computing resources. We characterize the wireless computing network capacity region and design a joint flow scheduling and resource allocation algorithm that stabilizes the underlying queuing system while achieving a network cost arbitrarily close to the minimum, at the expense of a tradeoff in network delay. Our solution captures the unique chaining and flow scaling aspects of AgI services, while exploiting the broadcast approach coding scheme over the wireless channel.
    Comment: 30 pages, journal
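    The queue-stabilizing, near-minimum-cost tradeoff described here is characteristic of Lyapunov drift-plus-penalty control; the sketch below is a minimal single-node illustration of that general idea under assumed rates, costs, and tradeoff parameter V, not the paper's actual algorithm or network model.

```python
# Drift-plus-penalty style sketch for one node that can either process a
# commodity locally (computing) or transmit it to a neighbor (communication).
# All rates, costs, arrivals, and V are made-up assumptions.
import random

V = 10.0                 # cost/delay tradeoff: larger V -> lower cost, larger queues
queue_local = 0.0        # backlog awaiting processing at this node
queue_neighbor = 0.0     # observed backlog of the same commodity at the neighbor
PROC_RATE, TX_RATE = 3.0, 2.0
PROC_COST, TX_COST = 1.0, 0.5

for t in range(1000):
    queue_local += random.uniform(0.0, 4.0)        # random arrivals this slot

    # Weight of each action: queue differential times service rate, minus V * cost.
    w_proc = queue_local * PROC_RATE - V * PROC_COST
    w_tx = (queue_local - queue_neighbor) * TX_RATE - V * TX_COST

    best = max(w_proc, w_tx)
    if best <= 0:
        continue                                   # idling is the cheapest admissible choice
    if best == w_proc:
        queue_local = max(queue_local - PROC_RATE, 0.0)
    else:
        served = min(TX_RATE, queue_local)
        queue_local -= served
        queue_neighbor += served                   # the neighbor would drain this in a full model

print(f"final local backlog after 1000 slots: {queue_local:.1f}")
```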

    Scalable Video Streaming with Prioritised Network Coding on End-System Overlays

    PhD thesis. Distribution over the internet is destined to become a standard approach for live broadcasting of TV or events of nation-wide interest. The demand for high-quality live video with personal requirements is expected to grow rapidly over the next few years. End-system multicast is a desirable option for relieving the content server of bandwidth bottlenecks and computational load, as it allows decentralised allocation of resources to users and distributed service management. Network coding provides innovative solutions for a multitude of issues related to multi-user content distribution, such as the coupon-collector problem and the allocation and scheduling procedures. This thesis tackles the problem of streaming scalable video over end-system multicast overlays with prioritised push-based streaming. We analyse the characteristics arising from a random coding process viewed as a linear channel operator, and present a novel error detection and correction system for error-resilient decoding, providing one of the first practical frameworks for joint source-channel-network coding. Our system outperforms both network error correction and traditional FEC coding when they are performed separately. We then present a content distribution system based on end-system multicast. Our data exchange protocol uses network coding as a way to collaboratively deliver data to several peers. Prioritised streaming is performed by means of hierarchical network coding and a dynamic chunk selection for optimised rate allocation based on goodput statistics at the application layer. We show, through simulated experiments, the efficient allocation of resources for adaptive video delivery. Finally, we describe the implementation of our coding system. We highlight the use of rateless coding properties, discuss the application to collaborative and distributed coding systems, and provide an optimised implementation of the decoding algorithm using advanced CPU instructions. We analyse computational load and packet loss protection via lab tests and simulations, complementing the overall analysis of the video streaming system in all its components.
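    As a point of reference for the random coding process mentioned above, the sketch below shows generic random linear network coding over GF(2) with Gaussian-elimination decoding. It is a toy illustration of the basic mechanism, not the thesis's prioritised/hierarchical coding or its error-correction framework.

```python
# Random linear network coding over GF(2): each coded packet is the XOR of a
# random subset of source packets; the receiver decodes by Gaussian
# elimination once it holds enough linearly independent packets.
import random

def encode(sources: list[bytes]) -> tuple[list[int], bytes]:
    """Return (coefficient vector over GF(2), coded payload)."""
    coeffs = [random.randint(0, 1) for _ in sources]
    if not any(coeffs):
        coeffs[random.randrange(len(sources))] = 1    # avoid an all-zero combination
    payload = bytes(len(sources[0]))
    for c, s in zip(coeffs, sources):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, s))
    return coeffs, payload

def decode(packets: list[tuple[list[int], bytes]], k: int):
    """Gaussian elimination over GF(2); returns the k source packets, or None
    if the received coefficient vectors do not yet have full rank."""
    rows = [(list(c), bytearray(p)) for c, p in packets]
    for col in range(k):
        pivot = next((r for r in rows
                      if r[0][col] == 1 and not any(r[0][:col])), None)
        if pivot is None:
            return None
        for r in rows:
            if r is not pivot and r[0][col] == 1:
                r[0][:] = [a ^ b for a, b in zip(r[0], pivot[0])]
                r[1][:] = bytes(a ^ b for a, b in zip(r[1], pivot[1]))
    units = [r for r in rows if any(r[0])]            # the k reduced unit rows
    units.sort(key=lambda r: r[0].index(1))
    return [bytes(r[1]) for r in units]

sources = [bytes([i]) * 64 for i in range(4)]         # four 64-byte source packets
received: list[tuple[list[int], bytes]] = []
recovered = None
while recovered is None:                              # collect until decodable
    received.append(encode(sources))
    recovered = decode(received, len(sources))
print(recovered == sources)                           # True once rank 4 is reached
```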

    Joint learning for side information and correlation model based on linear regression model in distributed video coding

    The coding efficiency of a distributed video coding system is largely determined by the quality of the side information and the correlation model. Motivated by a theoretical analysis of the maximum-likelihood treatment of the linear regression model, in this paper we propose a novel joint online learning model for side information generation and correlation model estimation. In the proposed scheme, each pixel in the side information is approximated as a linear weighted combination of samples within a local spatio-temporal neighborhood. The weights are trained in a self-feedback fashion, during which the correlation model parameters are also obtained. The efficiency of the proposed joint learning model is confirmed experimentally. © 2009 IEEE.
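    A minimal sketch of the general idea, assuming a plain least-squares fit (the ML solution under a Gaussian linear regression model) of per-pixel weights over a toy spatio-temporal neighborhood; the window size, data, and training loop are illustrative assumptions, not the paper's self-feedback procedure.

```python
# Predict each side-information pixel as a linear weighted combination of
# co-located spatio-temporal neighbor samples, with weights fitted by least
# squares on toy data.
import numpy as np

def fit_weights(neighbor_samples: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """neighbor_samples: (n_pixels, n_neighbors), targets: (n_pixels,)."""
    w, *_ = np.linalg.lstsq(neighbor_samples, targets, rcond=None)
    return w

def predict(neighbors: np.ndarray, w: np.ndarray) -> float:
    return float(neighbors @ w)

# Toy data: a 3x3 spatio-temporal neighborhood (9 samples per pixel).
rng = np.random.default_rng(0)
X = rng.uniform(0, 255, size=(500, 9))
true_w = rng.dirichlet(np.ones(9))               # unknown "true" blending weights
y = X @ true_w + rng.normal(0, 2.0, size=500)    # targets with correlation noise

w = fit_weights(X, y)
print("prediction for one pixel:", predict(X[0], w))
# The residual variance of (y - X @ w) also gives a plug-in estimate of the
# correlation-noise variance that a decoder-side correlation model could use.
```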

    Analysis and Simulation of CABAC (Context Adaptive Binary Arithmetic Coding) in H.264 Coding for Applications over a Wireless LAN Network

    ABSTRACT: In multimedia, images and video are closely related. A video consists of a sequence of images, each made up of pixels, so storing and transmitting video information requires large memory capacity and bandwidth. To achieve high compression ratios at low bit rates, the H.264 AVC standard was introduced. H.264 defines a Video Coding Layer and a Network Abstraction Layer. The Video Coding Layer in H.264 has the same basic elements as previous video coding standards, namely prediction, transformation, quantization, and entropy coding; the changes in H.264 lie in the details of these basic elements, together with the addition of a deblocking filter. In this final project, the performance of an H.264 encoder-decoder over a Wireless LAN network is analyzed and simulated. The video inputs are taken from a TV tuner, a mobile phone camera, and downloads from the internet. The entropy coding used is CABAC (Context Adaptive Binary Arithmetic Coding). The H.264 encoder-decoder system uses the Joint Model (JM) 1.7 reference software, while the analysis of the H.264 coding techniques uses the Evalvid Tool application and NS-2 for simulation of the 802.11 wireless IP network. With H.264 coding, in tests without the Wireless LAN network model, the maximum bit rate was obtained for the Stefan video sequence at 8597.28 Kbps, while the minimum bit rate was obtained for the Naa video sequence at 2276.66 Kbps. The best performance of H.264 coding over the Wireless LAN network model was obtained in the QP range 11 to 23, with quality between about 38 dB and 41 dB. Relating the subjective rating scale to the objective rating scale, input video with a large amount of motion has the lowest scores, while input video with little motion has high scores.
    Keywords: H.264 coding, CABAC, Wireless LAN
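    The quality figures quoted in dB are presumably PSNR values; the sketch below is the standard PSNR computation for 8-bit frames, shown for reference rather than as part of the Evalvid/NS-2 tool chain used in the thesis.

```python
# Standard PSNR for 8-bit frames, with a toy CIF-sized example.
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                      # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(288, 352), dtype=np.uint8)   # CIF luma plane
noisy = np.clip(frame + rng.normal(0, 3, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(frame, noisy):.1f} dB")
```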

    A joint motion & disparity motion estimation technique for 3D integral video compression using evolutionary strategy

    3D imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging, which can capture true 3D color images with only one camera, has been seen as the right technology to offer stress-free viewing to audiences of more than one person. Just like any digital video, 3D video sequences must be compressed to make them suitable for consumer-domain applications. However, ordinary compression techniques found in state-of-the-art video coding standards such as H.264, MPEG-4 and MPEG-2 cannot produce sufficient compression while preserving the 3D cues. Fortunately, a large amount of redundancy can be found in an integral video sequence in terms of motion and disparity. This paper discusses a novel approach that uses both motion and disparity information to compress 3D integral video sequences. We propose to decompose the integral video sequence into viewpoint video sequences and to jointly exploit motion and disparity redundancies to maximize compression. We further propose an optimization technique based on evolutionary strategies to minimize the computational complexity of the joint motion-disparity estimation. Experimental results demonstrate that joint motion and disparity estimation can achieve over 1 dB of objective quality gain compared with conventional motion estimation. When combined with the evolutionary strategy, it can achieve up to 94% savings in computational cost.
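    As a rough illustration of how an evolutionary strategy can drive displacement search, the sketch below runs a (1+1) ES that minimizes block SAD over a joint displacement (usable for motion or disparity). The synthetic frames, block size, and mutation scheme are assumptions, not the paper's encoder integration.

```python
# (1+1) evolutionary strategy for block matching: mutate the current best
# displacement and keep the candidate only if it lowers the SAD.
import numpy as np

def sad(ref, cur, dx, dy, x, y, b):
    """Sum of absolute differences for candidate displacement (dx, dy)."""
    h, w = ref.shape
    rx, ry = x + dx, y + dy
    if rx < 0 or ry < 0 or rx + b > w or ry + b > h:
        return float("inf")                      # candidate falls outside the frame
    return float(np.abs(cur[y:y+b, x:x+b].astype(int)
                        - ref[ry:ry+b, rx:rx+b].astype(int)).sum())

def es_search(ref, cur, x, y, b=16, iters=200, search=8, seed=0):
    rng = np.random.default_rng(seed)
    best = np.array([0, 0])                      # current displacement estimate
    best_cost = sad(ref, cur, 0, 0, x, y, b)
    for _ in range(iters):
        cand = np.clip(best + rng.integers(-2, 3, size=2), -search, search)
        cost = sad(ref, cur, int(cand[0]), int(cand[1]), x, y, b)
        if cost < best_cost:                     # greedy (1+1) selection
            best, best_cost = cand, cost
    return (int(best[0]), int(best[1])), best_cost

# Smooth synthetic QCIF frames with a known shift (true displacement dx=+3, dy=-2).
yy, xx = np.mgrid[0:144, 0:176]
ref = ((np.sin(xx / 7.0) + np.cos(yy / 5.0)) * 60 + 128).astype(np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
vec, cost = es_search(ref, cur, x=64, y=64)
print("estimated displacement:", vec, "SAD:", cost)
```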
