4,810 research outputs found

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures, focusing on their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
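
    The review itself presents no code, but the Slepian–Wolf principle it builds on is easy to illustrate: the sensor transmits only a syndrome of its data, and the decoder recovers the data exactly using correlated side information. The sketch below is a toy example, assuming an illustrative (7,4) Hamming code and a side-information sequence that differs from the source in at most one bit; neither assumption comes from the paper.

```python
# Minimal Slepian-Wolf sketch (illustrative, not from the reviewed paper):
# the encoder sends only the 3-bit syndrome of a 7-bit source X; the decoder
# uses correlated side information Y (differing from X in at most one bit)
# to recover X exactly, so 3 bits are transmitted instead of 7.
import numpy as np

# Parity-check matrix of the (7,4) Hamming code over GF(2);
# column i is the binary representation of i (row 0 = least significant bit).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def encode(x):
    """Slepian-Wolf encoder: transmit only the syndrome s = H x (3 bits)."""
    return H.dot(x) % 2

def decode(s, y):
    """Decoder: combine the received syndrome with side information y."""
    se = (H.dot(y) + s) % 2          # syndrome of the error pattern e = x XOR y
    x_hat = y.copy()
    if se.any():
        # For this H, the syndrome read as a binary number is the
        # (1-based) position of the single flipped bit.
        pos = se[0] + 2 * se[1] + 4 * se[2]
        x_hat[pos - 1] ^= 1
    return x_hat

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7, dtype=np.uint8)        # source observed at the sensor
e = np.zeros(7, dtype=np.uint8); e[rng.integers(7)] = 1
y = x ^ e                                        # correlated side info at the decoder
assert np.array_equal(decode(encode(x), y), x)   # exact recovery from 3 sent bits
```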

    Fulcrum: Flexible Network Coding for Heterogeneous Devices

    We introduce Fulcrum, a network coding framework that achieves three seemingly conflicting objectives: 1) to reduce the coding coefficient overhead down to nearly n bits per packet in a generation of n packets; 2) to conduct the network coding using only Galois field GF(2) operations at intermediate nodes if necessary, dramatically reducing computing complexity in the network; and 3) to deliver an end-to-end performance that is close to that of a high-field network coding system for high-end receivers, while simultaneously catering to low-end receivers that decode in GF(2). As a consequence of 1) and 3), Fulcrum has a unique trait missing so far in the network coding literature: it provides the network with the flexibility to distribute computational complexity over different devices depending on their current load, network conditions, or energy constraints. At the core of our framework lies the idea of precoding at the sources using an expansion field GF(2^h), h > 1, to increase the number of dimensions seen by the network. Fulcrum can use any high-field linear code for precoding, e.g., Reed-Solomon or Random Linear Network Coding (RLNC). Our analysis shows that the number of additional dimensions created during precoding controls the trade-off between delay, overhead, and computing complexity. Our implementation and measurements show that Fulcrum achieves similar decoding probabilities to high-field RLNC but with encoders and decoders that are an order of magnitude faster.
    Funding: Green Mobile Cloud project (grant DFF-0602-01372B); Colorcast project (grant DFF-0602-02661B); TuneSCode project (grant DFF-1335-00125); Danish Council for Independent Research (grant DFF-4002-00367); Ministerio de Economía, Industria y Competitividad - Fondo Europeo de Desarrollo Regional (grants MTM2012-36917-C03-03, MTM2015-65764-C3-2-P, MTM2015-69138-REDT); Agencia Estatal de Investigación - Fondo Social Europeo (grant RYC-2016-20208); Aarhus Universitets Forskningsfond Starting (grant AUFF-2017-FLS-7-1)
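
    The GF(2)-only operation that Fulcrum allows at intermediate nodes and low-end receivers amounts to binary RLNC: coded packets are XOR combinations of the generation, described by one bit per source packet, and decoding is Gaussian elimination over GF(2). The sketch below shows only that binary layer under simplified assumptions (it is not the Fulcrum codebase, and the expansion-field precoding stage is omitted).

```python
# Binary RLNC sketch (not the Fulcrum implementation): a generation of n
# source packets is mixed with random GF(2) coefficients using XOR only;
# the receiver keeps innovative packets and decodes with Gauss-Jordan
# elimination over GF(2). Coefficient vectors are stored as bitmasks.
import random

def combine(mask, packets):
    """XOR together the source packets selected by the GF(2) coefficient bitmask."""
    out = bytearray(len(packets[0]))
    for i, p in enumerate(packets):
        if mask >> i & 1:
            out = bytearray(a ^ b for a, b in zip(out, p))
    return bytes(out)

def decode(coded, n):
    """Gauss-Jordan elimination over GF(2); coded = n independent (mask, payload) rows."""
    rows = [(m, bytearray(p)) for m, p in coded]
    for col in range(n):
        pivot = next(r for r in range(col, n) if rows[r][0] >> col & 1)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(n):
            if r != col and rows[r][0] >> col & 1:
                rows[r] = (rows[r][0] ^ rows[col][0],
                           bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(p) for _, p in rows]          # rows end up in source-packet order

rng = random.Random(7)
n = 4
sources = [bytes([i]) * 16 for i in range(n)]   # one generation of n packets

# Receiver: keep only innovative coded packets (rank-increasing) until rank n.
basis = [0] * n                  # basis[i] holds a coefficient mask with top bit i
received = []
while len(received) < n:
    mask = rng.randrange(1, 1 << n)             # random nonzero GF(2) coefficients
    payload = combine(mask, sources)            # what an encoder or recoder would emit
    x = mask
    for i in reversed(range(n)):
        if x >> i & 1:
            if basis[i] == 0:
                basis[i] = x
                received.append((mask, payload))
                break
            x ^= basis[i]                       # reduce; x == 0 means non-innovative

assert decode(received, n) == sources           # all n source packets recovered
```

    The coefficient overhead here is exactly n bits per coded packet, which is the overhead regime the paper targets; high-end receivers in Fulcrum would additionally exploit the high-field precoding that this sketch leaves out.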

    Error-resilient performance of Dirac video codec over packet-erasure channel

    Video transmission over wireless or wired networks requires an error-resilient mechanism, since compressed video bitstreams are sensitive to transmission errors because of the use of predictive coding and variable-length coding. This paper investigates the performance of a simple, low-complexity error-resilient coding scheme that combines source and channel coding to protect the compressed bitstream of the wavelet-based Dirac video codec over a packet-erasure channel. By partitioning the wavelet transform coefficients of the motion-compensated residual frame into groups and independently processing each group with arithmetic and Forward Error Correction (FEC) coding, Dirac achieves robustness to transmission errors, with video quality degrading gracefully over a range of packet loss rates up to 30% when compared with conventional FEC-only methods. Simulation results also show that the proposed scheme using multiple partitions can achieve up to 10 dB PSNR gain over the existing un-partitioned format. The paper also compares the error-resilient performance of the proposed scheme with that of H.264 over a packet-erasure channel.
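
    The structural idea, partition the coefficients so losses stay local and protect each partition with FEC, can be illustrated independently of Dirac. The sketch below is a deliberately simplified stand-in: it packetises each partition separately and uses a single XOR parity packet per group as the FEC, so any one lost packet in a group is rebuilt and losses never propagate to other groups. The real scheme uses arithmetic coding and stronger FEC; nothing below is taken from the paper's implementation.

```python
# Toy partition-and-protect sketch (not the paper's Dirac scheme): split a
# residual frame's coefficient bytes into independent groups, packetise each
# group, and add one XOR parity packet per group so a single erasure within
# a group is recoverable locally.
import random

def make_group_packets(coeff_bytes, k):
    """Split one partition's payload into k packets plus one XOR parity packet."""
    size = -(-len(coeff_bytes) // k)                                 # ceil division
    data = [coeff_bytes[i*size:(i+1)*size].ljust(size, b'\x00') for i in range(k)]
    parity = bytearray(size)
    for pkt in data:
        parity = bytearray(a ^ b for a, b in zip(parity, pkt))
    return data + [bytes(parity)]

def recover_group(packets):
    """Rebuild the group's payload; tolerates at most one lost (None) packet."""
    k = len(packets) - 1
    lost = [i for i, p in enumerate(packets) if p is None]
    if len(lost) > 1:
        raise ValueError("more than one loss in this group: unrecoverable")
    if lost:
        size = len(next(p for p in packets if p is not None))
        rebuilt = bytearray(size)
        for p in packets:                        # XOR of the survivors = missing packet
            if p is not None:
                rebuilt = bytearray(a ^ b for a, b in zip(rebuilt, p))
        packets = [bytes(rebuilt) if p is None else p for p in packets]
    return b''.join(packets[:k])                 # drop the parity packet

rng = random.Random(3)
partitions = [bytes(rng.randrange(256) for _ in range(40)) for _ in range(4)]
groups = [make_group_packets(p, k=4) for p in partitions]

# Simulate an erasure channel: drop one random packet from the second group.
groups[1][rng.randrange(5)] = None
recovered = [recover_group(g)[:40] for g in groups]
assert recovered == partitions                   # the lone loss is repaired locally
```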

    A survey of digital television broadcast transmission techniques

    This paper is a survey of the transmission techniques used in digital television (TV) standards worldwide. With the increase in demand for High-Definition (HD) TV, video-on-demand and mobile TV services, there was a real need for more bandwidth-efficient, flawless and crisp video quality, which motivated the migration from analogue to digital broadcasting. In this paper we present a brief history of the development of TV and then survey the transmission technology used in the digital terrestrial, satellite, cable and mobile TV standards of different parts of the world. First, we present the Digital Video Broadcasting standards developed in Europe for terrestrial (DVB-T/T2), satellite (DVB-S/S2), cable (DVB-C) and hand-held (DVB-H) transmission. We then describe the Advanced Television Systems Committee standards developed in the USA for both terrestrial (ATSC) and hand-held (ATSC-M/H) transmission. We continue by describing the Integrated Services Digital Broadcasting standards developed in Japan for terrestrial (ISDB-T) and satellite (ISDB-S) transmission, and then present the International System for Digital Television (ISDTV), which was developed in Brazil by adopting the ISDB-T physical-layer architecture. Following the ISDTV, we describe the Digital Terrestrial Multimedia Broadcast (DTMB) standard developed in China. Finally, as a design example, we highlight the physical-layer implementation of the DVB-T2 standard.

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual-viewpoint image is commonly synthesized via depth-image-based rendering (DIBR) from two pairs of transmitted texture and depth maps captured at two neighboring viewpoints. To maintain high quality in the synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene that is visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error-concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels visible in both views are coded more error-resiliently in one view only, given that adaptive blending will erase errors in the other view. Further, the sensitivity of synthesized-view distortion to texture versus depth errors is analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
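
    The receiver-side blending step can be sketched in a few lines: each virtual-view pixel is a weighted average of the corresponding pixels warped from the left and right captured views, with weights favouring the view judged more reliable. The snippet below is a minimal illustration only; the per-pixel distortion maps (d_left, d_right) and the inverse-distortion weighting are hypothetical stand-ins for whatever reliability estimate the paper's full scheme uses, and the DIBR warping itself is not shown.

```python
# Minimal adaptive-blending sketch (not the paper's full DIBR pipeline):
# blend two warped views with per-pixel weights inversely proportional to an
# estimated distortion map for each view, instead of a fixed 50/50 average.
import numpy as np

def blend_views(left, right, d_left, d_right, eps=1e-6):
    """Weighted blend of two warped views; lower estimated distortion -> higher weight."""
    w_left = 1.0 / (d_left + eps)
    w_right = 1.0 / (d_right + eps)
    return (w_left * left + w_right * right) / (w_left + w_right)

h, w = 4, 6
rng = np.random.default_rng(0)
left = rng.uniform(0, 255, (h, w))            # pixels warped from the left view
right = left + rng.normal(0, 20, (h, w))      # right view, corrupted by packet loss
d_left = np.full((h, w), 1.0)                 # left view estimated nearly loss-free
d_right = np.full((h, w), 400.0)              # right view estimated heavily distorted

virtual = blend_views(left, right, d_left, d_right)
# The blend stays close to the reliable view wherever the other is unreliable.
assert np.allclose(virtual, left, atol=2.0)
```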