182 research outputs found

    A fully scalable wavelet video coding scheme with homologous inter-scale prediction

    In this paper, we present a fully scalable wavelet-based video coding architecture called STP-Tool, in which motion-compensated, temporally filtered subbands of spatially scaled versions of a video sequence can be used as a base layer for inter-scale prediction. These predictions take place in a pyramidal closed-loop structure between data at homologous resolutions, i.e., without the need for spatial interpolation. The presented implementation of the STP-Tool architecture is based on the reference software of the Wavelet Video Coding MPEG Ad-Hoc Group. The STP-Tool architecture compensates for some of the typical drawbacks of current wavelet-based scalable video coding architectures and shows competitive objective and visual results, even when compared with other wavelet-based or MPEG-4 AVC/H.264-based scalable video coding systems.
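
    As an illustration of the homologous prediction idea (a sketch, not the STP-Tool reference software), the following Python fragment forms a closed-loop residual between the LL subband of a full-resolution frame and the reconstructed low-resolution base layer, which share the same dimensions and therefore need no spatial interpolation; the use of pywt and the function name are assumptions made for the example.

        import pywt

        def interscale_residual(full_res_frame, base_layer_recon):
            # One-level 2-D DWT of the full-resolution frame.
            ll, (lh, hl, hh) = pywt.dwt2(full_res_frame, 'haar')
            # Homologous resolutions: the LL subband and the base-layer
            # reconstruction have identical shapes, so no interpolation is needed.
            assert ll.shape == base_layer_recon.shape
            residual_ll = ll - base_layer_recon   # closed-loop prediction residual
            return residual_ll, (lh, hl, hh)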

    Generic techniques to reduce SVC enhancement layer encoding complexity

    Scalable video coding is an important mechanism for providing several types of end-user devices with different versions of the same encoded bitstream. However, scalable video encoding remains a computationally expensive operation. To decrease this complexity, we propose generic techniques: generic in the sense that they can be combined with existing fast mode decision methods and optimizations. We show that extending one such fast mode decision technique yields an average complexity reduction of 87.27%, at the cost of only a 0.74% increase in bit rate and a 0.11 dB decrease in PSNR compared with the original fast mode decision method.
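
    A hedged sketch of how such a generic technique might wrap an existing fast mode decision method (the names and mode neighbourhoods below are illustrative assumptions, not the paper's algorithm): the enhancement-layer candidate list is pruned using the co-located base-layer mode before the unchanged fast mode decision evaluates what remains.

        def prune_candidates(base_layer_mode, all_modes):
            # Keep the base-layer mode and a few related partitions; skipping the
            # rest is where the complexity saving comes from (illustrative table).
            neighbours = {
                'SKIP':  ['SKIP', '16x16'],
                '16x16': ['SKIP', '16x16', '16x8', '8x16'],
                '8x8':   ['16x16', '8x8', 'INTRA'],
            }
            allowed = neighbours.get(base_layer_mode, all_modes)
            return [m for m in all_modes if m in allowed]

        def encode_macroblock(mb, base_layer_mode, fast_mode_decision):
            all_modes = ['SKIP', '16x16', '16x8', '8x16', '8x8', 'INTRA']
            candidates = prune_candidates(base_layer_mode, all_modes)
            return fast_mode_decision(mb, candidates)   # existing method, unchanged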

    Transparent encryption with scalable video communication: Lower-latency, CABAC-based schemes

    Selective encryption masks all of the content without completely hiding it, as full encryption would do, though at a cost in encryption delay and increased bandwidth. Many commercial applications of video encryption do not even require selective encryption, because greater utility can be gained from transparent encryption, i.e. allowing prospective viewers to glimpse a reduced-quality version of the content as a taster. Our lightweight selective encryption scheme, when applied to scalable video coding, is well suited to transparent encryption. The paper illustrates the gains in reduced delay and the increased distortion arising from a transparent encryption that leaves a reduced-quality base layer in the clear. Reduced encryption of B-frames is a further step beyond transparent encryption, in which a reduction in computational overhead is traded against content security and limited distortion. This spectrum of video encryption possibilities is analyzed in the paper; all of the schemes maintain decoder compatibility and add no bitrate overhead, because the input video is jointly encoded and encrypted through careful selection of the entropy coding parameters that are encrypted. The schemes are suitable for both H.264 and HEVC codecs, though they are demonstrated in the paper for H.264. Selected Context-Adaptive Binary Arithmetic Coding (CABAC) parameters are encrypted with a lightweight exclusive-OR technique, chosen for its practicality.
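
    The XOR step can be pictured with the sketch below (assuming the Python 'cryptography' package; the choice of which CABAC-coded parameters to encrypt is the paper's contribution and is not modelled here): selected enhancement-layer bytes are XOR-ed with an AES-CTR keystream while the base layer passes through in the clear, so a standard decoder still plays a reduced-quality version.

        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def keystream(key, nonce, length):
            # AES in counter mode used purely as a keystream generator.
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            return enc.update(bytes(length))

        def transparent_encrypt(selected_bytes, key, nonce):
            # XOR the selected entropy-coded bytes with the keystream; applying
            # the same function again with the same key/nonce decrypts them.
            ks = keystream(key, nonce, len(selected_bytes))
            return bytes(b ^ k for b, k in zip(selected_bytes, ks))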

    Error resilient H.264 coded video transmission over wireless channels

    The H.264/AVC recommendation was first published in 2003 and builds on the concepts of earlier standards such as MPEG-2 and MPEG-4. The H.264 recommendation represents an evolution of existing video coding standards and was developed in response to the growing need for higher compression. Even though H.264 provides greater compression, H.264 compressed video streams are very prone to channel errors in mobile wireless fading channels such as 3G, owing to the high error rates experienced. Common video compression techniques include motion compensation, prediction, transformation, quantization and entropy coding, which are the common elements of a hybrid video codec. The ITU-T recommendation H.264 introduces several new error resilience tools, as well as new features such as Intra Prediction and the Deblocking Filter. The channel model used for testing was a Rayleigh fading channel with the noise component simulated as additive white Gaussian noise (AWGN), using QPSK as the modulation technique. The channel was simulated over several Eb/N0 values to produce bit error rates similar to those found in the literature. Though further research needs to be conducted, results have shown that when the H.264 error resilience tools are used to protect encoded bitstreams against minor channel errors, an improvement in the decoded video quality can be observed. The tools did not perform as well with mild and severe channel errors, as the resultant bitstream was too corrupted. From this, further research into channel coding techniques is needed to determine whether the bitstream can be protected against these sorts of error rates.
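
    The described channel setup can be reproduced with a short simulation along the following lines (a sketch assuming flat Rayleigh fading, ideal channel knowledge at the receiver and Gray-mapped QPSK; it is not the author's test bench):

        import numpy as np

        def qpsk_rayleigh_ber(eb_n0_db, n_bits=1_000_000, seed=0):
            # Simulate Gray-mapped QPSK over a flat Rayleigh fading channel with
            # AWGN and return the bit error rate at the given Eb/N0 (in dB).
            rng = np.random.default_rng(seed)
            bits = rng.integers(0, 2, n_bits)
            symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
            h = (rng.standard_normal(symbols.size)
                 + 1j * rng.standard_normal(symbols.size)) / np.sqrt(2)
            es_n0 = 2 * 10 ** (eb_n0_db / 10)          # 2 bits per QPSK symbol
            noise = np.sqrt(1 / (2 * es_n0)) * (rng.standard_normal(symbols.size)
                                                + 1j * rng.standard_normal(symbols.size))
            received = (h * symbols + noise) / h        # ideal equalisation
            detected = np.empty(n_bits, dtype=int)
            detected[0::2] = received.real < 0
            detected[1::2] = received.imag < 0
            return np.mean(detected != bits)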

    Compressed-domain transcoding of H.264/AVC and SVC video streams


    GRACE: Loss-Resilient Real-Time Video through Neural Codecs

    In real-time video communication, retransmitting lost packets over high-latency networks is not viable due to strict latency requirements. To counter packet losses without retransmission, two primary strategies are employed -- encoder-based forward error correction (FEC) and decoder-based error concealment. The former encodes data with redundancy before transmission, yet determining the optimal redundancy level in advance proves challenging. The latter reconstructs video from partially received frames, but dividing a frame into independently coded partitions inherently compromises compression efficiency, and the lost information cannot be effectively recovered by the decoder without adapting the encoder. We present a loss-resilient real-time video system called GRACE, which preserves the user's quality of experience (QoE) across a wide range of packet losses through a new neural video codec. Central to GRACE's enhanced loss resilience is the joint training of its neural encoder and decoder under a spectrum of simulated packet losses. In lossless scenarios, GRACE achieves video quality on par with conventional codecs (e.g., H.265). As the loss rate escalates, GRACE exhibits a more graceful, less pronounced decline in quality, consistently outperforming other loss-resilient schemes. Through extensive evaluation on various videos and real network traces, we demonstrate that GRACE reduces undecodable frames by 95% and stall duration by 90% compared with FEC, while markedly boosting video quality over error concealment methods. In a user study with 240 crowdsourced participants and 960 subjective ratings, GRACE registers a 38% higher mean opinion score (MOS) than the other baselines.
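
    A conceptual PyTorch sketch of the joint-training idea (the encoder/decoder modules, packetisation granularity and loss model are placeholders, not GRACE's actual architecture): the latent is split into packets, a random subset is zeroed to simulate loss, and the decoder learns to reconstruct the frame from whatever survives.

        import torch
        import torch.nn.functional as F

        def training_step(encoder, decoder, frame, optimizer, max_loss_rate=0.5):
            latent = encoder(frame)                              # e.g. (B, C, H, W)
            packets = list(latent.flatten(1).chunk(32, dim=1))   # crude packetisation
            loss_rate = torch.rand(1).item() * max_loss_rate
            kept = [p if torch.rand(1).item() > loss_rate else torch.zeros_like(p)
                    for p in packets]
            degraded = torch.cat(kept, dim=1).view_as(latent)
            recon = decoder(degraded)             # decode from surviving packets only
            loss = F.mse_loss(recon, frame)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()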