292 research outputs found

    Improved image decompression for reduced transform coding artifacts

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell that best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected back into the particular quantization partition cells defined by the compressed image. Experimental results are shown for images compressed using scalar quantization of the block DCT and vector quantization of a subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
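    The projection step described above can be sketched for the scalar-quantization case: a model-based estimate of each coefficient is clipped back into the quantization cell implied by the transmitted index, so the result stays consistent with the compressed data. This is a minimal illustration, not the paper's algorithm; the uniform step size, the cell boundaries at half-step offsets, and the stand-in "model update" are all assumptions.

```python
import numpy as np

def project_to_quantization_cell(estimate, indices, step):
    """Clip estimated transform coefficients back into the quantization
    cells implied by the bitstream (hypothetical helper). For uniform
    scalar quantization with step `step`, the cell for index k is
    assumed to be [(k - 0.5) * step, (k + 0.5) * step]."""
    lo = (indices - 0.5) * step
    hi = (indices + 0.5) * step
    return np.clip(estimate, lo, hi)

# One gradient-projection-style iteration: update, then project.
coeffs = np.array([12.0, -3.0, 0.0, 7.5])   # decoded reconstruction points
step = 4.0
indices = np.round(coeffs / step)            # quantizer indices
smoothed = coeffs * 0.7                      # stand-in for an MRF model update
constrained = project_to_quantization_cell(smoothed, indices, step)
```

Coefficients pulled outside their cell by the model update (here the first and last) are clipped back to the nearest cell boundary, while in-cell estimates pass through unchanged.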

    Image representation and compression using steered hermite transforms


    NEW CHANGE DETECTION MODELS FOR OBJECT-BASED ENCODING OF PATIENT MONITORING VIDEO

    The goal of this thesis is to find a highly efficient algorithm to compress patient monitoring video. This type of video mainly contains local motions and a large percentage of idle periods. To specifically utilize these features, we present an object-based approach, which decomposes input video into three objects representing background, slow-motion foreground and fast-motion foreground. Encoding these three video objects with different temporal scalabilities significantly improves the coding efficiency in terms of bitrate vs. visual quality. The video decomposition is built upon change detection which identifies content changes between video frames. To improve the robustness of capturing small changes, we contribute two new change detection models. The model built upon Markov random field theory discriminates the foreground containing the patient being monitored. The other model, called the covariance test method, identifies constantly changing content by exploiting temporal correlation in multiple video frames. Both models show great effectiveness in constructing the defined video objects. We present detailed algorithms of video object construction, as well as experimental results on the object-based coding of patient monitoring video.
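    The idea of exploiting temporal correlation across multiple frames can be sketched with a simple per-pixel temporal-variance test: pixels that keep changing over a short window are flagged as moving foreground. This is a simplified stand-in for the thesis's covariance test, not its actual statistic; the window length and threshold are illustrative assumptions.

```python
import numpy as np

def temporal_change_mask(frames, threshold):
    """Flag pixels whose temporal variance over a window of frames
    exceeds `threshold` (illustrative stand-in for a covariance-based
    change test across multiple frames)."""
    stack = np.stack([f.astype(float) for f in frames])  # (T, H, W)
    return stack.var(axis=0) > threshold

# A 2x2 example: one pixel keeps changing, the rest stay idle.
f0 = np.array([[50, 50], [50, 50]])
f1 = np.array([[50, 50], [50, 70]])
f2 = np.array([[50, 50], [50, 90]])
changing = temporal_change_mask([f0, f1, f2], threshold=1.0)
```

Idle regions (the bulk of patient monitoring video) produce zero temporal variance and fall into the background object, while the constantly changing pixel is assigned to the fast-motion foreground.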

    Standard-Compliant Low-Pass Temporal Filter to Reduce the Perceived Flicker Artifact

    Flicker is a common video-compression-related temporal artifact. It occurs when co-located regions of consecutive frames are not encoded in a consistent manner, especially when Intra frames are periodically inserted at low and medium bit rates. In this paper we propose a flicker reduction method which aims to make the luminance changes between pixels in the same area of consecutive frames less noticeable. To this end, a temporal low-pass filter is proposed that smooths these luminance changes on a block-by-block basis. The proposed method has some advantages compared with other state-of-the-art methods. It has been designed to be compliant with conventional video coding standards, i.e., to generate a bitstream that is decodable by any standard decoder implementation. The filter strength is estimated on the fly to limit the PSNR loss and thus the appearance of a noticeable blurring effect. The proposed method has been implemented on the H.264/AVC reference software and thoroughly assessed in comparison with a couple of state-of-the-art methods. The flicker reduction achieved by the proposed method (calculated using an objective measurement) is notably higher than that of the compared methods: 18.78% versus 5.32% and 31.96% versus 8.34%, in exchange for some slight losses in terms of coding efficiency. In terms of subjective quality, the proposed method is perceived more than two times better than the compared methods. This work has been partially supported by the National Grant TEC2011-26807 of the Spanish Ministry of Science and Innovation.
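    The block-by-block temporal low-pass filtering described above can be sketched as a blend between a block and its co-located block in the previous frame. This is a minimal illustration under assumed names; in the paper the filter strength is estimated on the fly to bound the PSNR loss, whereas here it is simply passed in.

```python
import numpy as np

def temporal_lowpass_block(cur_block, prev_block, strength):
    """Blend a block with its co-located block from the previous frame
    to smooth luminance changes between frames. `strength` in [0, 1]
    plays the role of the paper's adaptively estimated filter strength
    (0 = no filtering, 1 = copy the previous block)."""
    return (1.0 - strength) * cur_block + strength * prev_block

cur = np.full((8, 8), 100.0)   # block in the frame about to be encoded
prev = np.full((8, 8), 80.0)   # co-located block in the previous frame
filtered = temporal_lowpass_block(cur, prev, strength=0.5)
```

Because the filtering is applied to the pixels before encoding, the output bitstream remains decodable by any standard decoder, which is what the standard-compliance claim in the abstract amounts to.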

    Enhanced low bitrate H.264 video coding using decoder-side super-resolution and frame interpolation

    Advanced inter-prediction modes have recently been introduced in the literature to improve the video coding performance of both the H.264 and High Efficiency Video Coding standards. Decoder-side motion analysis and motion vector derivation have been proposed to reduce the coding cost of motion information. Here, we introduce enhanced skip and direct modes for H.264 coding using decoder-side super-resolution (SR) and frame interpolation. P- and B-frames are downsampled and H.264 encoded at lower resolution (LR). The reconstructed LR frames are then super-resolved using decoder-side motion estimation. Alternatively for B-frames, bidirectional true motion estimation is performed to synthesize a B-frame from its reference frames. For P-frames, bicubic interpolation of the LR frame is used as an alternative to SR reconstruction. A rate-distortion optimal mode selection algorithm is developed to decide, for each macroblock (MB), which of the two reconstructions to use as the skip/direct mode prediction. Simulations indicate an average of 1.04 dB peak signal-to-noise ratio (PSNR) improvement or 23.0% bitrate reduction at low bitrates when compared with the H.264 standard. The PSNR gains reach as high as 3.00 dB for inter-predicted frames and 3.78 dB when only B-frames are considered. Decoded videos exhibit significantly better visual quality as well. This research was supported by TUBITAK Career Grant 108E201.
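    The per-macroblock rate-distortion optimal mode selection can be sketched with the standard Lagrangian cost J = D + lambda * R: for each MB, the encoder compares the cost of signaling the super-resolved reconstruction against the interpolated one and keeps the cheaper option. The Lagrangian form is standard; the function name and the example inputs below are illustrative assumptions, not values from the paper.

```python
def select_skip_prediction(sse_sr, bits_sr, sse_interp, bits_interp, lam):
    """Choose, per macroblock, between the super-resolved and the
    interpolated reconstruction as the skip/direct-mode prediction by
    comparing Lagrangian rate-distortion costs J = D + lambda * R,
    with distortion D as SSE and rate R in bits."""
    j_sr = sse_sr + lam * bits_sr
    j_interp = sse_interp + lam * bits_interp
    return "super_resolution" if j_sr <= j_interp else "interpolation"

# SR costs a couple more bits here, but its lower distortion wins
# at this lambda, so the MB is predicted from the SR reconstruction.
choice = select_skip_prediction(sse_sr=900, bits_sr=4,
                                sse_interp=1500, bits_interp=2, lam=50)
```

Raising lambda shifts the decision toward the cheaper-to-signal mode, which is how the trade-off between bitrate and distortion is steered at low bitrates.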

    Distributed Video Coding for Resource Critical Applications


    Postprocessing of images coded using block DCT at low bit rates.

    Sun, Deqing. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 86-91). Abstracts in English and Chinese.
    Contents:
    1. Introduction: image compression and postprocessing; a brief review of postprocessing; objective and methodology of the research; thesis organization; a note on publication
    2. Background Study: image models (minimum edge difference (MED) criterion for block boundaries; van Beek's edge model; fields of experts (FoE)); degradation models (quantization constraint set (QCS) and uniform noise; narrow quantization constraint set (NQCS); Gaussian noise; edge width enlargement after quantization); use of these models for postprocessing
    3. Postprocessing using MED and edge models: blocking artifact suppression by coefficient restoration (AC coefficient restoration by MED; general derivation); detailed algorithm (edge identification; region classification; edge reconstruction; image reconstruction); experimental results, including comparison with a wavelet-based method; on the global minimum of the edge difference (the constrained minimization problem; experimental examination; discussion); conclusions
    4. Postprocessing by the MAP criterion using FoE: the MAP criterion; the optimization problem; experimental results (setting algorithm parameters; results); investigation of the quantization noise model; conclusions
    5. Conclusion: contributions (extension of the DCCR algorithm; examination of the MED criterion; use of the FoE prior in postprocessing; investigation of the quantization noise model); future work (degradation model; efficient implementation of the MAP method; postprocessing of compressed video)
    Appendix A. Detailed derivation of coefficient restoration
    Appendix B. Implementation details of the FoE prior: the FoE prior model; energy function and its gradient; conjugate gradient descent method