
    Removal Of Blocking Artifacts From JPEG-Compressed Images Using An Adaptive Filtering Algorithm

    The aim of this research was to develop an algorithm that produces a considerable improvement in the quality of JPEG images by removing blocking and ringing artifacts, irrespective of the level of compression present in the image. We review a number of published related works and then present a computationally efficient algorithm for reducing the blocky and Gibbs-oscillation artifacts commonly present in JPEG-compressed images. The algorithm alpha-blends a smoothed version of the image with the original image; the blending is controlled by a limit factor that accounts for the amount of compression present and for local edge information derived from the application of a Prewitt filter. In addition, the value of the blending coefficient (α) is derived from the local Mean Structural Similarity Index Measure (MSSIM), which is likewise adjusted by a factor that reflects the amount of compression present. We present our results alongside those reported in a variety of other papers whose authors used other post-compression filtering methods.
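
    As a rough illustration of this kind of MSSIM-guided alpha-blending, the sketch below blends a box-filtered image with the original under an edge- and quality-dependent limit. It is a minimal sketch only: the function name, the limit formula, the SSIM window size, and the mapping from JPEG quality to "amount of compression" are assumptions, not the formulas from the paper.

```python
# Illustrative sketch of MSSIM-guided alpha-blending for JPEG deblocking.
# All constants and mappings below are placeholders, not the paper's formulas.
import numpy as np
from scipy.ndimage import prewitt, uniform_filter

def deblock_alpha_blend(img, quality):
    """img: 2-D grayscale array in [0, 1]; quality: JPEG quality factor in [1, 100]."""
    smoothed = uniform_filter(img, size=3)                  # smoothed version to blend in

    # Local edge strength from a Prewitt filter (gradient magnitude).
    edges = np.hypot(prewitt(img, axis=0), prewitt(img, axis=1))
    edges = edges / (edges.max() + 1e-8)                    # normalise to [0, 1]

    # Windowed SSIM between the original and its smoothed version
    # (stand-in for the local MSSIM used to derive alpha).
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x, mu_y = uniform_filter(img, 8), uniform_filter(smoothed, 8)
    var_x = uniform_filter(img * img, 8) - mu_x ** 2
    var_y = uniform_filter(smoothed * smoothed, 8) - mu_y ** 2
    cov = uniform_filter(img * smoothed, 8) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

    # Alpha grows with local similarity and with the amount of compression,
    # but is limited near edges so that real detail is preserved.
    compression = 1.0 - quality / 100.0
    limit = compression * (1.0 - edges)
    alpha = np.minimum(np.clip(ssim * compression, 0.0, 1.0), limit)
    return (1.0 - alpha) * img + alpha * smoothed
```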

    A new approach for restoring block-transform coded images with estimation of correlation matrices


    Optimizing MPEG-4 coding performance by taking post-processing into account

    Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering. Refereed conference paper, 2000-2001.

    Reduction of blocking artifacts using side information

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 95-96). Block-based image and video coding systems are used extensively in practice. In low bit-rate applications, however, they suffer from annoying discontinuities, called blocking artifacts. Prior research shows that incorporating systems that reduce blocking artifacts into codecs is useful because visual quality is improved. Existing methods reduce blocking artifacts by applying various post-processing techniques to the compressed image. Such methods require neither any modification to current encoders nor an increase in the bit-rate. This thesis examines a framework where blocking artifacts are reduced using side information transmitted from the encoder to the decoder. Using side information enables the use of the original image in deblocking, which improves performance. Furthermore, the computational burden at the decoder is reduced. The principal question that arises is whether the gains in performance of this choice can compensate for the increase in the bit-rate due to the transmission of side information. Experiments are carried out to answer this question with the following sample system: the encoder determines block boundaries that exhibit blocking artifacts as well as filters (from a predefined set of filters) that best deblock these block boundaries. Then it transmits side information that conveys the determined block boundaries together with their selected filters to the decoder. The decoder uses the received side information to perform deblocking. The proposed sample system is compared against an ordinary coding system and a post-processing type deblocking system, with the bit-rate of these systems being equal to the overall bit-rate (regular encoding bits + side information bits) of the proposed system. The results of the comparisons indicate that, both for images and video sequences, the proposed system can perform better in terms of both visual quality and PSNR for some range of coding bit-rates. By Fatih Kamisli. S.M.
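
    The encoder-side selection described above can be pictured with a toy sketch like the one below, assuming a grayscale image, a tiny predefined filter set, and per-row side information for vertical block boundaries. The filter kernels, the artifact threshold, and the side-information layout are illustrative assumptions, not the thesis's actual design.

```python
# Toy sketch of encoder-side deblocking-filter selection with side information.
# Filter set, threshold and side-information format are illustrative only.
import numpy as np

FILTERS = [
    np.array([0.0, 1.0, 0.0]),      # index 0: identity (leave boundary untouched)
    np.array([0.25, 0.5, 0.25]),    # index 1: mild smoothing
    np.array([1/3, 1/3, 1/3]),      # index 2: stronger smoothing
]

def select_side_info(original, decoded, block=8, thresh=4.0):
    """Return {(row, boundary_col): filter_index} for vertical block boundaries."""
    side_info = {}
    h, w = decoded.shape
    for col in range(block, w, block):                     # each vertical boundary
        for row in range(h):
            seg = decoded[row, col - 2:col + 2].astype(float)
            if abs(seg[1] - seg[2]) < thresh:              # no visible step: skip
                continue
            ref = original[row, col - 2:col + 2].astype(float)
            # Choose the predefined filter whose output is closest to the original.
            errs = [np.sum((np.convolve(seg, f, mode='same') - ref) ** 2)
                    for f in FILTERS]
            best = int(np.argmin(errs))
            if best != 0:                                  # only signal non-trivial filters
                side_info[(row, col)] = best
    return side_info

def apply_side_info(decoded, side_info):
    """Decoder: apply the signalled filter across each flagged boundary."""
    out = decoded.astype(float).copy()
    for (row, col), idx in side_info.items():
        seg = decoded[row, col - 2:col + 2].astype(float)
        out[row, col - 2:col + 2] = np.convolve(seg, FILTERS[idx], mode='same')
    return out
```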

    Improving MPEG-4 coding performance by jointly optimising compression and blocking effect elimination

    Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering. Accepted manuscript.

    Postprocessing of images coded using block DCT at low bit rates.

    Sun, Deqing. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 86-91). Abstracts in English and Chinese.
    Table of contents:
        Abstract
        摘要
        Contributions
        Acknowledgement
        Abbreviations
        Notations
        Chapter 1  Introduction
            1.1  Image compression and postprocessing
            1.2  A brief review of postprocessing
            1.3  Objective and methodology of the research
            1.4  Thesis organization
            1.5  A note on publication
        Chapter 2  Background Study
            2.1  Image models
                2.1.1  Minimum edge difference (MED) criterion for block boundaries
                2.1.2  van Beek's edge model for an edge
                2.1.3  Fields of experts (FoE) for an image
            2.2  Degradation models
                2.2.1  Quantization constraint set (QCS) and uniform noise
                2.2.2  Narrow quantization constraint set (NQCS)
                2.2.3  Gaussian noise
                2.2.4  Edge width enlargement after quantization
            2.3  Use of these models for postprocessing
                2.3.1  MED and edge models
                2.3.2  The FoE prior model
        Chapter 3  Postprocessing using MED and edge models
            3.1  Blocking artifacts suppression by coefficient restoration
                3.1.1  AC coefficient restoration by MED
                3.1.2  General derivation
            3.2  Detailed algorithm
                3.2.1  Edge identification
                3.2.2  Region classification
                3.2.3  Edge reconstruction
                3.2.4  Image reconstruction
            3.3  Experimental results
                3.3.1  Results of the proposed method
                3.3.2  Comparison with one wavelet-based method
            3.4  On the global minimum of the edge difference
                3.4.1  The constrained minimization problem
                3.4.2  Experimental examination
                3.4.3  Discussions
            3.5  Conclusions
        Chapter 4  Postprocessing by the MAP criterion using FoE
            4.1  The proposed method
                4.1.1  The MAP criterion
                4.1.2  The optimization problem
            4.2  Experimental results
                4.2.1  Setting algorithm parameters
                4.2.2  Results
            4.3  Investigation on the quantization noise model
            4.4  Conclusions
        Chapter 5  Conclusion
            5.1  Contributions
                5.1.1  Extension of the DCCR algorithm
                5.1.2  Examination of the MED criterion
                5.1.3  Use of the FoE prior in postprocessing
                5.1.4  Investigation on the quantization noise model
            5.2  Future work
                5.2.1  Degradation model
                5.2.2  Efficient implementation of the MAP method
                5.2.3  Postprocessing of compressed video
        Appendix A  Detailed derivation of coefficient restoration
        Appendix B  Implementation details of the FoE prior
            B.1  The FoE prior model
            B.2  Energy function and its gradient
            B.3  Conjugate gradient descent method
        Bibliography
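
    One building block named in Chapter 2 above, the quantization constraint set, has a standard formulation: each DCT coefficient of a postprocessed image must stay inside the quantization interval implied by the received level. The sketch below projects an image back onto that set; it assumes an orthonormal 8x8 DCT (a stand-in for the exact JPEG transform scaling) and midpoint quantization intervals, and the function and argument names are placeholders.

```python
# Sketch of projection onto the quantization constraint set (QCS).
# Assumes an orthonormal 8x8 DCT and midpoint quantization intervals.
import numpy as np
from scipy.fft import dctn, idctn

def project_onto_qcs(img, levels, qtable, block=8):
    """img: postprocessed image; levels[r, c] holds the 8x8 integer DCT levels
    received for block (r, c); qtable: the 8x8 quantization table."""
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs = dctn(img[r:r + block, c:c + block], norm='ortho')
            k = levels[r // block, c // block]
            lo, hi = (k - 0.5) * qtable, (k + 0.5) * qtable   # admissible intervals
            out[r:r + block, c:c + block] = idctn(np.clip(coeffs, lo, hi),
                                                  norm='ortho')
    return out
```

    Postprocessing methods of this kind typically alternate such a projection with a smoothing or prior-driven update, for example a MAP step under the FoE prior discussed in Chapter 4.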

    Consistent Image Decoding from Multiple Lossy Versions

    With the recent development of tools for data sharing in social networks and peer-to-peer networks, the same information is often stored in different nodes. Peer-to-peer protocols usually allow one user to collect portions of the same file from different nodes in the network, substantially improving the rate at which data are received by the end user. In some cases, however, the same multimedia document is available in different lossy versions on the network nodes. In such situations, one may be interested in collecting all available versions of the same document and jointly decoding them to obtain a better reconstruction of the original. In this paper we study some methods to jointly decode different versions of the same image. We compare different uses of the method of Projections Onto Convex Sets (POCS) with some convex optimization techniques in order to reconstruct an image for which JPEG and JPEG2000 lossy versions are available.
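
    POCS itself amounts to cyclically projecting a candidate image onto the constraint set derived from each lossy version. The generic sketch below assumes each set is supplied as a projection callable (for example, a JPEG quantization-interval projection like the QCS sketch earlier, and an analogous projection onto the JPEG2000 wavelet-coefficient intervals); the names and iteration count are illustrative.

```python
# Generic POCS loop: cyclically enforce the constraint set of every lossy version.
import numpy as np

def pocs_decode(initial, projections, n_iter=50):
    """initial: any starting image (e.g. one of the decoded versions);
    projections: list of callables, each projecting onto one version's set."""
    x = np.asarray(initial, dtype=float).copy()
    for _ in range(n_iter):
        for project in projections:
            x = project(x)           # e.g. JPEG QCS projection, JPEG2000 projection
    return x
```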

    Entropy coding and post-processing for image and video coding.

    Fong, Yiu Leung. Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. Includes bibliographical references (leaves 83-87). Abstracts in English and Chinese.
    Table of contents:
        Abstract
        Acknowledgement
        Chapter 1  Introduction
        Chapter 2  Background and Motivation
            2.1  Context-Based Arithmetic Coding
            2.2  Video Post-processing
        Chapter 3  Context-Based Arithmetic Coding for JPEG
            3.1  Introduction
                3.1.1  Huffman Coding (introduction, concept, drawbacks)
                3.1.2  Context-Based Arithmetic Coding (introduction, concept)
            3.2  Proposed Method
                3.2.1  Introduction
                3.2.2  Redundancy in Quantized DCT Coefficients (zig-zag scanning position; magnitudes of previously coded coefficients)
                3.2.3  Proposed Scheme (overview; preparation of coding; coding of non-zero coefficient flags and EOB decisions; coding of 'LEVEL'; separate coding of color planes)
            3.3  Experimental Results
                3.3.1  Evaluation Method
                3.3.2  Methods under Evaluation
                3.3.3  Average File Size Reduction
                3.3.4  File Size Reduction on Individual Images
                3.3.5  Performance of Individual Techniques
            3.4  Discussions
        Chapter 4  Video Post-processing for H.264
            4.1  Introduction
            4.2  Proposed Method
            4.3  Experimental Results
                4.3.1  Deblocking on Compressed Frames
                4.3.2  Deblocking on Residue of Compressed Frames
                4.3.3  Performance Investigation
                4.3.4  Investigation Experiment 1
                4.3.5  Investigation Experiment 2
                4.3.6  Investigation Experiment 3
            4.4  Discussions
        Chapter 5  Conclusions
        References
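
    Chapter 3 above builds coding contexts from the zig-zag scan position and the magnitudes of previously coded coefficients. The sketch below shows only that context-modelling layer with a simple adaptive binary model; the band boundaries, the context formula, and the counts-based probability estimate are illustrative assumptions, and the arithmetic-coding engine itself is omitted.

```python
# Illustrative context modelling for coding non-zero-coefficient flags.
# Context formula and probability model are placeholders; no arithmetic coder here.
from collections import defaultdict

def context_id(zigzag_pos, prev_magnitudes):
    """Combine the scan-position band with recent coded-coefficient activity."""
    band = 0 if zigzag_pos < 6 else (1 if zigzag_pos < 21 else 2)   # low/mid/high band
    activity = min(sum(abs(m) for m in prev_magnitudes[-3:]), 4)    # capped local activity
    return band * 5 + activity

class AdaptiveBinaryModel:
    """Per-context counts that would feed an arithmetic coder's probability estimate."""
    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])    # Laplace-smoothed [zeros, ones]

    def p_one(self, ctx):
        zeros, ones = self.counts[ctx]
        return ones / (zeros + ones)

    def update(self, ctx, bit):
        self.counts[ctx][bit] += 1                   # adapt after coding each flag
```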