7 research outputs found

    A learning-by-example method for reducing BDCT compression artifacts in high-contrast images.

    Wang, Guangyu. Thesis (M.Phil.), Chinese University of Hong Kong, 2004 (submitted December 2003). Includes bibliographical references (leaves 70-75). Abstracts in English and Chinese.
    Contents: 1. Introduction (BDCT Compression Artifacts; Previous Artifact Removal Methods; Our Method; Structure of the Thesis). 2. Related Work (Image Compression; A Typical BDCT Compression: Baseline JPEG; Existing Artifact Removal Methods: Post-Filtering, Projection onto Convex Sets, Learning by Examples; Other Related Work). 3. Contamination as Markov Random Field (Markov Random Field; Contamination as MRF). 4. Training Set Preparation (Training Images Selection; Bit Rate). 5. Artifact Vectors (Formation of Artifact Vectors; Luminance Remapping; Dominant Implication). 6. Tree-Structured Vector Quantization (Background: Vector Quantization, Tree-Structured Vector Quantization, K-Means Clustering; TSVQ in Artifact Removal). 7. Synthesis (Color Processing; Artifact Removal; Selective Rejection of Synthesized Values). 8. Experimental Results (Image Quality Assessments: Peak Signal-Noise Ratio, Mean Structural SIMilarity; Performance; How Size of Training Set Affects the Performance; How Bit Rates Affect the Performance; Comparisons). 9. Conclusion. Appendices: A. Color Transformation; B. Image Quality (Image Quality vs. Quantization Table; Image Quality vs. Bit Rate); C. Arti User's Manual. Bibliography.

    Removal Of Blocking Artifacts From JPEG-Compressed Images Using An Adaptive Filtering Algorithm

    The aim of this research was to develop an algorithm that produces a considerable improvement in the quality of JPEG images by removing blocking and ringing artifacts, irrespective of the level of compression present in the image. We review multiple related published works, and then present a computationally efficient algorithm for reducing the blocky and Gibbs-oscillation artifacts commonly present in JPEG-compressed images. The algorithm alpha-blends a smoothed version of the image with the original image; the blending is controlled by a limit factor that accounts for the amount of compression present and for local edge information derived from the application of a Prewitt filter. In addition, the actual value of the blending coefficient (α) is derived from the local Mean Structural Similarity Index Measure (MSSIM), which is likewise adjusted by a factor that reflects the amount of compression present. We also present our results alongside those of a variety of other papers whose authors used other post-compression filtering methods.
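    The adaptive blend described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' exact formulation: the mapping from JPEG quality to the limit factor, the window sizes, and the function names are all assumptions, and the per-pixel alpha here is driven by the Prewitt edge map alone rather than the local MSSIM term the paper also uses.

    ```python
    import numpy as np
    from scipy import ndimage

    def deblock_alpha_blend(img, quality):
        """Hedged sketch of an adaptive alpha-blend deblocking filter.

        Blends a smoothed copy of a JPEG-decoded image with the original,
        attenuating the blend near edges found by a Prewitt filter and
        scaling it by the amount of compression (lower quality leads to
        stronger smoothing)."""
        img = img.astype(np.float64)
        smoothed = ndimage.uniform_filter(img, size=3)

        # Prewitt gradient magnitude marks edges to preserve.
        gx = ndimage.prewitt(img, axis=0)
        gy = ndimage.prewitt(img, axis=1)
        edges = np.hypot(gx, gy)
        edges /= edges.max() + 1e-9  # normalise to [0, 1]

        # Limit factor grows as JPEG quality drops (illustrative mapping,
        # not the paper's).
        limit = np.clip((100 - quality) / 100.0, 0.0, 1.0)

        # Per-pixel alpha: smooth flat regions strongly, keep edges.
        alpha = limit * (1.0 - edges)
        return alpha * smoothed + (1.0 - alpha) * img
    ```

    The design intent is that block boundaries in flat regions (low gradient) receive the most smoothing, while genuine image edges (high gradient) are left almost untouched.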

    Removal Of Blocking Artifacts From JPEG-Compressed Images Using Neural Network

    The goal of this research was to develop a neural network that produces a considerable improvement in the quality of JPEG-compressed images, irrespective of the compression level present. To obtain a computationally efficient algorithm for reducing blocky and Gibbs-oscillation artifacts in JPEG-compressed images, we combined a conventional post-filter with a learned component. In this approach, an alpha-blend filter [7] post-processes the JPEG-compressed image to reduce noise and artifacts without losing image detail; the alpha blending is controlled by a limit factor that accounts for the amount of compression present and for local edge information derived from applying a Prewitt filter to the input image. The output of the modified alpha-blend filter is then further improved by a trained neural network, and the result is compared with various other published works [7][9][11][14][20][23][30][32][33][35][37] whose authors used post-compression filtering methods.
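    The learned refinement stage can be pictured as a small per-pixel regressor applied on top of the alpha-blended output. The sketch below is purely illustrative: the network size, the 3x3 input neighbourhood, and the residual formulation are assumptions, and the weights are random placeholders where the paper's approach would load parameters trained on pairs of compressed and original images.

    ```python
    import numpy as np

    def nn_refine(blended, weights=None, patch=3):
        """Illustrative sketch of a neural post-filter (not the authors'
        trained network): a tiny fully-connected net takes each 3x3
        neighbourhood of the alpha-blended image and predicts a
        per-pixel residual correction."""
        h, w = blended.shape
        pad = patch // 2
        padded = np.pad(blended, pad, mode="edge")

        n_in, n_hidden = patch * patch, 16
        if weights is None:
            # Placeholder weights; a real system would load trained ones.
            rng = np.random.default_rng(0)
            W1 = rng.normal(0, 0.01, (n_in, n_hidden))
            b1 = np.zeros(n_hidden)
            W2 = rng.normal(0, 0.01, (n_hidden, 1))
            b2 = np.zeros(1)
        else:
            W1, b1, W2, b2 = weights

        out = np.empty_like(blended, dtype=np.float64)
        for i in range(h):
            for j in range(w):
                x = padded[i:i + patch, j:j + patch].ravel()
                hidden = np.maximum(W1.T @ x + b1, 0.0)   # ReLU layer
                out[i, j] = blended[i, j] + (W2.T @ hidden + b2)[0]
        return out
    ```

    Predicting a residual rather than the pixel value itself is a common choice for restoration networks, since the identity mapping is then the trivial starting point.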

    Edge-enhancing image smoothing.

    Xu, Yi. Thesis (M.Phil.), Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 62-69). Abstracts in English and Chinese.
    Contents: 1. Introduction (Organization). 2. Background and Motivation (1D Mondrian Smoothing; 2D Formulation). 3. Solver (More Analysis). 4. Edge Extraction (Related Work; Method and Results; Summary). 5. Image Abstraction and Pencil Sketching (Related Work; Method and Results; Summary). 6. Clip-Art Compression Artifact Removal (Related Work; Method and Results; Summary). 7. Layer-Based Contrast Manipulation (Related Work; Method and Results: Edge Adjustment, Detail Magnification, Tone Mapping; Summary). 8. Conclusion and Discussion. Bibliography.

    Learning-based Wavelet-like Transforms For Fully Scalable and Accessible Image Compression

    The goal of this thesis is to improve the existing wavelet transform with the aid of machine learning techniques, so as to enhance the coding efficiency of wavelet-based image compression frameworks such as JPEG 2000. In this thesis, we first propose to augment the conventional base wavelet transform with two additional learned lifting steps: a high-to-low step followed by a low-to-high step. The high-to-low step suppresses aliasing in the low-pass band by using the detail bands at the same resolution, while the low-to-high step aims to further remove redundancy from the detail bands by using the corresponding low-pass band. These two additional steps reduce redundancy (notably aliasing information) among the wavelet subbands, and also improve the visual quality of reconstructed images at reduced resolutions. To train these two networks in an end-to-end fashion, we develop a backward annealing approach to overcome the non-differentiability of the quantization and cost functions during back-propagation. Importantly, the two additional networks share a common architecture, named a proposal-opacity topology, which is inspired and guided by a specific theoretical argument related to geometric flow. This network topology is compact, with limited non-linearities, allowing a fully scalable system: one pair of trained network parameters is applied at all levels of decomposition and at all bit-rates of interest. By employing the additional lifting networks within the JPEG 2000 image coding standard, we can achieve up to 17.4% average BD bit-rate saving over a wide range of bit-rates, while retaining the quality and resolution scalability features of JPEG 2000. Built upon the success of the high-to-low and low-to-high steps, we then study more broadly the extension of neural networks to all lifting steps that correspond to the base wavelet transform.
The purpose of this comprehensive study is to understand the most effective way to develop learned wavelet-like transforms for highly scalable and accessible image compression. Specifically, we examine the impact of the number of learned lifting steps, the number of layers and the number of channels in each learned lifting network, and the kernel support in each layer. To facilitate the study, we develop a generic training methodology that is simultaneously appropriate to all lifting structures considered. Experimental results ultimately suggest that to improve the existing wavelet transform, it is more profitable to augment a larger wavelet transform with more diverse high-to-low and low-to-high steps than to develop deep, fully learned lifting structures.
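    To make the lifting framing concrete, here is one level of the LeGall 5/3 lifting transform (the reversible base wavelet of JPEG 2000), with hooks showing where the thesis's learned high-to-low and low-to-high steps would slot in. The learned steps are identity placeholders here, not the trained networks from the thesis, and periodic boundary handling via `np.roll` is a simplification of JPEG 2000's symmetric extension.

    ```python
    import numpy as np

    def lifting_53(x):
        """One level of 1D LeGall 5/3 lifting: predict then update,
        followed by hooks for additional learned lifting steps."""
        even, odd = x[0::2].astype(float), x[1::2].astype(float)

        # Predict step: detail (high-pass) band.
        d = odd - 0.5 * (even + np.roll(even, -1))
        # Update step: approximation (low-pass) band.
        s = even + 0.25 * (d + np.roll(d, 1))

        # Learned high-to-low step: use the detail band to suppress
        # aliasing in the low-pass band (placeholder no-op here).
        s = s + learned_high_to_low(d)
        # Learned low-to-high step: use the low-pass band to remove
        # remaining redundancy from the detail band (placeholder no-op).
        d = d + learned_low_to_high(s)
        return s, d

    def learned_high_to_low(d):
        # Stands in for a small trained network in the thesis.
        return np.zeros_like(d)

    def learned_low_to_high(s):
        # Stands in for a small trained network in the thesis.
        return np.zeros_like(s)
    ```

    Because each learned correction is added as an extra lifting step, the overall transform stays invertible: the inverse simply subtracts the same network outputs in reverse order, which is what lets the scheme drop into a JPEG 2000 pipeline without sacrificing scalability.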