2,581 research outputs found

    Deep Learning-Based Image Compression and Quality Assessment

    Get PDF
    Waseda University degree record number: 新8427. Waseda University.

    Perceptual quality assessment and processing for visual signals.

    Get PDF
    Visual signals, including images and videos, are affected by a wide variety of distortions during acquisition, compression, storage, processing, transmission, and reproduction, which degrade their perceptual quality. Perceptual quality assessment therefore plays a very important role in today's visual signal processing and communication systems. This thesis investigates quality assessment algorithms for evaluating the perceptual quality of visual signals, as well as their applications to visual signal processing and communications. The work consists of five parts, briefly summarized below.

    The first part focuses on full-reference (FR) image quality assessment. The properties of the human visual system (HVS) are first investigated. Specifically, the visual horizontal effect (HE) and saliency properties over structural distortions are modelled and incorporated into the structural similarity index (SSIM). Experimental results show significantly improved performance in matching subjective ratings. Inspired by the developed FR image metric, a perceptual image compression scheme is developed, in which an adaptive block-based, super-resolution directed down-sampling is proposed. Experimental results demonstrate that the proposed compression scheme produces higher-quality images in terms of both objective and subjective quality, compared with existing methods.
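    The saliency-weighted SSIM of the first part lends itself to a compact illustration. Below is a minimal sketch, not the thesis's exact metric: it pools the per-pixel SSIM map with weights from a precomputed saliency map. The saliency input, window settings, and the omission of the horizontal-effect term are all simplifying assumptions.

        import numpy as np
        from skimage.metrics import structural_similarity

        def saliency_weighted_ssim(ref, dist, saliency):
            """Pool the local SSIM map with saliency weights.

            ref, dist : 2-D float arrays in [0, 1] (grayscale images)
            saliency  : 2-D non-negative array of the same shape, produced
                        by any saliency detector (hypothetical input here)
            """
            # full=True returns (mean score, per-pixel SSIM map)
            _, ssim_map = structural_similarity(ref, dist, data_range=1.0, full=True)
            w = saliency / (saliency.sum() + 1e-12)  # normalize to unit mass
            return float((w * ssim_map).sum())

    With a uniform saliency map this reduces to plain mean SSIM; the weighting only changes how local scores are pooled.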
    The second part concerns FR video quality assessment. An adaptive block-size transform (ABT) based just-noticeable difference (JND) model for visual signals is developed by considering HVS characteristics, e.g., the spatio-temporal contrast sensitivity function (CSF), eye movement, texture masking, spatial coherence, temporal consistency, and the properties of different block-size transforms. It is verified that the developed ABT-based JND depicts the HVS properties more accurately than state-of-the-art JND models. The ABT-based JND is then utilized to develop a simple perceptual quality metric for visual signals. Validation on image and video subjective quality databases proves its effectiveness. The developed metric is consequently employed for perceptual video coding, which can deliver video sequences of higher perceptual quality at the same bit-rates.

    The third part discusses reduced-reference (RR) image quality assessment, developed by statistically modelling the coefficient distribution in the reorganized discrete cosine transform (RDCT) domain. The proposed RR metric exploits the identical statistical nature of adjacent DCT coefficients, the mutual information (MI) between adjacent RDCT coefficients, and the distribution of image energy among different frequency components. Experimental results demonstrate that the proposed metric outperforms representative RR image quality metrics, and even the FR metric peak signal-to-noise ratio (PSNR). Furthermore, the extracted RR features can easily be encoded and embedded into the distorted images for quality monitoring during image communications (a simplified sketch of this sender/receiver pattern follows the abstract).

    The fourth part investigates RR video quality assessment. RR features are extracted to capture the spatial information loss and the temporal statistical characteristics of the inter-frame histogram. Evaluations on video subjective quality databases demonstrate that the proposed method outperforms representative RR video quality metrics, and even FR metrics such as PSNR and SSIM, in matching subjective ratings. Only a small number of RR features is required to represent the original video sequence: each frame requires one parameter to depict the spatial characteristics and three for the temporal characteristics. Considering the computational complexity and the bit-rates for extracting and representing the RR features, the proposed metric can be used for quality monitoring during video transmission, where the RR features for perceptual quality analysis can be embedded into the videos or transmitted through an ancillary data channel.

    The aforementioned perceptual quality metrics focus on traditional distortions, such as JPEG image compression noise and H.264 video compression noise. The last part investigates the distortions introduced during image and video retargeting. With the development of consumer electronics, visual signals increasingly have to be communicated between display devices of different resolutions. A retargeting algorithm adapts a source image of one resolution for display on a device of a different resolution, which may introduce distortions in the process. We investigate subjective responses to the perceptual quality of retargeted images and discuss the results from three perspectives: retargeting scales, retargeting methods, and source image content attributes. An image retargeting subjective quality database is built through a large-scale subjective study on a collection of retargeted images. Based on this database, several representative quality metrics for retargeted images are evaluated and discussed.
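    The reduced-reference scheme of the third part follows a general sender/receiver pattern. The sketch below is a simplified stand-in, not the thesis's RDCT features: the sender fits a generalized Gaussian density (GGD) to one subband of DCT coefficients and transmits only its two parameters; the receiver scores the distortion as the KL divergence between the distorted coefficients' histogram and the reference fit. The GGD model, bin count, and KL estimator are assumptions made for illustration.

        import numpy as np
        from scipy.stats import gennorm

        def rr_feature(coeffs):
            """Sender side: fit a zero-mean generalized Gaussian to one
            subband; only (beta, scale) is transmitted as the RR feature."""
            beta, _, scale = gennorm.fit(coeffs, floc=0.0)
            return beta, scale

        def rr_distortion(coeffs_dist, beta, scale, bins=64):
            """Receiver side: KL divergence between the distorted subband's
            histogram and the density reconstructed from (beta, scale)."""
            hist, edges = np.histogram(coeffs_dist, bins=bins, density=True)
            centers = 0.5 * (edges[:-1] + edges[1:])
            p = hist + 1e-12                                     # empirical density
            q = gennorm.pdf(centers, beta, scale=scale) + 1e-12  # reference density
            return float(np.sum(p * np.log(p / q)) * (edges[1] - edges[0]))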
    Ma, Lin. "December 2012." Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. Includes bibliographical references (leaves 185-197). Abstract also in Chinese.

    Contents (page numbers omitted):
    Front matter: Dedication; Acknowledgments; Abstract; Publications; Nomenclature; Contents; List of Figures; List of Tables
    Chapter 1: Introduction
      1.1 Motivation and Objectives
      1.2 Subjective Perceptual Quality Assessment
      1.3 Objective Perceptual Quality Assessment (Visual Modelling Approach; Engineering Modelling Approach; Perceptual Subjective Quality Databases; Performance Evaluation)
      1.4 Thesis Contributions
      1.5 Organization of the Thesis
    Part I: Full Reference Quality Assessment
    Chapter 2: Full Reference Image Quality Assessment
      2.1 Visual Horizontal Effect for Image Quality Assessment (Introduction; Proposed Image Quality Assessment Framework; Experimental Results; Conclusion)
      2.2 Image Compression via Adaptive Block-Based Super-Resolution Directed Down-Sampling (Introduction; The Proposed Image Compression Framework; Experimental Results; Conclusion)
    Chapter 3: Full Reference Video Quality Assessment
      3.1 Adaptive Block-size Transform based Just-Noticeable Difference Model for Visual Signals (Introduction; JND Model based on Transforms of Different Block Sizes; Selection Strategy Between Transforms of Different Block Sizes; JND Model Evaluation; Conclusion)
      3.2 Perceptual Quality Assessment (Experimental Results; Conclusion)
      3.3 Motion Trajectory Based Visual Saliency for Video Quality Assessment (Motion Trajectory based Visual Saliency for VQA; New Quaternion Representation (QR) for Each Frame; Saliency Map Construction by QR; Incorporating Visual Saliency with VQAs; Experimental Results; Conclusion)
      3.4 Perceptual Video Coding (Experimental Results; Conclusion)
    Part II: Reduced Reference Quality Assessment
    Chapter 4: Reduced Reference Image Quality Assessment
      4.1 Introduction
      4.2 Reorganization Strategy of DCT Coefficients
      4.3 Relationship Analysis of Intra and Inter RDCT Subbands
      4.4 Reduced Reference Feature Extraction in Sender Side (Intra RDCT Subband Modeling; Inter RDCT Subband Modeling; Image Frequency Feature)
      4.5 Perceptual Quality Analysis in the Receiver Side (Intra RDCT Feature Difference Analysis; Inter RDCT Feature Difference Analysis; Image Frequency Feature Difference Analysis)
      4.6 Experimental Results (Efficiency of the DCT Reorganization Strategy; Performance of the Proposed RR IQA; Performance of the Proposed RR IQA over Each Individual Distortion Type; Statistical Significance; Performance Analysis of Each Component)
      4.7 Conclusion
    Chapter 5: Reduced Reference Video Quality Assessment
      5.1 Introduction
      5.2 Proposed Reduced Reference Video Quality Metric (Reduced Reference Feature Extraction from Spatial Perspective; Reduced Reference Feature Extraction from Temporal Perspective; Visual Quality Analysis in Receiver Side)
      5.3 Experimental Results (Consistency Test of the Proposed RR VQA over Compressed Video Sequences; Consistency Test of the Proposed RR VQA over Video Sequences with Simulated Distortions; Performance Evaluation of the Proposed RR VQA on Compressed Video Sequences; Performance Evaluation of the Proposed RR VQA on Video Sequences Containing Transmission Distortions; Performance Analysis of Each Component)
      5.4 Conclusion
    Part III: Retargeted Visual Signal Quality Assessment
    Chapter 6: Image Retargeting Perceptual Quality Assessment
      6.1 Introduction
      6.2 Preparation of Database Building (Source Image; Retargeting Methods; Subjective Testing)
      6.3 Data Processing and Analysis for the Database (Processing of Subjective Ratings; Analysis and Discussion of the Subjective Ratings)
      6.4 Objective Quality Metric for Retargeted Images (Quality Metric Performances on the Constructed Image Retargeting Database; Subjective Analysis of the Shape Distortion and Content Information Loss; Discussion)
      6.5 Conclusion
    Chapter 7: Conclusions (Conclusion; Future Work)
    Appendix A: Attributes of the Source Image
    Appendix B: Retargeted Image Name and the Corresponding Number
    Appendix C: Source Image Name and the Corresponding Number
    Bibliography

    Video modeling via implicit motion representations

    Get PDF
    Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on such an analytical representation, we can develop algorithms for accomplishing particular video-related tasks; video modeling thus provides a foundation that bridges video data and related tasks. Although many video models have been proposed in past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.

    Most existing approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of these representations and the suboptimality of motion estimation techniques can degrade such approaches, especially for complex motion or non-ideal observed video data. In this thesis, we investigate video modeling without explicit motion representation: motion information is implicitly embedded into the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors.

    First, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window under the LMMSE criterion (a sketch of this regression step follows the abstract). By incorporating spatio-temporal resampling and a Bayesian fusion scheme, we enhance the modeling capability of STALL on more general videos. Under the STALL framework, video processing algorithms for a variety of applications can be developed by adjusting the model parameters (i.e., the size and topology of the model support and the training window). We apply STALL to three video processing problems. Simulation results show that motion information can be efficiently exploited by our implicit motion representation, and that resampling and fusion do enhance the modeling capability of STALL.

    Second, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. We first extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We then enforce a sparsity constraint on a higher-dimensional data array generated by packing the patches of the similar-patch set, and solve the inference problem by iteratively updating the kNN array and the desired signal. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results on video error concealment, denoising, and deartifacting demonstrate its modeling capability.

    Finally, we summarize the two proposed video modeling approaches and point out perspectives for implicit motion representations in applications ranging from low-level to high-level problems.
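    As referenced above, here is a minimal sketch of the regression step at the heart of a STALL-style model, under simplifying assumptions: a single previous frame as support, square windows, and plain least squares standing in for the LMMSE solution. The window sizes r and k are illustrative, not the thesis's settings.

        import numpy as np

        def stall_predict(prev, cur, r=1, k=3):
            """Predict each pixel of `cur` from a (2r+1)^2 support in `prev`,
            with coefficients fit by least squares inside a local (2k+1)^2
            training window. Motion is never estimated explicitly; it is
            absorbed by the learned coefficients. Illustrative (slow) loop.
            """
            H, W = cur.shape
            pad = r + k
            P = np.pad(prev.astype(float), pad, mode="edge")
            C = np.pad(cur.astype(float), k, mode="edge")
            out = np.empty((H, W))
            for i in range(H):
                for j in range(W):
                    X, y = [], []
                    for di in range(-k, k + 1):          # training samples
                        for dj in range(-k, k + 1):
                            a, b = i + di, j + dj
                            patch = P[a + pad - r:a + pad + r + 1,
                                      b + pad - r:b + pad + r + 1]
                            X.append(patch.ravel())
                            y.append(C[a + k, b + k])
                    coef, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y),
                                               rcond=None)
                    target = P[i + pad - r:i + pad + r + 1,
                               j + pad - r:j + pad + r + 1]
                    out[i, j] = coef @ target.ravel()    # apply local predictor
            return out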

    Super-resolving Compressed Images via Parallel and Series Integration of Artifact Reduction and Resolution Enhancement

    Full text link
    In this paper, we propose a novel compressed image super-resolution (CISR) framework based on parallel and series integration of artifact removal and resolution enhancement. Based on maximum a posteriori inference for estimating a clean low-resolution (LR) input image and a clean high-resolution (HR) output image from down-sampled and compressed observations, we design a CISR architecture consisting of two deep neural network modules: the artifact reduction module (ARM) and the resolution enhancement module (REM). ARM and REM work in parallel, with both taking the compressed LR image as input, while they also work in series, with REM taking the output of ARM as one of its inputs and ARM taking the output of REM as its other input. A unique property of our CISR system is that a single trained model is able to super-resolve LR images compressed by different methods to various qualities. This is achieved by exploiting the capacity of deep neural networks for handling image degradations, together with the parallel and series connections between ARM and REM, which reduce the dependency on specific degradations. ARM and REM are trained simultaneously by the deep unfolding technique. Experiments are conducted on a mixture of JPEG and WebP compressed images without a priori knowledge of the compression type and compression factor. Visual and quantitative comparisons demonstrate the superiority of our method over state-of-the-art super-resolution methods. Code link: https://github.com/luohongming/CISR_PS
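    The ARM/REM wiring described above can be made concrete with a schematic sketch. This is not the authors' network (see the linked repository for that); it only illustrates the parallel inputs, the series exchange of outputs, and the unfolding loop. The toy two-layer convolutional blocks and shared weights across iterations are simplifying assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def block(cin, cout):
            # toy two-layer stand-in for each module's network
            return nn.Sequential(nn.Conv2d(cin, 32, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(32, cout, 3, padding=1))

        class CISRSketch(nn.Module):
            def __init__(self, steps=3, scale=2):
                super().__init__()
                self.steps, self.scale = steps, scale
                self.arm = block(2, 1)  # sees LR input + downscaled REM output
                self.rem = block(2, 1)  # sees upscaled LR input + ARM output

            def forward(self, lr):
                def up(x):
                    return F.interpolate(x, scale_factor=self.scale,
                                         mode="bicubic", align_corners=False)

                def down(x):
                    return F.interpolate(x, scale_factor=1.0 / self.scale,
                                         mode="bicubic", align_corners=False)

                hr = up(lr)                      # initial HR estimate
                for _ in range(self.steps):      # deep-unfolding iterations
                    # series: REM's latest output feeds ARM, and vice versa;
                    # parallel: both always see the compressed LR input
                    clean = self.arm(torch.cat([lr, down(hr)], dim=1))
                    hr = self.rem(torch.cat([up(lr), up(clean)], dim=1))
                return hr

    For example, CISRSketch()(torch.rand(1, 1, 32, 32)) yields a (1, 1, 64, 64) tensor.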

    Revealing More Details: Image Super-Resolution for Real-World Applications

    Get PDF

    Perceptual modelling for 2D and 3D

    Get PDF
    Deliverable D1.1 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D1.1 of the project.

    Defect-aware Super-resolution Thermography by Adversarial Learning

    Get PDF
    Infrared thermography is a valuable non-destructive tool for the inspection of materials. It measures the surface temperature evolution, from which hidden defects may be detected. Yet thermal cameras typically have a low native spatial resolution, resulting in blurry, low-quality thermal image sequences and videos. In this study, a novel adversarial deep learning framework, called Dual-IRT-GAN, is proposed for performing super-resolution tasks. Dual-IRT-GAN aims both to improve local texture details and to highlight defective regions. Technically, the proposed model consists of two modules, SEGnet and SRnet, which carry out defect detection and super-resolution, respectively. By leveraging the defect information from SEGnet, SRnet is capable of generating plausible high-resolution thermal images with an enhanced focus on defect regions. The generated high-resolution images are then passed to the discriminator for adversarial training under the GAN framework. The proposed Dual-IRT-GAN model, trained exclusively on a virtual dataset, is demonstrated on experimental thermographic data obtained from fiber-reinforced polymers with a variety of defect types, sizes, and depths. The results show its high performance in maintaining background color consistency, removing undesired noise, and highlighting defect zones with finer texture detail at high resolution.
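    The SEGnet-to-SRnet coupling can be sketched schematically. The blocks below are toy stand-ins, not the paper's architecture: SEGnet predicts a per-pixel defect probability map, and SRnet consumes the LR thermal image concatenated with that map, upsampling via pixel shuffle. The discriminator that scores the output during adversarial training is omitted; all layer sizes are assumptions.

        import torch
        import torch.nn as nn

        class DualIRTGANSketch(nn.Module):
            """Toy SEGnet -> SRnet coupling (generator side only)."""
            def __init__(self, scale=4):
                super().__init__()
                self.segnet = nn.Sequential(      # defect-segmentation branch
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
                self.srnet = nn.Sequential(       # super-resolution branch
                    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, scale * scale, 3, padding=1),
                    nn.PixelShuffle(scale))       # learned upsampling by `scale`

            def forward(self, lr_thermal):
                mask = self.segnet(lr_thermal)    # per-pixel defect probability
                sr = self.srnet(torch.cat([lr_thermal, mask], dim=1))
                return sr, mask                   # sr would feed a discriminator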