100 research outputs found

    Scalable Remote Rendering using Synthesized Image Quality Assessment

    Depth-image-based rendering (DIBR) is widely used to support 3D interactive graphics on low-end mobile devices. Although it reduces the rendering cost on a mobile device, it essentially turns that cost into depth-image transmission cost, or bandwidth consumption, introducing a performance bottleneck into a remote rendering system. To address this problem, we design a scalable remote rendering framework based on synthesized image quality assessment. Specifically, we design an efficient synthesized image quality metric based on Just Noticeable Distortion (JND) that properly measures human-perceived geometric distortions in synthesized images. Based on this metric, we predict quality-aware reference viewpoints, with viewpoint intervals optimized by the JND-based metric. An adaptive transmission scheme is also developed to control depth-image transmission according to perceived quality and network bandwidth availability. Experimental results show that our approach effectively reduces transmission frequency and network bandwidth consumption while maintaining perceived quality on mobile devices. A prototype system is implemented to demonstrate the scalability of the proposed framework to multiple clients.
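    A minimal sketch of the kind of decision rule such an adaptive transmission scheme implies, assuming a hypothetical jnd_quality() estimator and invented thresholds (the abstract does not give these details); it is not the authors' implementation:

```python
# Illustrative sketch: skip depth-image transmission while the JND-based quality
# estimate of the warped (synthesized) view stays above a perceptual floor, and
# relax the floor slightly when the network budget is tight.
from dataclasses import dataclass

@dataclass
class ReferenceView:
    viewpoint: tuple          # (x, y, z, yaw, pitch) of the last transmitted reference
    depth_image_bytes: int    # size of the color+depth payload

def jnd_quality(current_viewpoint, reference_view) -> float:
    """Placeholder for a JND-based synthesized-image quality metric.
    Here quality simply decays with viewpoint distance; the real metric
    measures perceived geometric distortion in the synthesized image."""
    dx = sum((a - b) ** 2 for a, b in zip(current_viewpoint, reference_view.viewpoint)) ** 0.5
    return max(0.0, 1.0 - 0.08 * dx)

def should_request_new_reference(current_viewpoint, reference_view,
                                 bandwidth_bps: float,
                                 quality_floor: float = 0.85) -> bool:
    """Request a fresh depth image only when predicted perceptual quality
    drops below the floor; tolerate slightly lower quality when sending a
    new reference would take long on the current link."""
    q = jnd_quality(current_viewpoint, reference_view)
    tx_time = 8 * reference_view.depth_image_bytes / max(bandwidth_bps, 1.0)
    effective_floor = quality_floor - (0.05 if tx_time > 0.1 else 0.0)
    return q < effective_floor

if __name__ == "__main__":
    ref = ReferenceView(viewpoint=(0, 0, 0, 0, 0), depth_image_bytes=300_000)
    for step in range(6):
        view = (step * 0.8, 0, 0, 0, 0)
        need_tx = should_request_new_reference(view, ref, bandwidth_bps=2e6)
        print(step, round(jnd_quality(view, ref), 3), need_tx)
        if need_tx:
            ref = ReferenceView(viewpoint=view, depth_image_bytes=300_000)
```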

    State of the art in 2D content representation and compression

    Deliverable D1.3 of the ANR PERSEE project. This report was produced as part of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D3.1 of the project.

    Perceptual modelling for 2D and 3D

    Deliverable D1.1 of the ANR PERSEE project. This report was produced as part of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D1.1 of the project.

    Saliency-Enabled Coding Unit Partitioning and Quantization Control for Versatile Video Coding

    The latest video coding standard, Versatile Video Coding (VVC), greatly improves coding efficiency over its predecessor, High Efficiency Video Coding (HEVC), but at the expense of sharply increased complexity. In the context of perceptual video coding (PVC), visual saliency models that exploit characteristics of the human visual system to improve coding efficiency have become a reliable tool, thanks to advances in computing performance and vision algorithms. In this paper, a novel VVC-compliant PVC optimization scheme is proposed, consisting of a fast coding unit (CU) partition algorithm and a quantization control algorithm. First, based on the visual saliency model, we propose a fast CU partition scheme, including re-determination of the CU partition depth by computing the Scharr operator and variance, as well as the execution decision for intra sub-partitions (ISP), to reduce coding complexity. Second, a quantization control algorithm is proposed that adjusts the quantization parameter based on a multi-level classification of saliency values at the CU level to reduce the bitrate. Compared with the reference model, experimental results indicate that the proposed method reduces computational complexity by about 47.19% and achieves an average bitrate saving of 3.68%, with reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.
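    As a rough illustration of the quantization control idea, the sketch below maps a CU's mean saliency to a QP offset through a multi-level classification; the level boundaries and offsets are invented for illustration and are not the paper's values:

```python
# Illustrative sketch (not the paper's algorithm): derive a CU-level QP from a
# multi-level classification of the CU's mean saliency. Low-saliency CUs are
# quantized more coarsely (positive offset), salient CUs more finely.
import numpy as np

def cu_qp(saliency_map: np.ndarray, base_qp: int,
          levels=(0.2, 0.4, 0.6, 0.8),
          offsets=(+4, +2, 0, -1, -2)) -> int:
    """saliency_map: per-pixel saliency in [0, 1] for one CU. VVC clips QP to [0, 63]."""
    s = float(saliency_map.mean())
    level = int(np.searchsorted(levels, s))      # classify into len(levels)+1 bins
    return int(np.clip(base_qp + offsets[level], 0, 63))

# Example: a 32x32 CU with mostly low saliency gets a coarser QP.
cu_saliency = np.full((32, 32), 0.15)
print(cu_qp(cu_saliency, base_qp=32))            # -> 36 with these illustrative settings
```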

    Video Quality Metrics


    Perceptual quality assessment and processing for visual signals.

    Visual signals, including images, videos, etc., are affected by a wide variety of distortions during acquisition, compression, storage, processing, transmission, and reproduction, which result in perceptual quality degradation. As a result, perceptual quality assessment plays a very important role in today's visual signal processing and communication systems. In this thesis, quality assessment algorithms for evaluating visual signal perceptual quality, as well as their applications in visual signal processing and communications, are investigated. The work consists of five parts, briefly summarized below.

    The first part focuses on full-reference (FR) image quality assessment. The properties of the human visual system (HVS) are first investigated. Specifically, the visual horizontal effect (HE) and saliency properties over structural distortions are modelled and incorporated into the structural similarity index (SSIM). Experimental results show significantly improved performance in matching subjective ratings. Inspired by the developed FR image metric, a perceptual image compression scheme is developed, in which adaptive block-based super-resolution directed down-sampling is proposed. Experimental results demonstrate that the proposed image compression scheme produces higher-quality images in terms of both objective and subjective quality, compared with existing methods.

    The second part concerns FR video quality assessment. The adaptive block-size transform (ABT) based just-noticeable difference (JND) for visual signals is investigated by considering HVS characteristics, e.g., the spatio-temporal contrast sensitivity function (CSF), eye movement, texture masking, spatial coherence, temporal consistency, and the properties of different block-size transforms. It is verified that the developed ABT-based JND depicts the HVS properties more accurately than state-of-the-art JND models. The ABT-based JND is thereby utilized to develop a simple perceptual quality metric for visual signals. Validations on image and video subjective quality databases prove its effectiveness. As a result, the developed perceptual quality metric is employed for perceptual video coding, which can deliver video sequences of higher perceptual quality at the same bit-rates.

    The third part discusses reduced-reference (RR) image quality assessment, which is developed by statistically modelling the coefficient distribution in the reorganized discrete cosine transform (RDCT) domain. The proposed RR metric exploits the identical statistical nature of adjacent DCT coefficients, the mutual information (MI) relationship between adjacent RDCT coefficients, and the image energy distribution among different frequency components. Experimental results demonstrate that the proposed metric outperforms representative RR image quality metrics, and even the FR quality metric peak signal-to-noise ratio (PSNR). Furthermore, the extracted RR features can be easily encoded and embedded into the distorted images for quality monitoring during image communications.

    The fourth part investigates RR video quality assessment. The RR features are extracted to exploit the spatial information loss and the temporal statistical characteristics of the inter-frame histogram. Evaluations on video subjective quality databases demonstrate that the proposed method outperforms representative RR video quality metrics, and even FR metrics such as PSNR and SSIM, in matching subjective ratings. Furthermore, only a small number of RR features is required to represent the original video sequence (each frame requires only 1 and 3 parameters to depict the spatial and temporal characteristics, respectively). Considering the computational complexity and the bit-rates for extracting and representing the RR features, the proposed RR quality metric can be utilized for quality monitoring during video transmissions, where the RR features for perceptual quality analysis can be easily embedded into the videos or transmitted through an ancillary data channel.

    The aforementioned perceptual quality metrics focus on traditional distortions, such as JPEG image compression noise, H.264 video compression noise, and so on. In the last part, we investigate the distortions introduced during the image and video retargeting process. Nowadays, with the development of consumer electronics, more and more visual signals have to be communicated between display devices of different resolutions. A retargeting algorithm is employed to adapt a source image of one resolution for display on a device of a different resolution, which may introduce distortions. We investigate the subjective responses on the perceptual quality of retargeted images, and discuss the subjective results from three perspectives, i.e., retargeting scales, retargeting methods, and source image content attributes. An image retargeting subjective quality database is built by performing a large-scale subjective study of image retargeting quality on a collection of retargeted images.
    Based on the built database, several representative quality metrics for retargeted images are evaluated and discussed.

    Ma, Lin. "December 2012." Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. Includes bibliographical references (leaves 185-197). Abstract also in Chinese.

    Contents:
    Front matter: Dedication; Acknowledgments; Abstract; Publications; Nomenclature; Contents; List of Figures; List of Tables
    Chapter 1  Introduction
        1.1  Motivation and Objectives
        1.2  Subjective Perceptual Quality Assessment
        1.3  Objective Perceptual Quality Assessment
            1.3.1  Visual Modelling Approach
            1.3.2  Engineering Modelling Approach
            1.3.3  Perceptual Subjective Quality Databases
            1.3.4  Performance Evaluation
        1.4  Thesis Contributions
        1.5  Organization of the Thesis
    Part I  Full Reference Quality Assessment
    Chapter 2  Full Reference Image Quality Assessment
        2.1  Visual Horizontal Effect for Image Quality Assessment
            2.1.1  Introduction
            2.1.2  Proposed Image Quality Assessment Framework
            2.1.3  Experimental Results
            2.1.4  Conclusion
        2.2  Image Compression via Adaptive Block-Based Super-Resolution Directed Down-Sampling
            2.2.1  Introduction
            2.2.2  The Proposed Image Compression Framework
            2.2.3  Experimental Results
            2.2.4  Conclusion
    Chapter 3  Full Reference Video Quality Assessment
        3.1  Adaptive Block-size Transform based Just-Noticeable Difference Model for Visual Signals
            3.1.1  Introduction
            3.1.2  JND Model based on Transforms of Different Block Sizes
            3.1.3  Selection Strategy Between Transforms of Different Block Sizes
            3.1.4  JND Model Evaluation
            3.1.5  Conclusion
        3.2  Perceptual Quality Assessment
            3.2.1  Experimental Results
            3.2.2  Conclusion
        3.3  Motion Trajectory Based Visual Saliency for Video Quality Assessment
            3.3.1  Motion Trajectory based Visual Saliency for VQA
            3.3.2  New Quaternion Representation (QR) for Each Frame
            3.3.3  Saliency Map Construction by QR
            3.3.4  Incorporating Visual Saliency with VQAs
            3.3.5  Experimental Results
            3.3.6  Conclusion
        3.4  Perceptual Video Coding
            3.4.1  Experimental Results
            3.4.2  Conclusion
    Part II  Reduced Reference Quality Assessment
    Chapter 4  Reduced Reference Image Quality Assessment
        4.1  Introduction
        4.2  Reorganization Strategy of DCT Coefficients
        4.3  Relationship Analysis of Intra and Inter RDCT Subbands
        4.4  Reduced Reference Feature Extraction in Sender Side
            4.4.1  Intra RDCT Subband Modeling
            4.4.2  Inter RDCT Subband Modeling
            4.4.3  Image Frequency Feature
        4.5  Perceptual Quality Analysis in the Receiver Side
            4.5.1  Intra RDCT Feature Difference Analysis
            4.5.2  Inter RDCT Feature Difference Analysis
            4.5.3  Image Frequency Feature Difference Analysis
        4.6  Experimental Results
            4.6.1  Efficiency of the DCT Reorganization Strategy
            4.6.2  Performance of the Proposed RR IQA
            4.6.3  Performance of the Proposed RR IQA over Each Individual Distortion Type
            4.6.4  Statistical Significance
            4.6.5  Performance Analysis of Each Component
        4.7  Conclusion
    Chapter 5  Reduced Reference Video Quality Assessment
        5.1  Introduction
        5.2  Proposed Reduced Reference Video Quality Metric
            5.2.1  Reduced Reference Feature Extraction from Spatial Perspective
            5.2.2  Reduced Reference Feature Extraction from Temporal Perspective
            5.2.3  Visual Quality Analysis in Receiver Side
        5.3  Experimental Results
            5.3.1  Consistency Test of the Proposed RR VQA over Compressed Video Sequences
            5.3.2  Consistency Test of the Proposed RR VQA over Video Sequences with Simulated Distortions
            5.3.3  Performance Evaluation of the Proposed RR VQA on Compressed Video Sequences
            5.3.4  Performance Evaluation of the Proposed RR VQA on Video Sequences Containing Transmission Distortions
            5.3.5  Performance Analysis of Each Component
        5.4  Conclusion
    Part III  Retargeted Visual Signal Quality Assessment
    Chapter 6  Image Retargeting Perceptual Quality Assessment
        6.1  Introduction
        6.2  Preparation of Database Building
            6.2.1  Source Image
            6.2.2  Retargeting Methods
            6.2.3  Subjective Testing
        6.3  Data Processing and Analysis for the Database
            6.3.1  Processing of Subjective Ratings
            6.3.2  Analysis and Discussion of the Subjective Ratings
        6.4  Objective Quality Metric for Retargeted Images
            6.4.1  Quality Metric Performances on the Constructed Image Retargeting Database
            6.4.2  Subjective Analysis of the Shape Distortion and Content Information Loss
            6.4.3  Discussion
        6.5  Conclusion
    Chapter 7  Conclusions
        7.1  Conclusion
        7.2  Future Work
    Chapter A  Attributes of the Source Image
    Chapter B  Retargeted Image Name and the Corresponding Number
    Chapter C  Source Image Name and the Corresponding Number
    Bibliography
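    The abstract above mentions a reduced-reference video metric that spends only one spatial and three temporal parameters per frame (Chapter 5 in the contents). The sketch below illustrates that feature budget under my own assumptions about the concrete features (a gradient-spread proxy for spatial information and inter-frame histogram statistics); it is not the thesis's exact formulation:

```python
# Illustrative reduced-reference (RR) feature sketch: 1 spatial + 3 temporal
# numbers per frame, recomputed at the receiver and compared against the
# sender-side values as a crude quality proxy.
import numpy as np

def spatial_feature(frame: np.ndarray) -> float:
    """Std of gradient magnitudes over a grayscale frame (H, W), values in [0, 255]."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.sqrt(gx ** 2 + gy ** 2).std())

def temporal_features(prev: np.ndarray, cur: np.ndarray, bins: int = 32):
    """Summarize how the intensity histogram changes between consecutive frames."""
    h1, _ = np.histogram(prev, bins=bins, range=(0, 256), density=True)
    h2, _ = np.histogram(cur, bins=bins, range=(0, 256), density=True)
    d = h2 - h1
    return float(np.abs(d).sum()), float(d.mean()), float(d.std())

def rr_features(video: np.ndarray) -> np.ndarray:
    """video: (T, H, W) grayscale. Returns a (T-1, 4) array of per-frame RR features."""
    feats = []
    for t in range(1, video.shape[0]):
        feats.append((spatial_feature(video[t]),) + temporal_features(video[t - 1], video[t]))
    return np.array(feats)

def rr_distance(ref_feats: np.ndarray, dist_feats: np.ndarray) -> float:
    """Receiver side: distance between reference and distorted feature tracks
    (smaller means the distorted video stays closer to the reference)."""
    return float(np.abs(ref_feats - dist_feats).mean())
```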

    Image Compression and Quality Assessment Based on Deep Learning (深層学習に基づく画像圧縮と品質評価)

    Waseda University diploma number: Shin 8427. Waseda University.