55 research outputs found

    Perceptual quality assessment and processing for visual signals.

    Visual signals, including images, videos, etc., are affected by a wide variety of distortions during acquisition, compression, storage, processing, transmission, and reproduction processes, which result in perceptual quality degradation. As a result, perceptual quality assessment plays a very important role in today's visual signal processing and communication systems.
In this thesis, quality assessment algorithms for evaluating visual signal perceptual quality, as well as their applications to visual signal processing and communications, are investigated. The work consists of five parts, as briefly summarized below.

The first part focuses on full-reference (FR) image quality assessment. The properties of the human visual system (HVS) are first investigated. Specifically, the visual horizontal effect (HE) and saliency properties over structural distortions are modelled and incorporated into the structural similarity index (SSIM). Experimental results show significantly improved performance in matching subjective ratings. Inspired by the developed FR image metric, a perceptual image compression scheme is developed, in which adaptive block-based super-resolution directed down-sampling is proposed. Experimental results demonstrate that the proposed image compression scheme produces higher-quality images in terms of both objective and subjective quality, compared with existing methods.

The second part concerns FR video quality assessment. The adaptive block-size transform (ABT) based just-noticeable difference (JND) for visual signals is investigated by considering HVS characteristics, e.g., the spatio-temporal contrast sensitivity function (CSF), eye movement, texture masking, spatial coherence, temporal consistency, and the properties of different block-size transforms. It is verified that the developed ABT-based JND depicts the HVS properties more accurately than state-of-the-art JND models. The ABT-based JND is thereby utilized to develop a simple perceptual quality metric for visual signals. Validations on image and video subjective quality databases prove its effectiveness. As a result, the developed perceptual quality metric is employed for perceptual video coding, which can deliver video sequences of higher perceptual quality at the same bit-rates.

The third part discusses reduced-reference (RR) image quality assessment, which is developed by statistically modelling the coefficient distribution in the reorganized discrete cosine transform (RDCT) domain. The proposed RR metric exploits the identical statistical nature of adjacent DCT coefficients, the mutual information (MI) relationship between adjacent RDCT coefficients, and the image energy distribution among different frequency components. Experimental results demonstrate that the proposed metric outperforms representative RR image quality metrics, and even the FR quality metric peak signal-to-noise ratio (PSNR). Furthermore, the extracted RR features can be easily encoded and embedded into the distorted images for quality monitoring during image communications.

The fourth part investigates RR video quality assessment. The RR features are extracted to exploit the spatial information loss and the temporal statistical characteristics of the inter-frame histogram. Evaluations on video subjective quality databases demonstrate that the proposed method outperforms representative RR video quality metrics, and even FR metrics such as PSNR and SSIM, in matching subjective ratings. Furthermore, only a small number of RR features is required to represent the original video sequence (each frame requires only one parameter to depict the spatial characteristics and three for the temporal characteristics). Considering the computational complexity and the bit-rates for extracting and representing the RR features, the proposed RR quality metric can be utilized for quality monitoring during video transmission, where the RR features for perceptual quality analysis can be easily embedded into the videos or transmitted through an ancillary data channel.

The aforementioned perceptual quality metrics focus on traditional distortions, such as JPEG image compression noise and H.264 video compression noise. In the last part, we investigate the distortions introduced during the image and video retargeting process. Nowadays, with the development of consumer electronics, more and more visual signals have to be communicated between display devices of different resolutions. A retargeting algorithm adapts a source image of one resolution for display on a device of a different resolution, which may introduce distortions. We investigate the subjective responses to the perceptual quality of retargeted images, and discuss the subjective results from three perspectives: retargeting scales, retargeting methods, and source image content attributes. An image retargeting subjective quality database is built by performing a large-scale subjective study of image retargeting quality on a collection of retargeted images.
Based on the built database, several representative quality metrics for retargeted images are evaluated and discussed.

Ma, Lin. "December 2012." Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. Includes bibliographical references (leaves 185-197). Abstract also in Chinese.
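The structural similarity index (SSIM) that the first part of the thesis builds on combines luminance, contrast, and structure comparisons. A minimal single-window sketch of the standard index is shown below (illustrative only; the thesis's HE- and saliency-weighted variant, and the usual sliding-window averaging, are more elaborate):

```python
import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    """Single-window SSIM: luminance, contrast, and structure terms
    combined with the standard stabilising constants C1 and C2."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + C1) * (2 * cov + C2)) /
            ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

img = np.tile(np.arange(64.0) * 4, (64, 1))       # smooth horizontal ramp
noisy = img + np.random.default_rng(0).normal(0, 10, img.shape)
print(ssim_global(img, img))    # identical images score exactly 1.0
print(ssim_global(img, noisy))  # distortion lowers the score below 1.0
```

Any added noise strictly lowers the structure term, since var(x - y) > 0 implies 2·cov < vx + vy.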

    An evaluation of partial differential equations based digital inpainting algorithms

    Partial differential equations (PDEs) have been used to model various phenomena and tasks in different scientific and engineering endeavours. This thesis is devoted to modelling image inpainting by numerical implementations of certain PDEs. The main objectives of image inpainting include reconstructing damaged parts and filling in regions in which data/colour information is missing. Different automatic and semi-automatic approaches to image inpainting have been developed, including PDE-based, texture synthesis-based, exemplar-based, and hybrid approaches. Various challenges remain unresolved in reconstructing large missing regions and/or missing areas with highly textured surroundings. Our main aim is to address such challenges by developing new advanced schemes, with particular focus on using PDEs of different orders to preserve continuity of textural and geometric information around missing regions. We first investigated the problem of partial colour restoration in an image region whose greyscale channel is intact. A PDE-based solution is known that is modelled as minimising the total variation (TV) of gradients in the different colour channels. We extend the applicability of this model to partial inpainting in other 3-channel colour spaces (such as RGB, where information is missing in any two of the colour channels), simply by exploiting the known linear/affine relationships between different colour models in the derivation of a modified PDE solution obtained by the Euler-Lagrange minimisation of the corresponding gradient total variation. We also developed two TV models on the relations between greyscale and colour channels using the Laplacian operator and the directional derivatives of gradients. The corresponding Euler-Lagrange minimisation yields two new PDEs of different orders for partial colourisation. We implemented these solutions in both the spatial and frequency domains.
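As a toy illustration of the kind of PDE-based inpainting discussed above, the sketch below fills a masked region by iterating the discrete Laplace equation (harmonic inpainting, the simplest second-order case; the TV and higher-order models of the thesis replace the plain neighbour average with gradient-dependent weights):

```python
import numpy as np

def harmonic_inpaint(img, mask, iters=500):
    """Fill masked pixels by Jacobi iteration of the discrete Laplace
    equation: each missing pixel becomes the average of its four
    neighbours, with known pixels acting as boundary conditions."""
    u = img.astype(np.float64).copy()
    u[mask] = 0.0                          # unknowns start at zero
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]                # update only the missing region
    return u

# smooth (harmonic) test image with an interior hole: the linear ramp
# satisfies the Laplace equation, so it is recovered almost exactly
yy, xx = np.mgrid[0:32, 0:32]
img = (yy + xx).astype(np.float64)
mask = np.zeros_like(img, dtype=bool)
mask[12:18, 12:18] = True
rec = harmonic_inpaint(img, mask)
print(np.abs(rec - img)[mask].max())   # near machine precision
```

Harmonic filling is smooth but blurs edges; that limitation is exactly what motivates the higher-order and texture-aware schemes the thesis develops.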
We measure the success of these models by evaluating known image quality measures in inpainted regions for sufficiently large datasets and scenarios. The results reveal that our schemes compare well with existing algorithms, but inpainting large regions remains a challenge. Secondly, we investigate the Total Inpainting (TI) problem, where all colour channels are missing in an image region. Reviewing and implementing existing PDE-based total inpainting methods reveals that high-order PDEs, applied to each colour channel separately, perform well but are influenced by the size of the region and the quantity of texture surrounding it. Here we developed a TI scheme that benefits from our partial inpainting approach and applies two PDE methods to recover the missing regions in the image. First, we extract the (Y, Cb, Cr) channels of the image outside the missing region, apply the above PDE methods to reconstruct the missing region in the luminance channel (Y), and then use the colourisation method to recover the missing (Cb, Cr) colours in the region. We shall demonstrate that, compared to existing TI algorithms, our proposed method (using two PDE methods) performs well when tested on large datasets of natural and face images. Furthermore, this helps in understanding the impact of surrounding texture on inpainting and opens new research directions. Thirdly, we investigate existing Exemplar-Based Inpainting (EBI) methods that do not use PDEs but simultaneously propagate texture and structure into the missing region by finding similar patches within the rest of the image and copying them into the boundary of the missing region. The order of patch propagation is determined by a priority function, and the similarity is determined by matching criteria. We shall exploit recently emerging Topological Data Analysis (TDA) tools to create innovative EBI schemes, referred to as TEBI.
TDA studies the shapes of data/objects to quantify image texture in terms of connectivity and closeness properties of certain data landmarks. Such quantifications help determine the appropriate size of patch propagation, and will be used to modify the patch propagation priority function using the geometrical properties of the curvature of isophotes, and to improve the patch matching criteria by calculating correlation coefficients in the spatial, gradient, and Laplacian domains. The performance of this TEBI method will be tested by applying it to natural dataset images, resulting in improved inpainting compared with other EBI methods. Fourthly, recent hybrid inpainting techniques are reviewed, and a number of highly performing innovative hybrid techniques that combine high-order PDE methods with the TEBI method for the simultaneous rebuilding of missing texture and structure regions in an image are proposed. Such a hybrid scheme first decomposes the image into texture and structure components, and the missing regions in these components are then recovered by TEBI and PDE-based methods, respectively. The performance of our hybrid schemes will be compared with two existing hybrid algorithms. Fifthly, we turn our attention to inpainting large missing regions and develop an innovative inpainting scheme that uses the concept of seam carving to reduce this problem to inpainting a smaller missing region that can be dealt with efficiently using the inpainting schemes developed above. Seam carving resizes images based on content awareness, for both reduction and expansion, without affecting image regions that carry rich information. The missing region of the seam-carved version will be recovered by the TEBI method; the original image size is then restored by adding back the removed seams, and the missing parts of the added seams are repaired using a high-order PDE inpainting scheme.
The benefits of this approach in dealing with large missing regions are demonstrated. Extensive performance testing of the developed inpainting methods shows that they significantly outperform existing inpainting methods for such a challenging task. However, the performance is still not acceptable in recovering large missing regions in images with high texture and structure, and hence we identify remaining challenges to be investigated in the future. We shall also extend our work by investigating recently developed deep learning based image/video colourisation, with the aim of overcoming its limitations and shortcomings. Finally, we also describe our ongoing research into using TDA to detect the recently growing serious “malicious” use of inpainting to create fake images/videos.
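The seam-carving step used in the fifth contribution follows the standard dynamic-programming formulation: accumulate a minimal-energy cost map row by row, then backtrack the cheapest top-to-bottom seam. A generic sketch (not the thesis's exact pipeline, and with a trivial energy image for illustration):

```python
import numpy as np

def vertical_seam(energy):
    """Dynamic programming: cumulative minimal energy per pixel, then
    backtracking the lowest-cost seam (one column index per row)."""
    h, w = energy.shape
    M = energy.astype(np.float64).copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            M[i, j] += M[i - 1, lo:hi].min()
    seam = [int(M[-1].argmin())]
    for i in range(h - 2, -1, -1):         # backtrack from bottom row
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(M[i, lo:hi].argmin()))
    return seam[::-1]

def remove_seam(img, seam):
    """Delete one pixel per row along the seam."""
    return np.array([np.delete(row, j) for row, j in zip(img, seam)])

img = np.ones((5, 6))
img[:, 3] = 0.0                        # a zero-energy (uninformative) column
s = vertical_seam(img)
print(s)                               # the seam follows column 3
print(remove_seam(img, s).shape)       # one column narrower: (5, 5)
```

In a real pipeline the energy would be a gradient magnitude or saliency map rather than the raw image.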

    Digital image forensics via meta-learning and few-shot learning

    Digital images are a substantial portion of the information conveyed by social media, the Internet, and television in our daily life. In recent years, digital images have become not only one of the main public information carriers, but also a crucial piece of evidence. The widespread availability of low-cost, user-friendly, and potent image editing software and mobile phone applications facilitates altering images without professional expertise. Consequently, safeguarding the originality and integrity of digital images has become difficult. Forgers commonly use digital image manipulation to transmit misleading information. Digital image forensics investigates the irregular patterns that might result from image alteration; it is crucial to information security. Over the past several years, machine learning techniques have been effectively used to identify image forgeries. Convolutional Neural Networks (CNNs) are a frequent machine learning approach, and a standard CNN model can distinguish between original and manipulated images. In this dissertation, two CNN models are introduced to recognize seam carving and Gaussian filtering. To train a conventional CNN model for a new, similar image forgery detection task, one must start from scratch. Additionally, many types of tampered image data are challenging to acquire or simulate. Meta-learning is an alternative learning paradigm in which a machine learning model gains experience across numerous related tasks and uses this expertise to improve its future learning performance. Few-shot learning is a method for acquiring knowledge from few data; it can classify images with as few as one or two examples per class. Inspired by meta-learning and few-shot learning, this dissertation proposes a prototypical networks model capable of resolving a collection of related image forgery detection problems. Unlike traditional CNN models, the proposed prototypical networks model does not need to be trained from scratch for a new task. Additionally, it drastically decreases the quantity of training images required.
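The prototypical-networks idea can be sketched in a few lines: each class is represented by the mean of its embedded support examples, and a query is assigned to the nearest prototype. The toy 2-way, 2-shot "episode" and its embeddings below are invented for illustration; a real model would learn the embedding network:

```python
import numpy as np

def prototypes(support, labels):
    """One prototype per class: the mean of that class's embedded
    support examples (the embedding is assumed precomputed here)."""
    lab = np.array(labels)
    classes = sorted(set(labels))
    return classes, np.stack([support[lab == c].mean(axis=0)
                              for c in classes])

def classify(query, classes, protos):
    """Nearest-prototype rule under squared Euclidean distance."""
    d = ((protos - query) ** 2).sum(axis=1)
    return classes[int(d.argmin())]

# hypothetical embeddings of 'original' (0) vs 'tampered' (1) patches
support = np.array([[0.1, 0.0], [0.0, 0.1],
                    [1.0, 0.9], [0.9, 1.0]])
labels = [0, 0, 1, 1]
classes, protos = prototypes(support, labels)
print(classify(np.array([0.05, 0.05]), classes, protos))  # -> 0
print(classify(np.array([0.95, 0.95]), classes, protos))  # -> 1
```

Because only the prototypes change between tasks, a new forgery type needs only a few labelled support examples rather than retraining from scratch.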

    Preserving Trustworthiness and Confidentiality for Online Multimedia

    Technology advancements in the areas of mobile computing, social networks, and cloud computing have rapidly changed the way we communicate and interact. The wide adoption of media-oriented mobile devices such as smartphones and tablets enables people to capture information in various media formats, and offers them a rich platform for media consumption. The proliferation of online services and social networks makes it possible to store personal multimedia collections online and share them with family and friends anytime, anywhere. Considering the increasing impact of digital multimedia and the trend of cloud computing, this dissertation explores the problem of how to evaluate the trustworthiness and preserve the confidentiality of online multimedia data. The dissertation consists of two parts. The first part examines the problem of evaluating the trustworthiness of multimedia data distributed online. Given the digital nature of multimedia data, editing and tampering of the multimedia content becomes very easy. Therefore, it is important to analyze and reveal the processing history of a multimedia document in order to evaluate its trustworthiness. We propose a new forensic technique called "Forensic Hash", which draws synergy between the two related research areas of image hashing and non-reference multimedia forensics. A forensic hash is a compact signature capturing important information from the original multimedia document to assist forensic analysis and reveal the processing history of a multimedia document under question. Our proposed technique is shown to have the advantage of being compact and offering efficient and accurate analysis for forensic questions that cannot be easily answered by conventional forensic techniques. The answers that we obtain from the forensic hash provide valuable information on the trustworthiness of online multimedia data. The second part of this dissertation addresses the confidentiality of multimedia data stored with online services. The emerging cloud computing paradigm makes it attractive to store private multimedia data online for easy access and sharing. However, the potential of cloud services cannot be fully reached unless the issue of how to preserve the confidentiality of sensitive data stored in the cloud is addressed. In this dissertation, we explore techniques that enable confidentiality-preserving search of encrypted multimedia, which can play a critical role in secure online multimedia services. Techniques from image processing, information retrieval, and cryptography are jointly and strategically applied to allow efficient rank-ordered search over an encrypted multimedia database while preserving data confidentiality against malicious intruders and service providers. We demonstrate the high efficiency and accuracy of the proposed techniques and provide a quantitative comparative study with conventional techniques based on heavy-weight cryptographic primitives.
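A forensic hash is a compact signature computed from the original document against which a questioned copy is later analysed. As a much-simplified stand-in (not the dissertation's construction), the average-hash style signature below shows how a short bit string can survive mild processing while separating unrelated content:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Compact signature: block-average the image down to
    hash_size x hash_size cells, then threshold each cell at the
    mean, yielding a hash_size**2-bit string."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = (img[:bh * hash_size, :bw * hash_size]
             .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3)))
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return int((a != b).sum())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
bright = np.clip(img + 0.05, 0, 1)      # mild global processing
other = rng.random((64, 64))            # unrelated content
print(hamming(average_hash(img), average_hash(img)))     # 0
print(hamming(average_hash(img), average_hash(bright)))  # small
print(hamming(average_hash(img), average_hash(other)))   # large
```

A real forensic hash additionally encodes side information (e.g. feature geometry) so the analyst can estimate *which* operations were applied, not just whether the content changed.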

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision tasks such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
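One hallmark capability, post-capture refocusing, can be sketched by shift-and-sum: each sub-aperture view is translated in proportion to its angular coordinates, and the views are averaged. The tiny synthetic light field below (a single bright dot with one pixel of disparity per angular step) is invented for illustration:

```python
import numpy as np

def refocus(lf, shift):
    """Shift-and-sum refocusing: translate each (u, v) sub-aperture
    view by shift*(u - cu, v - cv) relative to the centre view,
    then average. lf has shape (U, V, H, W)."""
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy, dx = round(shift * (u - cu)), round(shift * (v - cv))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

U = V = 3
H = W = 16
lf = np.zeros((U, V, H, W))
for u in range(U):
    for v in range(V):
        # dot position moves 1 px per unit of angular coordinate,
        # i.e. the dot lies off the nominal focal plane
        lf[u, v, 8 - (u - 1), 8 - (v - 1)] = 1.0

sharp = refocus(lf, shift=1.0)    # shift matches the disparity
blurry = refocus(lf, shift=0.0)   # plain average: dot is smeared
print(sharp.max(), blurry.max())
```

Choosing `shift` selects the depth plane brought into focus; sweeping it yields a focal stack, which is also the starting point for many depth-estimation methods.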

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. This volume also features benchmarks, comparative evaluations and reviews

    Finding Objects of Interest in Images using Saliency and Superpixels

    The ability to automatically find objects of interest in images is useful in the areas of compression, indexing and retrieval, re-targeting, and so on. There are two classes of such algorithms: those that find any object of interest with no prior knowledge, independent of the task, and those that find specific objects of interest known a priori. The former class of algorithms tries to detect objects in images that stand out, i.e., are salient, by virtue of being different from the rest of the image, and consequently capture our attention. The detection is generic in this case, as there is no specific object we are trying to locate. The latter class of algorithms detects specific known objects of interest and often requires training using features extracted from known examples. In this thesis we address various aspects of finding objects of interest under the topics of saliency detection and object detection. We present two saliency detection algorithms that rely on the principle of center-surround contrast. These two algorithms are shown to be superior to several state-of-the-art techniques in terms of precision and recall measures with respect to a ground truth. They output full-resolution saliency maps, are simpler to implement, and are computationally more efficient than most existing algorithms. We further establish the relevance of our saliency detection algorithms by using them for the known applications of object segmentation and image re-targeting. We first present three different techniques for salient object segmentation using our saliency maps, based on clustering, graph cuts, and geodesic distance based labeling. We then demonstrate the use of our saliency maps for a popular technique of content-aware image resizing and compare the result with that of existing methods. Our saliency maps prove to be a much more effective replacement for conventional gradient maps for providing automatic content-awareness.
Just as it is important to find regions of interest in images, it is also important to find interesting images within a large collection of images. We therefore extend the notion of saliency detection from images to image databases and propose an algorithm for finding salient images in a database. Apart from finding such images, we also present two novel techniques for creating visually appealing summaries in the form of collages and mosaics. Finally, we address the problem of finding specific known objects of interest in images. Specifically, we deal with the feature extraction step that is a prerequisite for any technique in this domain. In this context, we first present a superpixel segmentation algorithm that outperforms previous algorithms in terms of the quantitative measures of under-segmentation error and boundary recall. Our superpixel segmentation algorithm also offers several other advantages over existing algorithms, such as compactness, uniform size, control over the number of superpixels, and computational efficiency. We prove the effectiveness of our superpixels by deploying them in existing algorithms, specifically an object class detection technique and a graph-based algorithm, and improving their performance. We also present the result of using our superpixels in a technique for detecting mitochondria in noisy medical images.
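The center-surround contrast principle behind the saliency detectors can be sketched by comparing a lightly smoothed image against its global mean (in the spirit of full-resolution contrast-based maps; this is an illustrative simplification, not the thesis's exact algorithm):

```python
import numpy as np

def box_blur(img, r=1):
    """Mean filter of radius r (a cheap stand-in for a Gaussian blur)."""
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / (2 * r + 1) ** 2

# center-surround contrast: squared distance between the (lightly
# smoothed) pixel value and the global mean of the image
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0                 # small distinctive object
sal = (box_blur(img) - img.mean()) ** 2
sal /= sal.max()                        # full-resolution saliency map
print(sal[15, 15])   # object region: close to 1
print(sal[2, 2])     # background: close to 0
```

Because every pixel gets its own contrast value, the map stays at full resolution, which is what makes it directly usable for segmentation and content-aware resizing.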

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, in experiments over two well-known datasets (Weizmann and MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
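The robustness argument can be made concrete: under a Student's t model, an EM-style update down-weights points far from the current location estimate, so a gross outlier barely moves it, whereas the Gaussian maximum-likelihood mean is dragged away. A minimal sketch (unit scale and fixed degrees of freedom assumed for simplicity):

```python
import numpy as np

def t_location(x, nu=3.0, iters=50):
    """EM-style robust location estimate under a Student's t model:
    each point gets weight (nu+1)/(nu + (x - mu)^2), so outliers
    contribute little to the weighted mean."""
    mu = np.median(x)                    # robust starting point
    for _ in range(iters):
        w = (nu + 1.0) / (nu + (x - mu) ** 2)
        mu = (w * x).sum() / w.sum()
    return mu

x = np.array([-0.3, -0.1, 0.0, 0.1, 0.2, 50.0])  # one gross outlier
print(x.mean())          # Gaussian MLE: dragged toward the outlier
print(t_location(x))     # t-based estimate stays near the cluster
```

The same down-weighting happens inside each mixture component of the t-mixture HMM, which is why outlier feature vectors perturb the learned observation densities far less than with Gaussian mixtures.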

    Task-based Adaptation of Graphical Content in Smart Visual Interfaces

    To be effective, visual representations must be adapted to their respective context of use, especially in so-called Smart Visual Interfaces, which strive to present precisely the information required for the task at hand. This thesis proposes a generic approach that facilitates the automatic generation of task-specific visual representations from suitable task descriptions. It is discussed how the approach is applied to four principal content types (raster images, 2D vector and 3D graphics, and data visualizations), and how existing display techniques can be integrated into the approach.