
    An entropy-histogram approach for image similarity and face recognition

    Image similarity and image recognition are rapidly growing technologies because of their wide use in digital image processing. The face of a specific person can be recognized by measuring the similarity between images of that person's face, and this is what we address in detail in this paper. We design two new measures that serve image similarity and image recognition simultaneously. The proposed measures are based mainly on a combination of information theory and the joint histogram. Information theory has a high capability to predict the relationship between image intensity values, while the joint histogram selects a set of local pixel features to construct a multidimensional histogram. The proposed approach incorporates the concepts of entropy and a modified 1D version of the 2D joint histogram of the two images under test. Two entropy measures were considered, Shannon and Renyi, giving rise to two joint-histogram-based, information-theoretic similarity measures: SHS and RSM. The proposed methods have been tested against the powerful Zernike-moments approach with Euclidean and Minkowski distance metrics for image recognition, and against well-known statistical approaches for image similarity such as the structural similarity index measure (SSIM), the feature similarity index measure (FSIM) and the feature-based structural measure (FSM). A comparison with a recent information-theoretic measure (ISSIM) has also been considered. A measure of recognition confidence is introduced in this work, based on the similarity distance between the best match and the second-best match in the face database during recognition. Simulation results on the AT&T and FEI face databases show that the proposed approaches outperform existing image recognition methods in terms of recognition confidence, while results on the TID2008 and IVC image databases show that SHS and RSM outperform existing similarity methods in terms of similarity confidence.
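As a rough illustration of the entropy-plus-joint-histogram idea, the sketch below builds a normalized 2D joint histogram of two images and derives a Shannon-entropy-based similarity (here plain mutual information). The paper's exact SHS/RSM formulas and its 1D modification of the joint histogram are not given in the abstract, so this is an assumption-laden stand-in, not the authors' measure; a Renyi entropy is included only to show the second entropy family mentioned.

```python
import numpy as np

def joint_histogram(a, b, bins=16):
    # normalized 2D joint histogram of co-located pixel intensities (0..255)
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                             bins=bins, range=[[0, 256], [0, 256]])
    return h / h.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi_entropy(p, alpha=2.0):
    # Renyi entropy of order alpha; tends to Shannon entropy as alpha -> 1
    return np.log2(np.sum(p[p > 0] ** alpha)) / (1.0 - alpha)

def mi_similarity(a, b, bins=16):
    # mutual information I(A;B) = H(A) + H(B) - H(A,B); larger = more similar
    pab = joint_histogram(a, b, bins)
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    return shannon_entropy(pa) + shannon_entropy(pb) - shannon_entropy(pab)
```

Since I(A;B) is bounded above by min(H(A), H(B)), an image always matches itself at least as well as it matches any other image, which is the basic property a recognition score needs.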

    Locally Orderless Registration

    Image registration is an important tool for medical image analysis; it brings images into the same reference frame by warping the coordinate field of one image such that some dissimilarity measure is minimized. We study similarity in image registration in the context of Locally Orderless Images (LOI), which is the natural way to study density estimates and reveals the three fundamental scales: the measurement scale, the intensity scale, and the integration scale. This paper has three main contributions. Firstly, we rephrase a large set of popular similarity measures into a common framework, which we refer to as Locally Orderless Registration, and which makes full use of the features of local histograms. Secondly, we extend the theoretical understanding of local histograms. Thirdly, we use our framework to compare two state-of-the-art intensity density estimators for image registration, the Parzen Window (PW) and the Generalized Partial Volume (GPV), and we demonstrate their differences on a popular similarity measure, Normalized Mutual Information (NMI). We conclude that complicated similarity measures such as NMI may be evaluated almost as fast as simple measures such as the Sum of Squared Distances (SSD), regardless of the choice between PW and GPV. Also, GPV is an asymmetric measure, so PW is our preferred choice.
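The difference between hard binning and a Parzen Window density estimate can be sketched in 1D as follows. This is only a minimal illustration of the kernel idea, not the paper's registration framework or its GPV estimator; the kernel width `sigma` and the intensity grid are arbitrary choices here.

```python
import numpy as np

def parzen_density(samples, grid, sigma=2.0):
    # Parzen-window estimate: each intensity sample contributes a Gaussian
    # bump on the grid, giving a smooth density instead of hard bin counts
    d = grid[None, :] - samples[:, None]          # (n_samples, n_grid)
    k = np.exp(-0.5 * (d / sigma) ** 2)
    p = k.sum(axis=0)
    return p / (p.sum() * (grid[1] - grid[0]))    # integrates to ~1 on the grid
```

The same smoothing applied to a joint intensity histogram is what makes measures such as NMI differentiable with respect to the warp, which is why PW-style estimators are used in registration.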

    Diversity, Assortment, Dissimilarity, Variety: A Study of Diversity Measures Using Low Level Features for Video Retrieval

    In this paper we present a number of methods for re-ranking video search results in order to introduce diversity into the result set. The usefulness of these approaches is evaluated against similarity-based measures on the TRECVID 2007 collection and tasks [11]. In terms of the MAP of the search results, we find that some of our approaches perform as well as similarity-based methods. We also find that some of them improve P@N for lower values of N. The most successful of these approaches was then implemented in an interactive search system for the TRECVID 2008 interactive search tasks. Responses from the users indicate that they find the more diverse search results extremely useful.
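The abstract does not spell out the re-ranking methods, so the sketch below is a generic greedy diversifier in the spirit of maximal marginal relevance, not the authors' measures: it repeatedly picks the result that best trades relevance against redundancy with what is already selected. The trade-off weight `lam` and the similarity function are assumptions.

```python
def diversify(results, sim, k, lam=0.7):
    # results: list of (id, relevance); sim(a, b) -> similarity in [0, 1]
    # greedy MMR-style re-ranking: relevance minus redundancy penalty
    selected = []
    pool = list(results)
    while pool and len(selected) < k:
        def score(item):
            rid, rel = item
            redundancy = max((sim(rid, s) for s, _ in selected), default=0.0)
            return lam * rel - (1 - lam) * redundancy
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best)
    return selected
```

With `lam` near 1 this reduces to ranking by relevance alone; lowering it pushes near-duplicate shots down the list, which is the effect measured by the diversity-aware P@N comparisons above.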

    An improved spatiogram similarity measure for robust object localisation

    Spatiograms were introduced as a generalisation of the commonly used histogram, providing the flexibility of adding spatial context to the feature distribution information of a histogram. The originally proposed spatiogram comparison measure has significant disadvantages, which we detail here. We propose an improved measure based on deriving the Bhattacharyya coefficient for an infinite number of spatial-feature bins. Its advantages over the previous measure and over histogram-based matching are demonstrated in object tracking scenarios.
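For orientation, the sketch below shows the plain histogram Bhattacharyya coefficient next to a toy spatially weighted variant. The paper's derived measure weights each bin using the bins' spatial means and covariances; the version here uses only the means with a fixed isotropic `sigma`, so it is an illustrative simplification, not the proposed measure.

```python
import numpy as np

def histogram_bhattacharyya(p, q):
    # classic histogram-only Bhattacharyya coefficient, in [0, 1]
    return np.sum(np.sqrt(p * q))

def spatiogram_similarity(p, mu_p, q, mu_q, sigma=8.0):
    # per-bin spatial weight: Gaussian falloff with the distance between the
    # bins' mean pixel positions (mu_* has shape (bins, 2)); a simplified
    # stand-in for the full mean-and-covariance weighting in the paper
    d2 = np.sum((mu_p - mu_q) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return np.sum(w * np.sqrt(p * q))
```

Two regions with identical colour histograms but differently placed colours score 1.0 under the plain coefficient yet well below 1.0 under the spatial variant, which is exactly the ambiguity spatiograms were introduced to resolve.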

    Similarity measures for mid-surface quality evaluation

    Mid-surface models are widely used in engineering analysis to simplify the analysis of thin-walled parts, but it can be difficult to ensure that a mid-surface model is representative of the solid part from which it was generated. This paper proposes two similarity measures for evaluating the quality of a mid-surface model by comparing it to a solid model of the same part: firstly, a geometric similarity evaluation technique based on the Hausdorff distance, and secondly, a topological similarity evaluation method that uses geometry-graph attributes as the basis for comparison. Both measures provide local and global similarity evaluation for the models. The proposed methods have been implemented in a software demonstrator and tested on a selection of representative models. They have been found effective for identifying geometric and topological errors in mid-surface models and are applicable to a wide range of practical thin-walled designs.
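The Hausdorff distance behind the geometric measure can be computed between two point sets sampled from the surfaces, as in the brute-force sketch below. How densely the models are sampled, and the paper's local/global aggregation on top of this, are details the abstract does not give.

```python
import numpy as np

def hausdorff(A, B):
    # symmetric Hausdorff distance between point sets A (n,3) and B (m,3):
    # the worst-case nearest-neighbour deviation in either direction
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # (n, m) pairwise
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

Because it takes the maximum over per-point minima, a single region where the mid-surface strays from the solid dominates the score, making it a natural detector of localized mid-surfacing errors.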

    How is Gaze Influenced by Image Transformations? Dataset and Model

    Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time-consuming and expensive. Most current studies on human attention and saliency modeling have used high-quality, stereotyped stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset comprising fixations of 10 observers over 1900 images degraded by 19 types of transformations. Second, by analyzing eye movements, we find that observers look at different locations over transformed versus original images. Third, we utilize the new data over transformed images, called data augmentation transformations (DATs), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas some other DATs that severely impact human gaze degrade performance. These label-preserving, valid augmentation transformations provide a solution for enlarging existing saliency datasets. Finally, we introduce a novel saliency model based on a generative adversarial network (dubbed GazeGAN). A modified UNet is proposed as the generator of GazeGAN, which combines classic skip connections with a novel center-surround connection (CSC) in order to leverage multi-level features. We also propose a histogram loss based on an Alternative Chi-Square Distance (ACS HistLoss) to refine the saliency map in terms of luminance distribution. Extensive experiments and comparisons over 3 datasets indicate that GazeGAN achieves the best performance in terms of popular saliency evaluation metrics and is more robust to various perturbations. Our code and data are available at: https://github.com/CZHQuality/Sal-CFS-GAN
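A histogram loss of this kind compares the luminance distributions of the predicted and ground-truth saliency maps. The sketch below uses the standard symmetric chi-square distance between normalized histograms; the exact form of the paper's "Alternative" chi-square variant is not given in the abstract, so treat this as a generic baseline, not ACS HistLoss itself.

```python
import numpy as np

def chi2_hist_distance(p, q, eps=1e-8):
    # symmetric chi-square distance between two histograms, normalized to
    # pmfs first; 0 for identical distributions, ~1 for disjoint support
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))
```

In a training loop this would be computed on differentiable soft histograms of the two maps and added to the adversarial loss, penalizing predictions whose overall luminance distribution drifts from the ground truth.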

    A framework for quantitative analysis of user-generated spatial data

    This paper proposes a new framework for automated analysis of game-play metrics, aiding game designers in finding the critical aspects of the game caused by factors such as design modifications or changes in playing style. The core of the algorithm measures similarity between the spatial distributions of user-generated in-game events and automatically ranks them in order of importance. The feasibility of the method is demonstrated on a data set collected from a modern, multiplayer First Person Shooter, together with examples of its use. The proposed framework can accompany traditional testing tools and make the game design process more efficient.
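The abstract does not name the spatial similarity measure, so the sketch below is a simple stand-in: bin event positions (deaths, captures, etc.) into normalized 2D heatmaps over the map extent and compare them by histogram intersection. Bin count and the intersection choice are assumptions for illustration.

```python
import numpy as np

def event_heatmap(xy, extent, bins=16):
    # bin event positions (n, 2) over the map into a normalized 2D heatmap;
    # extent = [[xmin, xmax], [ymin, ymax]] in map coordinates
    h, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins, range=extent)
    return h / max(h.sum(), 1)

def heatmap_similarity(h1, h2):
    # histogram intersection: 1.0 for identical spatial distributions,
    # 0.0 when the two event sets occupy disjoint regions of the map
    return np.sum(np.minimum(h1, h2))
```

Ranking event types by how much their heatmap changed between two builds (low similarity = large change) gives the kind of automatic importance ordering the framework describes.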