    Adaptive spectrum transformation by topology preserving on indefinite proximity data

    Similarity-based representations generate indefinite matrices, which are inconsistent with classical kernel-based learning frameworks. In this paper, we present an adaptive spectrum transformation method that provides a positive semidefinite (psd) kernel consistent with the intrinsic geometry of the proximity data. In the proposed method, an indefinite similarity matrix is rectified by maximizing the Euclidean factor (EF) criterion, which measures how close the resulting feature space is to a Euclidean space. This maximization is achieved by modifying volume elements through a conformal transform applied to the similarity matrix. We performed several experiments to evaluate the proposed method against the flip, clip, shift, and square spectrum transformation techniques on similarity matrices. Applying the resulting psd matrices as kernels in dimensionality reduction and clustering problems confirms that the proposed approach adapts to the data and preserves its topological information. Our experiments also show that in classification applications the advantage of the proposed method is considerable when the negative eigenfraction of the similarity matrix is significant.
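
    The flip, clip, shift, and square transforms mentioned above are standard baselines for rectifying an indefinite similarity matrix. Below is a minimal Python sketch of those baselines (not the paper's adaptive method), together with the negative eigenfraction used to quantify indefiniteness; the function names and the NumPy-based formulation are illustrative.

        import numpy as np

        def spectrum_transform(S, mode="clip"):
            """Rectify a symmetric indefinite similarity matrix S by modifying its spectrum."""
            S = 0.5 * (S + S.T)                              # enforce symmetry
            eigvals, eigvecs = np.linalg.eigh(S)             # real eigendecomposition
            if mode == "clip":
                eigvals = np.maximum(eigvals, 0.0)           # drop negative eigenvalues
            elif mode == "flip":
                eigvals = np.abs(eigvals)                    # reflect negative eigenvalues
            elif mode == "shift":
                eigvals = eigvals - min(eigvals.min(), 0.0)  # translate the whole spectrum to >= 0
            elif mode == "square":
                eigvals = eigvals ** 2                       # equivalent to using S @ S.T
            else:
                raise ValueError(f"unknown mode: {mode}")
            return (eigvecs * eigvals) @ eigvecs.T           # reassemble a psd kernel

        def negative_eigenfraction(S):
            """Fraction of spectral mass carried by negative eigenvalues."""
            eigvals = np.linalg.eigvalsh(0.5 * (S + S.T))
            return np.abs(eigvals[eigvals < 0]).sum() / np.abs(eigvals).sum()

    The psd matrix returned by spectrum_transform can then be used directly as a kernel in standard dimensionality reduction, clustering, or classification algorithms.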

    Online Multiple Kernel Similarity Learning for Visual Search

    Design, implementation, and evaluation of scalable content-based image retrieval techniques.

    Wong, Yuk Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 95-100). Abstracts in English and Chinese.
    Contents: Abstract; Acknowledgement.
    Chapter 1, Introduction: Overview; Contribution; Organization of This Work.
    Chapter 2, Literature Review: Content-based Image Retrieval (Query Technique; Relevance Feedback; Previously Proposed CBIR Systems); Invariant Local Feature; Invariant Local Feature Detector (Harris Corner Detector; DOG Extrema Detector; Harris-Laplacian Corner Detector; Harris-Affine Covariant Detector); Invariant Local Feature Descriptor (Scale Invariant Feature Transform (SIFT); Shape Context; PCA-SIFT; Gradient Location and Orientation Histogram (GLOH); Geodesic-Intensity Histogram (GIH); Experiment); Feature Matching (Matching Criteria; Distance Measures; Searching Techniques).
    Chapter 3, A Distributed Scheme for Large-Scale CBIR: Overview; Related Work; Scalable Content-Based Image Retrieval Scheme (Overview of Our Solution; Locality-Sensitive Hashing; Scalable Indexing Solutions; Disk-Based Multi-Partition Indexing; Parallel Multi-Partition Indexing); Feature Representation; Empirical Evaluation (Experimental Testbed; Performance Evaluation Metrics; Experimental Setup; Experiment I: Disk-Based Multi-Partition Indexing Approach; Experiment II: Parallel-Based Multi-Partition Indexing Approach); Application to WWW Image Retrieval; Summary.
    Chapter 4, Image Retrieval System for IND Detection: Overview (Motivation; Related Work; Objective; Contribution); Database Construction (Image Representations; Index Construction; Keypoint and Image Lookup Tables); Database Query (Matching Strategies; Verification Processes; Image Voting); Performance Evaluation (Evaluation Metrics; Results; Summary).
    Chapter 5, Shape-SIFT Feature Descriptor: Overview; Related Work; SHAPE-SIFT Descriptors (Orientation Assignment; Canonical Orientation Determination; Keypoint Descriptor); Performance Evaluation; Summary.
    Chapter 6, Conclusions and Future Work: Conclusions; Future Work.
    Appendix A: Publication. Bibliography.

    Efficient image duplicate detection based on image analysis

    This thesis is about the detection of duplicated images. More precisely, the developed system is able to discriminate possibly modified copies of original images from other unrelated images. The proposed method is referred to as content-based since it relies only on content analysis techniques rather than on image tagging as done in watermarking. The proposed content-based duplicate detection system classifies a test image by associating it with a label that corresponds to one of the original known images. The classification is performed in four steps. In the first step, the test image is described by global statistics about its content. In the second step, the most likely original images are efficiently selected using a spatial indexing technique called the R-Tree. The third step consists in using binary detectors to estimate the probability that the test image is a duplicate of the original images selected in the second step. Indeed, each original image known to the system is associated with an adapted binary detector, based on a support vector classifier, that estimates the probability that a test image is one of its duplicates. Finally, the fourth and last step consists in choosing the most probable original by picking the one with the highest estimated probability. Comparative experiments have shown that the proposed content-based image duplicate detector greatly outperforms detectors that use the same image description but rely on simpler distance functions rather than a classification algorithm. Additional experiments compare the proposed system with existing state-of-the-art methods. Accordingly, it also outperforms the perceptual distance function method, which uses similar statistics to describe the image. While the proposed method is slightly outperformed by the key-points method, it is five to ten times less complex in terms of computational requirements. Finally, note that the nature of this thesis is essentially exploratory, since it is one of the first attempts to apply machine learning techniques to the relatively recent field of content-based image duplicate detection.
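
    As a rough illustration of the four-step pipeline described above, the sketch below wires the pieces together in Python. It is a simplified stand-in, not the thesis implementation: the test image is assumed to already be summarised as a global-statistics feature vector (step one), a KD-tree replaces the R-Tree used for candidate selection, and each original image is paired with a probability-calibrated scikit-learn SVM as its binary detector; all names are illustrative.

        import numpy as np
        from scipy.spatial import cKDTree
        from sklearn.svm import SVC

        class DuplicateDetector:
            def __init__(self, originals, training_sets):
                """originals: {label: global-statistics vector of the original image};
                training_sets: {label: (X, y)} with duplicate / non-duplicate examples."""
                self.labels = list(originals)
                # Step 2: spatial index over the originals (KD-tree standing in for an R-Tree).
                self.index = cKDTree(np.array([originals[l] for l in self.labels]))
                # Step 3: one binary detector (support vector classifier) per original image.
                self.detectors = {l: SVC(probability=True).fit(*training_sets[l])
                                  for l in self.labels}

            def classify(self, features, k=5):
                # Step 2: select the k most likely originals via the spatial index.
                _, idx = self.index.query(features, k=min(k, len(self.labels)))
                candidates = [self.labels[i] for i in np.atleast_1d(idx)]
                # Step 3: estimate the probability that the test image duplicates each candidate.
                probs = {l: self.detectors[l].predict_proba(features[None, :])[0, 1]
                         for l in candidates}
                # Step 4: return the most probable original.
                return max(probs, key=probs.get)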

    Camera-independent learning and image quality assessment for super-resolution

    An increasing number of applications require high-resolution images in situations where access to the sensor and knowledge of its specifications are limited. In this thesis, the problem of blind super-resolution is addressed, defined here as the estimation of a high-resolution image from one or more low-resolution inputs under the condition that the degradation model parameters are unknown. The assessment of super-resolved results, using objective measures of image quality, is also addressed. Learning-based methods have been successfully applied to the single-frame super-resolution problem in the past. However, sensor characteristics such as the Point Spread Function (PSF) must often be known. In this thesis, a learning-based approach is adapted to work without knowledge of the PSF, thus making the framework camera-independent. However, the goal is not only to super-resolve an image under this limitation, but also to provide an estimate of the best PSF, consisting of a theoretical model with one unknown parameter. In particular, two extensions of a method performing belief propagation on a Markov Random Field are presented. The first method finds the best PSF parameter by searching for the minimum mean distance between training examples and patches from the input image. In the second method, the best PSF parameter and the super-resolution result are found simultaneously by providing a range of possible PSF parameters from which the super-resolution algorithm chooses. For both methods, a first estimate is obtained through blind deconvolution and an uncertainty is calculated in order to restrict the search. Both camera-independent adaptations are compared and analyzed in various experiments, and a set of key parameters is varied to determine its effect on both the super-resolution and the PSF parameter recovery results. The use of quality measures is thus essential to quantify the improvements obtained from the algorithms. A set of measures is chosen that represents different aspects of image quality: signal fidelity, perceptual quality, and the localization and scale of edges. Results indicate that both methods improve similarity to the ground truth and can in general refine the initial PSF parameter estimate towards the true value. Furthermore, the similarity measure results show that the chosen learning-based framework consistently improves a measure designed for perceptual quality.
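
    The PSF-parameter search in the first extension can be illustrated with a short Python sketch. The assumptions here are not from the thesis: the PSF is modelled as an isotropic Gaussian with unknown width sigma, "training examples" are patches cut from low-resolution versions of training images simulated with each candidate PSF, and the mean nearest-neighbour patch distance serves as the search criterion; the belief-propagation super-resolution step itself is omitted.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.spatial import cKDTree

        def extract_patches(image, size=7, step=3):
            """Collect flattened size x size patches on a regular grid."""
            h, w = image.shape
            return np.array([image[i:i + size, j:j + size].ravel()
                             for i in range(0, h - size + 1, step)
                             for j in range(0, w - size + 1, step)])

        def estimate_psf_sigma(lr_image, hr_training_images, sigma_grid, scale=2):
            """Pick the Gaussian PSF width whose simulated low-resolution training
            patches are, on average, closest to the patches of the input image."""
            input_patches = extract_patches(lr_image)
            scores = []
            for sigma in sigma_grid:
                # Simulate low-resolution training examples with the candidate PSF.
                lr_train = [gaussian_filter(img, sigma)[::scale, ::scale]
                            for img in hr_training_images]
                train_patches = np.vstack([extract_patches(img) for img in lr_train])
                # Mean distance between input patches and their nearest training examples.
                dists, _ = cKDTree(train_patches).query(input_patches)
                scores.append(dists.mean())
            return sigma_grid[int(np.argmin(scores))]

    In the actual framework, an initial estimate obtained through blind deconvolution and its associated uncertainty would restrict sigma_grid to a narrow range around that estimate.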