
    An Information Theoretic Approach to Content Based Image Retrieval.

    We propose an information theoretic approach to the representation and comparison of color features in digital images to handle various problems in the area of content-based image retrieval. The interpretation of color histograms as joint probability density functions enables a wide range of concepts from information theory to be applied to the extraction of color features from images and the computation of similarity between pairs of images. The entropy of an image is a measure of the randomness of its color distribution. Rather than replacing color histograms as an image representation, we demonstrate that image entropy can be used to augment color histograms for more efficient image retrieval. We propose an indexing algorithm in which image entropy drastically reduces the search space for color histogram computations. Experimental tests on a database of 10,000 images suggest that the entropy-based indexing algorithm scales to retrieval over large image databases. We also propose a new similarity measure, the maximum relative entropy measure, for comparing image feature vectors that represent probability density functions. This measure improves on the Kullback-Leibler divergence in that it is non-negative and satisfies the identity and symmetry axioms. Finally, we propose a new usability paradigm, Query By Example Sets (QBES), that allows users, particularly novices, to express queries in terms of multiple images.
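    The two information-theoretic ingredients above can be sketched in a few lines of Python. Note the abstract does not define the maximum relative entropy measure; taking the maximum of the two Kullback-Leibler directions is an assumed reading chosen because it matches the stated properties (non-negative, symmetric, zero only for identical distributions), not the paper's exact formula.

    ```python
    import numpy as np

    def histogram_entropy(hist):
        """Shannon entropy (bits) of a color histogram viewed as a pdf."""
        p = np.asarray(hist, dtype=float)
        p = p / p.sum()
        nz = p[p > 0]  # 0 * log(0) = 0 by convention
        return float(-np.sum(nz * np.log2(nz)))

    def max_relative_entropy(p, q, eps=1e-12):
        """Illustrative symmetrization: max of the two KL directions."""
        p = np.asarray(p, float); q = np.asarray(q, float)
        p = np.clip(p / p.sum(), eps, None)
        q = np.clip(q / q.sum(), eps, None)
        kl_pq = float(np.sum(p * np.log(p / q)))
        kl_qp = float(np.sum(q * np.log(q / p)))
        return max(kl_pq, kl_qp)
    ```

    In the indexing scheme described above, `histogram_entropy` would be precomputed per image, so that only images whose entropy is close to the query's need a full histogram comparison.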

    Image Slicing and Statistical Layer Approaches for Content-Based Image Retrieval

    Two new approaches to colour feature representation and comparison in digital images are proposed to handle various problems in the field of content-based image retrieval. The first is a double-layered system based on a new image slicing technique combined with statistical features extracted and compared in each layer (ISSL). In the first layer, the image database is filtered by similarity of brightness to the query image; in the second layer, the candidate images retrieved by the first layer are ranked by similarity of their contrast values to the query image. Although different distance measures are available, the city-block (L1-norm) distance is used for its speed and accuracy. Experiments on database sets containing different numbers of images show that the approach is scalable to varying database sizes, robust, accurate, and fast. A comparison with the colour histogram approach shows that the proposed system is more accurate and considerably faster. A new paradigm for choosing a proper threshold value is proposed, based on the autocorrelation of the distance vector. Moreover, an image retrieval system using entropy as a visual discriminator is developed and compared with ISSL; the results show that ISSL achieves better precision and reaches higher recall levels than the entropy approach. The second proposed technique for colour-based retrieval is the eigenvalues approach. Interpreting the eigenvalues as an identity, or signature, of a square matrix makes it possible to map this concept to the different bands of an image.
The approach relies on calculating the accumulative distances between the query image and the database images, using the accumulative eigenvalues of each band. The approach is tested using different query images over different database sets, and the results are promising. Furthermore, it is compared with the ISSL and entropy approaches using different query images over a database set of 2,000 images. In addition, a shape-based retrieval system is proposed. The system is double-layered: the first layer filters the image database by colour similarity, reducing the number of candidate images that the shape retrieval technique in the second layer needs to process. The technique uses low-level image processing operations with "Dilate" as a morphological operator. A Laplacian of Gaussian (LoG) filter smooths the image and detects object edges; dilation, in turn, solidifies the objects and fills in holes; and the correlation coefficient is proposed as a new shape similarity measure. Experiments show that the approach is fast and flexible and that retrieval is highly accurate. It also overcomes the numerous problems associated with using low-level image processing operations in image retrieval.
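The filter-then-rank structure of the double-layered ISSL system can be sketched as follows. The specific statistics (per-channel mean as brightness, per-channel standard deviation as contrast) and the threshold are illustrative assumptions; the abstract does not spell out the exact per-layer features.

```python
import numpy as np

def brightness_contrast(img):
    """Per-channel mean (brightness proxy) and std (contrast proxy)."""
    img = np.asarray(img, float)
    return img.mean(axis=(0, 1)), img.std(axis=(0, 1))

def l1(a, b):
    """City-block (L1-norm) distance between feature vectors."""
    return float(np.abs(np.asarray(a, float) - np.asarray(b, float)).sum())

def two_layer_retrieve(query, database, threshold, top_k=5):
    """Layer 1: keep images whose brightness is within `threshold` (L1)
    of the query; Layer 2: rank survivors by contrast similarity."""
    qb, qc = brightness_contrast(query)
    candidates = []
    for idx, img in enumerate(database):
        b, c = brightness_contrast(img)
        if l1(b, qb) <= threshold:               # filter layer
            candidates.append((l1(c, qc), idx))  # rank layer
    candidates.sort()
    return [idx for _, idx in candidates[:top_k]]
```

The point of the first layer is that a cheap scalar comparison prunes most of the database before the (relatively) costlier second-layer ranking runs.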

    Unsupervised edge map scoring: a statistical complexity approach

    We propose a new Statistical Complexity Measure (SCM) to qualify edge maps without Ground Truth (GT) knowledge. The measure is the product of two indices: an Equilibrium index E, obtained by projecting the edge map onto a family of edge patterns, and an Entropy index H, defined as a function of the Kolmogorov-Smirnov (KS) statistic. This new measure can be used for performance characterization, including (i) the evaluation of a specific algorithm (intra-technique process) in order to identify its best parameters, and (ii) the comparison of different algorithms (inter-technique process) in order to rank them by quality. Results on images from the South Florida and Berkeley databases show that our approach significantly improves over Pratt's Figure of Merit (PFoM), the standard for objective, reference-based edge map evaluation, as it takes more features into account.
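    Only the product structure of the SCM and the use of the KS statistic are given in the abstract; the concrete definitions of E and H are not. The sketch below shows the standard one-sample KS statistic against the uniform distribution and the product form, with the indices themselves left as inputs.

    ```python
    import numpy as np

    def ks_uniform(sample):
        """One-sample Kolmogorov-Smirnov statistic against Uniform[0, 1]:
        the largest gap between the empirical CDF and the identity."""
        x = np.sort(np.asarray(sample, float))
        n = x.size
        ecdf_hi = np.arange(1, n + 1) / n  # ECDF just after each point
        ecdf_lo = np.arange(0, n) / n      # ECDF just before each point
        return float(max(np.max(ecdf_hi - x), np.max(x - ecdf_lo)))

    def scm_score(equilibrium_index, entropy_index):
        """SCM as the product of the two indices, per the abstract."""
        return equilibrium_index * entropy_index
    ```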

    A generalized entropy-based two-phase threshold algorithm for noisy medical image edge detection

    Edge detection in medical imaging is a significant task for recognizing objects such as human organs and is considered a pre-processing step in medical image segmentation and reconstruction. This article proposes an efficient approach based on generalized Hill entropy to detect edges in medical images under noisy conditions. The proposed algorithm uses two-phase thresholding: first, a global threshold calculated by means of generalized Hill entropy separates the image into object and background; then, a local threshold value is determined for each part of the image. The final edge map is a combination of these two separate images based on the three calculated thresholds. The performance of the proposed algorithm is compared to the Canny and Tsallis entropy methods on sets of medical images corrupted by various types of noise, using Pratt's Figure of Merit (PFOM) as a quantitative measure for objective comparison. Experimental results indicate that the proposed algorithm displays superior noise resilience and better edge detection than the Canny and Tsallis entropy methods for the four types of noise analyzed, making it a very interesting edge detection algorithm for noisy medical images. (c) 2017 Sharif University of Technology. All rights reserved. This work was supported in part by the Spanish Ministerio de Economia y Competitividad (MINECO) and by FEDER funds under Grant BFU2015-64380-C2-2-R.
    Elaraby, A.; Moratal, D. (2017). A generalized entropy-based two-phase threshold algorithm for noisy medical image edge detection. Scientia Iranica, 24(6):3247-3256. https://doi.org/10.24200/sci.2017.4359
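    The global-then-local structure of the two-phase algorithm can be sketched as below. Two loudly flagged substitutions: Kapur-style Shannon entropy thresholding stands in for the paper's generalized Hill entropy, and the within-part mean stands in for its local threshold rule; both are assumptions for illustration only.

    ```python
    import numpy as np

    def entropy_threshold(img, bins=256):
        """Global threshold maximizing the sum of background and object
        Shannon entropies (Kapur-style; a stand-in for generalized Hill
        entropy, which the abstract does not define)."""
        hist, _ = np.histogram(img, bins=bins, range=(0, bins))
        p = hist / hist.sum()
        best_t, best_h = 0, -np.inf
        for t in range(1, bins):
            w0, w1 = p[:t].sum(), p[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            p0, p1 = p[:t] / w0, p[t:] / w1
            h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
            h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
            if h0 + h1 > best_h:
                best_h, best_t = h0 + h1, t
        return best_t

    def two_phase_edges(img):
        """Phase 1: global entropy threshold splits object/background.
        Phase 2: a local threshold (here, simply the mean) within each
        part; the two binary results are combined into one edge map."""
        t = entropy_threshold(img)
        obj, bg = img >= t, img < t
        t_obj = img[obj].mean() if obj.any() else t
        t_bg = img[bg].mean() if bg.any() else t
        edge_map = np.zeros(img.shape, dtype=bool)
        edge_map[obj] = img[obj] >= t_obj
        edge_map[bg] = img[bg] >= t_bg
        return edge_map
    ```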