
    MIRACLE-FI at ImageCLEFphoto 2008: Experiences in merging text-based and content-based retrievals

    This paper describes the participation of the MIRACLE consortium in the ImageCLEF Photographic Retrieval task of ImageCLEF 2008. In this new participation of the group, our first purpose is to evaluate our own tools for text-based retrieval and for content-based retrieval, using different similarity metrics and the OWA aggregation operator to fuse the three topic images. Building on MIRACLE's experience from last year, we implemented a new merging module that combines the text-based and content-based information in three different ways: FILTER-N, ENRICH and TEXT-FILTER. The first two approaches try to improve the text-based baseline results using the content-based result lists. The last one was used to select the images relevant to the content-based module. No clustering strategies were analyzed. Finally, 41 runs were submitted: 1 text-based baseline, 10 content-based runs, and 30 mixed experiments merging text- and content-based results. In general, the results can be considered nearly acceptable compared with the best results of other groups. The results obtained from text-based retrieval are better than those from content-based retrieval. By merging textual and visual retrieval we improve on the text-based baseline when applying the ENRICH merging algorithm, even though the visual results are lower than the textual ones. Building on these results, we plan to try to improve the merged results with clustering methods applied to this image collection.
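
    The ENRICH merge is not specified in detail in this abstract; the following is a minimal sketch of one plausible interpretation, in which content-based scores boost images already present in the text-based list (the function name, score scale and boost factor are all assumptions, not the paper's actual algorithm):

```python
def enrich_merge(text_results, content_results, boost=0.5):
    """ENRICH-style merge (sketch): keep the text-based ranking as the
    baseline and raise the score of any image that also appears in the
    content-based list, proportionally to its visual score."""
    content_scores = dict(content_results)
    merged = []
    for img_id, score in text_results:
        if img_id in content_scores:
            score += boost * content_scores[img_id]
        merged.append((img_id, score))
    merged.sort(key=lambda pair: pair[1], reverse=True)
    return merged

# Toy example: img3 is weak textually but strong visually, so it climbs.
text = [("img1", 0.9), ("img2", 0.6), ("img3", 0.4)]
visual = [("img3", 0.8), ("img4", 0.7)]
merged = enrich_merge(text, visual)
```

    Note that, as in the paper, the text-based list stays authoritative: purely visual hits (img4) never enter the merged result, they only re-rank existing ones.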

    Unsupervised Content Based Image Retrieval by Combining Visual Features of an Image With A Threshold

    Content-based image retrieval (CBIR) uses the visual features of an image such as color, shape and texture to represent and index the image. In a typical content-based image retrieval system, a set of images that exhibit visual features similar to those of the query image is returned in response to a query. CLUE (CLUster based image rEtrieval) is a popular CBIR technique that retrieves images by clustering. In this paper, we propose a CBIR system that also retrieves images by clustering, just like CLUE; however, the proposed system combines all the features (shape, color, and texture) with a threshold for this purpose. The combination of all the features provides a robust feature set for image retrieval. We evaluated the performance of the proposed system using images of varying size and resolution from an image database and compared its performance with that of two other existing CBIR systems, namely UFM and CLUE. We used four different image resolutions. Experimentally, we find that the proposed system outperforms the other two existing systems at every resolution of image.
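
    A minimal sketch of the kind of feature combination with a threshold that the abstract describes, summing per-feature Euclidean distances and keeping only matches within the cut-off (the feature keys, weights and threshold semantics are illustrative assumptions, not the paper's actual formulation):

```python
import math

def combined_distance(query, item, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of per-feature Euclidean distances between two images,
    each image given as a dict of 'color', 'shape' and 'texture' vectors."""
    return sum(w * math.dist(query[key], item[key])
               for w, key in zip(weights, ("color", "shape", "texture")))

def retrieve(query, database, threshold):
    """Return (name, distance) pairs whose combined distance falls within
    the threshold, nearest first."""
    hits = ((name, combined_distance(query, feats))
            for name, feats in database.items())
    return sorted((h for h in hits if h[1] <= threshold), key=lambda t: t[1])
```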

    Image Clustering Based on Colour and Texture Features Using FGKA Clustering (Fast Genetics K-Means Algorithm) for Image Matching

    Large collections of digital images are being created. Usually, the only way of searching these collections was by using metadata (such as captions or keywords). This approach is not effective: it is impractical, requires a large database and gives inaccurate results. Recently, many image retrieval approaches have been developed that use image content (color, shape, and texture), better known as CBIR (Content-Based Image Retrieval). The centroids produced from clustered HSV histogram and Gabor filter features using FGKA can be used as search parameters. FGKA is a merger of the Genetic Algorithm and the K-means clustering algorithm, and always converges to a global optimum. Image clustering and matching based on combined color-texture features are better than those based on color only, texture only, or a non-clustering method. Keywords: Genetics Algorithm, K-Means Clustering, CBIR, HSV Histogram, Gabor Filter
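
    As an illustration of the colour feature involved, a hue-only HSV histogram can be sketched in a few lines (the bin count and the hue-only simplification are assumptions; the paper additionally uses Gabor texture features and FGKA clustering, omitted here):

```python
import colorsys

def hsv_histogram(pixels, bins=8):
    """Normalised hue histogram of a list of (R, G, B) pixels: a toy
    version of the HSV colour feature that the paper clusters with FGKA."""
    hist = [0] * bins
    for r, g, b in pixels:
        # colorsys returns hue in [0, 1); quantise it into `bins` buckets.
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hist[min(int(h * bins), bins - 1)] += 1
    return [count / len(pixels) for count in hist]
```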

    Real-time Multi-object Face Recognition Using Content Based Image Retrieval (CBIR)

    A real-time face recognition system is divided into several stages, namely feature extraction, clustering, detection, and recognition. Each of these stages uses a different method: Local Binary Patterns (LBP), Agglomerative Hierarchical Clustering (AHC) and Euclidean distance. Multi-face image search uses the Content Based Image Retrieval (CBIR) method, which performs the search by the image features themselves. Based on real-time trial results, the accuracy value obtained is 61.64%.
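
    The Local Binary Pattern step mentioned above can be illustrated for a single 3x3 neighbourhood (the bit ordering used here is one common convention, not necessarily the paper's):

```python
def lbp_code(patch):
    """Local Binary Pattern code for the centre of a 3x3 patch: threshold
    the 8 neighbours against the centre pixel and pack the results into
    one byte, walking clockwise from the top-left neighbour."""
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code
```

    A full LBP texture descriptor would apply this to every pixel and histogram the resulting codes.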

    Feature Selection for Image Retrieval based on Genetic Algorithm

    This paper describes the development and implementation of feature selection for content-based image retrieval. We are working on a CBIR system with a new, efficient technique. In this system, we use multi-feature extraction covering colour, texture and shape. Three techniques are used for feature extraction: colour moments, the gray-level co-occurrence matrix and the edge histogram descriptor. To reduce the curse of dimensionality and find the best optimal features from the feature set, we use feature selection based on a genetic algorithm. These features are divided into similar image classes using clustering for fast retrieval and improved execution time. Clustering is done by the k-means algorithm. The experimental results show that feature selection using the GA reduces retrieval time and also increases retrieval precision, thus giving better and faster results compared to a normal image retrieval system. The results also show the precision and recall of the proposed approach compared to the previous approach for each image class. The CBIR system is more efficient and performs better using feature selection based on the genetic algorithm.
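
    A toy version of genetic-algorithm feature selection over binary feature masks, in the spirit of the abstract (population size, selection scheme, operators and rates are all illustrative assumptions, not the paper's configuration):

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, generations=30, rng=None):
    """Minimal GA over binary feature masks: higher `fitness` is better.
    Uses truncation selection, one-point crossover and bit-flip mutation."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)            # pick two elite parents
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                 # occasional bit flip
                i = rng.randrange(n_features)
                child[i] ^= 1
            children.append(child)
        pop = elite + children                     # elitist replacement
    return max(pop, key=fitness)
```

    In the paper the fitness of a mask would be the retrieval quality achieved using only the selected features; any cheap surrogate score plugs into the same loop.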

    Content Based Image Retrieval by Preprocessing Image Database

    Increases in communication bandwidth, information content and the size of multimedia databases have given rise to the concept of Content Based Image Retrieval (CBIR). Content based image retrieval is a technique that enables a user to extract similar images, based on a query, from a database containing a large number of images. A basic issue in designing a content based image retrieval system is selecting the image features that best represent image content in a database. Current research in this area focuses on improving image retrieval accuracy. In this work, we have presented an efficient system for content based image retrieval. The system exploits multiple features such as color, edge density, boolean edge density and histogram information. Existing methods concentrate on relevance feedback techniques to improve the count of similar images related to a query from the raw image database. In this thesis, we propose a different strategy called preprocessing the image database, using k-means clustering and a genetic algorithm, which further helps to improve image retrieval accuracy. This is achieved by taking a multiple feature set, a clustering algorithm and a fitness function for the genetic algorithm. Preprocessing the image database aims to cluster similar images as homogeneously as possible and separate dissimilar images as heterogeneously as possible. The main aim of this work is to find the images that are most similar to the query image, and a new method is proposed for preprocessing the image database via a genetic algorithm for an improved content based image retrieval system. The accuracy of our approach is presented using performance metrics called the confusion matrix, precision graph and F-measures. The clustering purity in more than half of the clusters has been above 90 percent.
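
    The k-means preprocessing step can be sketched as plain Lloyd iterations over feature vectors (initialisation strategy and iteration count here are illustrative choices; the paper additionally refines clusters with a genetic algorithm, which is omitted):

```python
import math
import random

def kmeans(points, k, iters=20, rng=None):
    """Plain k-means: the preprocessing that groups similar feature
    vectors so a query only needs to search its nearest cluster."""
    rng = rng or random.Random(0)
    centroids = rng.sample(points, k)              # random initial centres
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                           # assignment step
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [                              # update step
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters
```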

    Content Based Image Retrieval Using Colour, Texture and KNN

    Image retrieval is increasingly becoming an interesting field of research as the images that users store and process keep rising both in number and size, especially in digital databases. The images are stored on the portable devices which users have used to capture them. The aim of this research is to solve the issues experienced by users in the retrieval of digital images stored on their devices, ensuring that requested images are retrieved accurately from storage. The images are pre-processed to remove noise and refocused to enhance image content. The image retrieval is based on the content (Content Based Image Retrieval), where images are matched in a database based on the subject of the image. In this paper, the Corel image database is used with image pre-processing to ensure that image subjects are enhanced. Images are placed in classes and retrieved based on the user's input. The Euclidean distance method is used to determine the nearest objects, thus resulting in the least number of images retrieved by the system. Colour and texture features are used to generate the feature matrices on which the image comparison is made. For the KNN algorithm, different values of K are tested to determine the best value for different classes of images. The performance of the design is compared to the MATLAB image retrieval system using the same image data set. The results obtained show that the combination of colour, texture and KNN in image retrieval results in shorter computation time compared to the performance of the individual methods. Keywords: Image retrieval, KNN, clustering, image processing
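
    A minimal sketch of the Euclidean-distance KNN step described above (the feature vectors here are stand-ins for the colour/texture feature matrices the paper computes):

```python
import math
from collections import Counter

def knn_classify(query_vec, labelled_feats, k=3):
    """k-nearest-neighbour class vote: rank the database by Euclidean
    distance to the query vector and take a majority vote among the
    k closest entries. `labelled_feats` is a list of (vector, label)."""
    ranked = sorted(labelled_feats,
                    key=lambda item: math.dist(query_vec, item[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

    Sweeping k, as the paper does, just means calling this with different values and comparing accuracy per image class.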

    Neural network-based shape retrieval using moment invariants and Zernike moments.

    Shape is one of the fundamental image features for use in Content-Based Image Retrieval (CBIR). Compared with other visual features such as color and texture, it is extremely powerful and provides capability for object recognition and similarity-based image retrieval. In this thesis, we propose a neural network-based shape retrieval system using moment invariants and Zernike moments. Moment invariants and Zernike moments are two region-based shape representation schemes; they are derived from the shape in an image and serve as image features. k-means clustering is used to group similar images in an image collection into k clusters, whereas a neural network is used to facilitate retrieval against a given query image. The neural network is trained on the clustering result over all of the images in the collection using the back-propagation algorithm. In this scheme, the neural network serves as a classifier such that moments are the inputs to the network and the output is the one of the k classes that has the largest similarity to the query image. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .C444. Source: Masters Abstracts International, Volume: 44-03, page: 1396. Thesis (M.Sc.)--University of Windsor (Canada), 2005
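
    As an illustration of the moment features used as network inputs, the first Hu moment invariant, phi1 = eta20 + eta02, can be computed directly from pixel intensities (a simplified stand-in for the thesis's full moment-invariant and Zernike feature set):

```python
def hu_first_invariant(image):
    """First Hu moment invariant phi1 = eta20 + eta02 of a 2-D intensity
    grid. Central moments make it translation-invariant; dividing by
    m00**2 (the normalisation for order p+q = 2) makes it scale-invariant."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00          # shape centroid
    mu20 = mu02 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v       # central second-order moments
            mu02 += (y - cy) ** 2 * v
    return (mu20 + mu02) / m00 ** 2
```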

    Multi modal multi-semantic image retrieval

    The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users' ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in order to attempt to extract knowledge from these images, enhancing retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared that supports multi-semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to 'unannotated' images. Local feature analysis of visual content, namely using Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the 'Bag of Visual Words' model (BVW) as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based upon the use of an unstructured visual word model and upon a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content, compared to a vector space model, through exploiting local conceptual structures and their relationships.
The key contributions of this framework in using local features for image representation include: first, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes term weights and the spatial locations of keypoints into account; consequently, the semantic information is preserved. Second, a technique is used to detect the domain-specific 'non-informative visual words' which are ineffective at representing the content of visual data and degrade its categorisation ability. Third, a method to combine an ontology model with a visual word model to resolve synonym (visual heterogeneity) and polysemy problems is proposed. The experimental results show that this approach can discover semantically meaningful visual content descriptions and recognise specific events, e.g. sports events, depicted in images efficiently. Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhance visual content interpretation is to use any associated textual information that accompanies an image as a cue to predict the meaning of the image, by transforming this textual information into a structured annotation, e.g. using XML, RDF, OWL or MPEG-7. Although text and images are distinct types of information representation and modality, there are some strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is exploited first in order to extract concepts from image captions. Next, an ontology-based knowledge model is deployed in order to resolve natural language ambiguities. To deal with the accompanying text information, two methods to extract knowledge from textual information have been proposed.
First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of LSI in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) of metadata. The use of the ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage these to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and leads to a narrowing of the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisation.
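
    The Bag of Visual Words representation that underpins the framework can be sketched as nearest-word quantisation of local descriptors (the toy 2-D descriptors below stand in for real 128-D SIFT vectors, and the vocabulary would in practice come from clustering, e.g. the SLAC algorithm described in the thesis):

```python
import math

def bovw_histogram(descriptors, vocabulary):
    """Bag-of-Visual-Words: assign each local descriptor (e.g. a SIFT
    vector) to its nearest vocabulary word and return the normalised
    word-count histogram used as the image representation."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda i: math.dist(d, vocabulary[i]))
        hist[nearest] += 1
    total = sum(hist) or 1
    return [count / total for count in hist]
```

    Two images can then be compared by any vector distance between their histograms, which is exactly where the thesis's hybrid ontology model steps in to disambiguate the otherwise unstructured words.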