160 research outputs found

    Semantic spaces revisited: investigating the performance of auto-annotation and semantic retrieval using semantic spaces

    No full text
    Semantic spaces encode similarity relationships between objects as a function of position in a mathematical space. This paper discusses three different formulations for building semantic spaces which allow the auto-annotation and semantic retrieval of images. The models discussed in this paper require that the image content be described as a series of visual terms, rather than as a continuous feature vector. The paper also discusses how these term-based models compare to the latest state-of-the-art continuous-feature models for auto-annotation and retrieval.
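    As a rough illustration of the term-based idea (not necessarily one of the paper's three formulations), a semantic space can be built LSA-style by decomposing an image-by-term occurrence matrix whose columns cover both visual terms and annotation keywords. All shapes, names, and data below are placeholder assumptions:

        import numpy as np

        # Hypothetical collection: rows are training images; columns are
        # visual terms followed by annotation keywords (occurrence counts).
        n_images, n_visual_terms, n_keywords = 1000, 500, 100
        rng = np.random.default_rng(0)
        X = rng.random((n_images, n_visual_terms + n_keywords))

        # Truncated SVD positions every term and keyword in a k-dim space.
        k = 50
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        term_space = Vt[:k].T                 # one row per term/keyword

        # Fold an unannotated image (visual terms only, keywords zeroed)
        # into the space and rank keywords by cosine similarity.
        q = np.concatenate([rng.random(n_visual_terms), np.zeros(n_keywords)])
        q_proj = q @ term_space               # image position in the space

        keyword_vecs = term_space[n_visual_terms:]
        sims = keyword_vecs @ q_proj / (
            np.linalg.norm(keyword_vecs, axis=1) * np.linalg.norm(q_proj) + 1e-12)
        top_keywords = np.argsort(-sims)[:5]  # best 5 candidate annotations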

    Semantic Retrieval and Automatic Annotation: Linear Transformations, Correlation and Semantic Spaces

    No full text
    This paper proposes a new technique for auto-annotation and semantic retrieval based upon the idea of linearly mapping an image feature space to a keyword space. The new technique is compared to several related techniques, and a number of salient points about each are discussed and contrasted. The paper also discusses how these techniques might scale to a real-world retrieval problem, and demonstrates this through a case study of a semantic retrieval technique applied to a real-world dataset (with a mix of annotated and unannotated images) from a picture library.
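    The core idea of a learned linear map from features to keywords can be sketched as a regularised least-squares problem. The shapes, ridge term, and variable names below are illustrative assumptions, not the paper's exact formulation:

        import numpy as np

        # Hypothetical training data: feature vectors and keyword labels.
        n_train, n_features, n_keywords = 800, 128, 60
        rng = np.random.default_rng(0)
        F = rng.random((n_train, n_features))                     # image features
        K = (rng.random((n_train, n_keywords)) > 0.9).astype(float)  # keyword labels

        # Solve min_W ||F W - K||^2 + lam ||W||^2 in closed form.
        lam = 1e-3
        W = np.linalg.solve(F.T @ F + lam * np.eye(n_features), F.T @ K)

        # Auto-annotation: map an unannotated image into keyword space
        # and keep the highest-scoring keywords.
        f_new = rng.random(n_features)
        scores = f_new @ W
        predicted = np.argsort(-scores)[:5]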

    Learning visual contexts for image annotation from Flickr groups

    Get PDF

    Saliency for Image Description and Retrieval

    Get PDF
    We live in a world where we are surrounded by ever-increasing numbers of images. More often than not, these images have very little metadata by which they can be indexed and searched. To avoid information overload, techniques need to be developed that enable these image collections to be searched by their content. Much of the previous work on image retrieval has used global features, such as colour and texture, to describe the content of the image. However, these global features are insufficient to describe the image content accurately when different parts of the image have different characteristics. This thesis first discusses how this problem can be circumvented by using salient interest regions to select the most interesting areas of the image and generating local descriptors to describe the image characteristics in those regions. It discusses a number of saliency detectors suitable for robust retrieval and compares several of these region detectors.

    The thesis then discusses how salient regions can be used for image retrieval with a number of techniques, most importantly two techniques inspired by the field of textual information retrieval. Using these robust retrieval techniques, a new paradigm in image retrieval is demonstrated, in which retrieval takes place on a mobile device using a query image captured by a built-in camera. This paradigm is demonstrated in the context of an art gallery, where the device can be used to find more information about particular pictures.

    The final chapter discusses approaches to bridging the semantic gap in image retrieval, exploring ways in which un-annotated image collections can be searched by keyword. Two techniques are discussed: the first explicitly attempts to annotate the un-annotated images automatically so that the applied annotations can be used for searching; the second does not explicitly annotate images but, through the use of linear algebra, creates a semantic space in which images and keywords are positioned so that images lie close to the keywords that represent them.
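    The text-retrieval-inspired techniques mentioned above lend themselves to a bag-of-visual-words formulation: local descriptors from salient regions are quantised against a vocabulary, and images are ranked with tf-idf weighting borrowed from text retrieval. This is a minimal sketch assuming a pre-built vocabulary; the vocabulary size, descriptor dimensionality, and data are illustrative, not the thesis's actual configuration:

        import numpy as np

        rng = np.random.default_rng(0)
        vocab = rng.standard_normal((200, 64))   # 200 visual words, 64-d descriptors

        def bovw_histogram(descriptors, vocab):
            """Assign each local descriptor to its nearest visual word."""
            d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
            words = d2.argmin(axis=1)
            return np.bincount(words, minlength=len(vocab)).astype(float)

        # Term frequencies for a small hypothetical collection of 50 images.
        tf = np.stack([bovw_histogram(rng.standard_normal((300, 64)), vocab)
                       for _ in range(50)])
        idf = np.log(len(tf) / (1.0 + (tf > 0).sum(axis=0)))
        tfidf = tf * idf

        # Rank the collection against a query image's weighted histogram.
        q = bovw_histogram(rng.standard_normal((250, 64)), vocab) * idf
        sims = tfidf @ q / (np.linalg.norm(tfidf, axis=1) * np.linalg.norm(q) + 1e-12)
        ranking = np.argsort(-sims)              # best matches first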

    Image Retrieval Using Auto Encoding Features In Deep Learning

    Get PDF
    The growth of image-capture devices and cheap storage in everyday life has produced vast and rapidly growing image databases, so images must be retrievable according to specified criteria. Techniques such as object shape, the Discrete Wavelet Transform (DWT), and texture features have been used to characterise and classify images. Segmentation also plays a vital role in image retrieval, but it often lacks robustness. Retrieval generally depends on distinctive characteristics of an image rather than on the whole image, and falls into two types: retrieval of general objects and retrieval specific to a particular application. Modern deep neural networks for unsupervised feature learning, such as the Deep Autoencoder (AE), learn embedded representations by stacking layers on top of each other. These learnt embedded representations, however, may degrade as the AE network deepens due to vanishing gradients, reducing performance. We introduce the ResNet Autoencoder (RAE) and its convolutional version (C-RAE) for unsupervised feature learning. The proposed model is tested on three databases of different sizes: Corel1K, Cifar-10, and Cifar-100. The presented algorithm significantly reduces computation time and achieves very high image retrieval accuracy.
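    A convolutional residual autoencoder of the kind the abstract describes can be sketched in PyTorch as below: identity skip connections inside the encoder and decoder blocks keep gradients flowing as the network deepens. The layer sizes and block layout are assumptions for illustration, not the paper's exact RAE/C-RAE architecture:

        import torch
        import torch.nn as nn

        class ResidualBlock(nn.Module):
            def __init__(self, ch):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(ch, ch, 3, padding=1))
            def forward(self, x):
                return torch.relu(x + self.body(x))   # identity skip connection

        class ResidualAutoencoder(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    ResidualBlock(32),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    ResidualBlock(64))
                self.decoder = nn.Sequential(
                    ResidualBlock(64),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    ResidualBlock(32),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())
            def forward(self, x):
                z = self.encoder(x)        # embedded representation for retrieval
                return self.decoder(z), z

        model = ResidualAutoencoder()
        x = torch.rand(4, 3, 32, 32)       # e.g. Cifar-10-sized images
        recon, z = model(x)
        loss = nn.functional.mse_loss(recon, x)  # unsupervised reconstruction loss

    For retrieval, the flattened embedding z would serve as the image's feature vector, compared against the database with a standard distance measure.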

    An explorative study of interface support for image searching

    Get PDF
    In this paper we study interfaces for image retrieval systems. Current image retrieval interfaces are limited to providing query facilities and result presentation. The user can inspect the results and possibly provide feedback on their relevance for the current query. Our approach, in contrast, encourages the user to group and organise their search results and thus provide more fine-grained feedback for the system. It combines the search and management process, which, according to our hypothesis, helps the user to conceptualise their search tasks and to overcome the query formulation problem. An evaluation, involving young design professionals and different types of information-seeking scenarios, shows that the proposed approach succeeds in encouraging the user to conceptualise their tasks and that it leads to increased user satisfaction. However, it could not be shown to increase performance. We identify the problems in the current setup, which when eliminated should lead to more effective searching overall.

    Bayesian learning of concept ontology for automatic image annotation

    Get PDF
    Ph.D. (Doctor of Philosophy) thesis