
    User experiments with the Eurovision cross-language image retrieval system

    In this paper we present Eurovision, a text-based system for cross-language (CL) image retrieval. The system is evaluated by multilingual users on two search tasks, with the system configured in English and five other languages. To our knowledge this is the first published set of user experiments for CL image retrieval. We show that: (1) it is possible to create a usable multilingual search engine using little knowledge of any language other than English; (2) categorizing images assists the user's search; and (3) users search differently across the two search tasks. Based on the two search tasks and user feedback, we describe important aspects of any CL image retrieval system.

    RBIR Based on Signature Graph

    This paper approaches image retrieval based on local visual feature regions, i.e. region-based image retrieval (RBIR). First, the paper presents a method for extracting interest points with the Harris-Laplace detector to create the feature regions of an image. Next, in order to reduce storage space and speed up image queries, the paper builds a binary signature structure to describe the visual content of an image. On top of these binary signatures, the paper builds a signature graph (SG) to classify and store them. The paper then builds an image retrieval algorithm on the SG, using the earth mover's distance (EMD) as the similarity measure between binary signatures. Finally, the paper presents an RBIR retrieval model and experimentally assesses the method on a Corel database of over 10,000 images.
    Comment: 4 pages, 4 figures
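    To make the matching step concrete: the abstract's retrieval algorithm compares binary signatures with the earth mover's distance. The sketch below is a toy illustration, not the paper's SG-based method; treating each signature's set bits as unit masses on a one-dimensional bin axis (an assumption) lets the 1-D EMD from scipy stand in for the paper's similarity measure.

```python
# Toy EMD comparison of binary image signatures (a sketch, not the
# paper's signature-graph algorithm). Each signature is a fixed-length
# 0/1 vector whose set bits mark active visual-feature bins.
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D earth mover's distance

def emd_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """EMD between two binary signatures, with each set bit treated
    as a unit mass located at its bin index (an assumed encoding)."""
    pos_a = np.flatnonzero(sig_a)
    pos_b = np.flatnonzero(sig_b)
    if pos_a.size == 0 or pos_b.size == 0:
        return float("inf")  # empty signature: treat as maximally dissimilar
    return wasserstein_distance(pos_a, pos_b)

def rank_by_emd(query_sig: np.ndarray, database: dict) -> list:
    """Return image ids sorted from most to least similar to the query."""
    return sorted(database, key=lambda img_id: emd_distance(query_sig, database[img_id]))

# Usage with made-up 8-bin signatures:
db = {
    "img_a": np.array([1, 0, 1, 0, 0, 0, 1, 0]),
    "img_b": np.array([0, 0, 0, 1, 1, 0, 0, 1]),
}
print(rank_by_emd(np.array([1, 0, 1, 0, 0, 0, 0, 1]), db))
```

    In the paper the signatures are organized in a signature graph so that most database images are never compared directly; the linear scan above is only for illustration.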

    Semantic Image Retrieval via Active Grounding of Visual Situations

    We describe a novel architecture for semantic image retrieval, in particular retrieval of instances of visual situations. Visual situations are concepts such as "a boxing match," "walking the dog," "a crowd waiting for a bus," or "a game of ping-pong," whose instantiations in images are linked more by their common spatial and semantic structure than by low-level visual similarity. Given a query situation description, our architecture, called Situate, learns models capturing the visual features of expected objects as well as the expected spatial configuration of relationships among objects. Given a new image, Situate uses these models in an attempt to ground (i.e., to create a bounding box locating) each expected component of the situation in the image via an active search procedure. Situate uses the resulting grounding to compute a score indicating the degree to which the new image is judged to contain an instance of the situation. Such scores can be used to rank images in a collection as part of a retrieval system. In the preliminary study described here, we demonstrate the promise of this system by comparing Situate's performance with that of two baseline methods, as well as with a related semantic image-retrieval system based on "scene graphs."
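    The scoring idea, grounding each expected component and combining the grounding scores, can be illustrated with a short sketch. Everything below is an assumption for illustration: the component models, the exhaustive candidate-box search standing in for Situate's active search, and the product used to combine scores are not details from the paper.

```python
# Hypothetical sketch of situation scoring: ground each expected
# component of the situation in the image, then combine per-component
# grounding scores into a single retrieval score.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class ComponentModel:
    name: str
    score_box: Callable[[Box], float]  # stand-in for a learned appearance/location model

def ground(candidate_boxes: List[Box], model: ComponentModel) -> float:
    """Stand-in for active search: score every candidate box for this
    component and keep the best one."""
    return max(model.score_box(b) for b in candidate_boxes)

def situation_score(candidate_boxes: List[Box], models: List[ComponentModel]) -> float:
    """Combine per-component grounding scores; a product is one simple
    choice, so one missing component drags the whole score down."""
    score = 1.0
    for m in models:
        score *= ground(candidate_boxes, m)
    return score
```

    Ranking a collection then amounts to computing situation_score for each image's candidate boxes and sorting in descending order.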

    An image retrieval system based on explicit and implicit feedback on a tablet computer

    Our research aims at developing an image retrieval system that uses relevance feedback to build a hybrid search/recommendation system for images according to users' interests. An image retrieval application running on a tablet computer gathers explicit feedback through the touchscreen, and also uses multiple sensing technologies to gather implicit feedback such as emotion and action. A recommendation mechanism driven by collaborative filtering is implemented to verify our interaction design.
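    Since the abstract names collaborative filtering as the recommendation mechanism, a minimal user-based variant is sketched below. The feedback matrices, the blending weight for explicit versus implicit signals, and cosine similarity are all illustrative assumptions, not details from the paper.

```python
# Minimal user-based collaborative filtering over blended feedback
# (a sketch under an assumed data layout, not the paper's implementation).
import numpy as np

def blend_feedback(explicit: np.ndarray, implicit: np.ndarray, w: float = 0.7) -> np.ndarray:
    """Blend two users-by-images feedback matrices; w weights the
    explicit touchscreen signal over the implicit sensor signal."""
    return w * explicit + (1.0 - w) * implicit

def recommend(feedback: np.ndarray, user: int, k: int = 5) -> np.ndarray:
    """Score images for `user` by similarity-weighted votes of other users."""
    unit = feedback / (np.linalg.norm(feedback, axis=1, keepdims=True) + 1e-9)
    sims = unit @ unit[user]               # cosine similarity to every user
    sims[user] = 0.0                       # ignore the user's own row
    scores = sims @ feedback               # weighted sum of others' feedback
    scores[feedback[user] > 0] = -np.inf   # skip images already rated
    return np.argsort(scores)[::-1][:k]    # top-k image indices
```

    With real data, the explicit matrix would come from touchscreen interactions and the implicit matrix from the emotion and action sensing the abstract describes.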