
    An Investigation on Text-Based Cross-Language Picture Retrieval Effectiveness through the Analysis of User Queries

    Purpose: This paper describes a study of the queries generated from a user experiment for cross-language information retrieval (CLIR) from a historic image archive. Italian-speaking users generated 618 queries for a set of known-item search tasks. The queries generated by users' interactions with the system have been analysed and the results used to suggest recommendations for the future development of cross-language retrieval systems for digital image libraries. Methodology: A controlled lab-based user study was carried out using a prototype Italian-English image retrieval system. Participants were asked to carry out searches for 16 images provided to them, a known-item search task. Users' interactions with the system were recorded and queries were analysed manually, both quantitatively and qualitatively. Findings: Results highlight the diversity in requests for similar visual content and the weaknesses of Machine Translation for query translation. Through the manual translation of queries we show the benefits of using high-quality translation resources. The results show the individual characteristics of users whilst performing known-item searches and the overlap obtained between query terms and structured image captions, highlighting users' use of search terms describing objects within the foreground of an image. Limitations and Implications: This research looks in depth at one case of interaction and one image repository. Despite this limitation, the discussed results are likely to be valid across other languages and image repositories. Value: The growing quantity of digital visual material in digital libraries offers the potential to apply techniques from CLIR to provide cross-language information access services. However, developing effective systems requires studying users' search behaviours, particularly in digital image libraries. The value of this paper is in the provision of empirical evidence to support recommendations for effective cross-language image retrieval system design.
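
    The CLIR setup the paper studies (translating Italian user queries so they can be matched against English image captions) can be pictured with a small sketch. The bilingual lexicon, captions and overlap scoring below are hypothetical placeholders for illustration only, not the paper's actual system or data; untranslated terms pass through unchanged, mirroring the translation failures the study reports.

    ```python
    # Illustrative sketch of a query-translation CLIR pipeline (not the paper's system):
    # an Italian query is translated term by term with a bilingual lexicon, then matched
    # against English image captions by term overlap.

    from collections import Counter

    IT_EN_LEXICON = {          # hypothetical high-quality translation resource
        "cane": ["dog"],
        "spiaggia": ["beach", "seaside"],
        "bambini": ["children"],
    }

    CAPTIONS = {               # hypothetical structured image captions
        "img_001": "children playing with a dog on the beach",
        "img_002": "portrait of a fisherman mending nets",
    }

    def translate_query(query_it: str) -> list[str]:
        """Translate each Italian term; unknown terms pass through untranslated."""
        terms = []
        for term in query_it.lower().split():
            terms.extend(IT_EN_LEXICON.get(term, [term]))
        return terms

    def rank_images(query_it: str) -> list[tuple[str, int]]:
        """Rank images by how many translated query terms appear in each caption."""
        query_terms = Counter(translate_query(query_it))
        scores = {
            img: sum(min(c, caption.split().count(t)) for t, c in query_terms.items())
            for img, caption in CAPTIONS.items()
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    if __name__ == "__main__":
        print(rank_images("bambini spiaggia cane"))  # img_001 should rank first
    ```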

    Concept hierarchy across languages in text-based image retrieval: a user evaluation

    The University of Sheffield participated in Interactive ImageCLEF 2005 with a comparative user evaluation of two interfaces: one displaying search results as a list, the other organizing retrieved images into a hierarchy of concepts displayed on the interface as an interactive menu. Data were analysed with respect to effectiveness (number of images retrieved), efficiency (time needed) and user satisfaction (opinions from questionnaires). Effectiveness and efficiency were calculated both at 5 minutes (the CLEF condition) and at final time. The list was marginally more effective than the menu at 5 minutes (not statistically significant), but the two were equal at final time, suggesting the menu needs more time to be used effectively. The list was more efficient at both 5 minutes and final time, although the difference was not statistically significant. Users preferred the menu (75% vs. 25% for the list), indicating it to be an interesting and engaging feature. An inspection of the logs showed that 11% of effective terms (i.e. single terms, excluding stop-words) were not translated and that a further 5% were translated incorrectly. Some of those terms were used by all participants and were fundamental to some of the tasks. Untranslated and mistranslated terms negatively affected the search, the hierarchy generation and the display of results. More work has to be carried out to test the system under different settings, e.g. using a dictionary instead of MT, which appears to be ineffective at translating users' queries, which are rarely grammatically correct. The evaluation also indicated directions for a new interface design that allows the user to check the query translation (in both input and output) and that incorporates content-based image retrieval to improve the organization of results.
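
    For intuition, here is a minimal sketch of the kind of concept-based grouping the menu interface relies on: retrieved images are bucketed under shared caption terms and the largest groups are shown first. The keyword-based "concepts" and the sample captions are invented for illustration and are far simpler than the concept hierarchy evaluated in the study.

    ```python
    # Illustrative sketch only (not the evaluated Sheffield interface): group a flat
    # result list into a one-level concept menu keyed on caption keywords.

    from collections import defaultdict

    STOPWORDS = {"a", "the", "of", "on", "in", "with", "and"}

    def build_concept_menu(results: list[tuple[str, str]]) -> dict[str, list[str]]:
        """Map each non-stopword caption term (a stand-in 'concept') to the images
        whose captions contain it, so an interface could render it as a menu."""
        menu: dict[str, list[str]] = defaultdict(list)
        for image_id, caption in results:
            for term in caption.lower().split():
                if term not in STOPWORDS:
                    menu[term].append(image_id)
        # Show the largest concept groups first, as a menu would.
        return dict(sorted(menu.items(), key=lambda kv: len(kv[1]), reverse=True))

    results = [
        ("img_001", "children playing on the beach"),
        ("img_002", "fishing boats on the beach"),
        ("img_003", "children in a school yard"),
    ]
    print(build_concept_menu(results))  # 'beach' and 'children' each group two images
    ```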

    Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch

    In this work we introduce a cross-modal image retrieval system that allows both text and sketch as input modalities for the query. A cross-modal deep network architecture is formulated to jointly model the sketch and text input modalities as well as the image output modality, learning a common embedding between text and images and between sketches and images. In addition, an attention model is used to selectively focus attention on the different objects of the image, allowing for retrieval with multiple objects in the query. Experiments show that the proposed method performs best in both single- and multiple-object image retrieval on standard datasets. Comment: Accepted at ICPR 201
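
    As a rough illustration of the common-embedding idea (not the paper's actual architecture, which also handles sketch queries and uses object-level attention), the PyTorch sketch below maps bag-of-words text and precomputed image features into a shared space and trains with a triplet margin loss so matching text-image pairs end up closer than mismatched ones. All dimensions and inputs are hypothetical.

    ```python
    # Minimal two-branch cross-modal embedding sketch (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossModalEmbedder(nn.Module):
        def __init__(self, text_vocab: int = 1000, img_feat_dim: int = 2048,
                     embed_dim: int = 256):
            super().__init__()
            self.text_branch = nn.Sequential(nn.Linear(text_vocab, 512),
                                             nn.ReLU(), nn.Linear(512, embed_dim))
            self.image_branch = nn.Sequential(nn.Linear(img_feat_dim, 512),
                                              nn.ReLU(), nn.Linear(512, embed_dim))

        def forward(self, text_bow, img_feats):
            # L2-normalize so cosine similarity reduces to a dot product.
            t = F.normalize(self.text_branch(text_bow), dim=-1)
            v = F.normalize(self.image_branch(img_feats), dim=-1)
            return t, v

    model = CrossModalEmbedder()
    loss_fn = nn.TripletMarginLoss(margin=0.2)

    text = torch.rand(8, 1000)        # anchor: text queries (bag-of-words)
    pos_img = torch.rand(8, 2048)     # positive: matching image features
    neg_img = torch.rand(8, 2048)     # negative: non-matching image features

    t_emb, pos_emb = model(text, pos_img)
    _, neg_emb = model(text, neg_img)
    loss = loss_fn(t_emb, pos_emb, neg_emb)
    loss.backward()
    print(float(loss))
    ```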

    MIRACLE evaluation of results for ImageCLEF 2003

    ImageCLEF is a new pilot experiment introduced in CLEF 2003. It is devoted to the cross-language retrieval of images using textual descriptions related to image contents. This paper presents the MIRACLE research team's experiments and the results obtained for this track.

    Overview of the 2005 cross-language image retrieval track (ImageCLEF)

    The purpose of this paper is to outline efforts from the 2005 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text-based and content-based retrieval methods for cross-language image retrieval. Four tasks were offered in the ImageCLEF track: ad-hoc retrieval from a historic photographic collection, ad-hoc retrieval from a medical collection, an automatic image annotation task, and a user-centered (interactive) evaluation task that is explained in the iCLEF summary. 24 research groups from a variety of backgrounds and nationalities (14 countries) participated in ImageCLEF. In this paper we describe the ImageCLEF tasks, the submissions from participating groups, and summarise the main findings.
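
    Since the track pairs text-based and content-based retrieval, one common way to combine the two is a late fusion of per-image scores. The sketch below is purely illustrative, with hypothetical scores and a hypothetical mixing weight; it does not describe any particular ImageCLEF submission.

    ```python
    # Illustrative late-fusion sketch: linearly combine normalized text and visual scores.

    def fuse_scores(text_scores: dict[str, float],
                    visual_scores: dict[str, float],
                    alpha: float = 0.7) -> list[tuple[str, float]]:
        """Rank images by a weighted mix of min-max-normalized score lists."""
        def normalize(scores: dict[str, float]) -> dict[str, float]:
            if not scores:
                return {}
            lo, hi = min(scores.values()), max(scores.values())
            span = (hi - lo) or 1.0
            return {k: (v - lo) / span for k, v in scores.items()}

        t, v = normalize(text_scores), normalize(visual_scores)
        fused = {img: alpha * t.get(img, 0.0) + (1 - alpha) * v.get(img, 0.0)
                 for img in set(t) | set(v)}
        return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

    print(fuse_scores({"img_1": 2.1, "img_2": 0.4}, {"img_2": 0.9, "img_3": 0.5}))
    ```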