
    Improving content based image retrieval by identifying least and most correlated visual words

    In this paper, we propose a model for direct incorporation of image content into a (short-term) user profile based on correlations between visual words and adaptation of the similarity measure. The relationships between visual words at different contextual levels are explored. We introduce and compare various notions of correlation, which in general we will refer to as image-level and proximity-based. The information about the most and the least correlated visual words can be exploited in order to adapt the similarity measure. The evaluation, preceding an experiment involving real users (future work), is performed within the Pseudo Relevance Feedback framework. We test our new method on three large data collections, namely MIRFlickr, ImageCLEF, and a collection from the British Geological Survey (BGS). The proposed model is computationally cheap and scalable to large image collections.
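
    As a minimal sketch of the image-level notion of correlation described above, the following counts how often pairs of visual words co-occur in the same bag-of-visual-words representation and then uses the most correlated words to adapt a query's term weights. The Jaccard-style normalisation, the function names and the parameters are illustrative assumptions, not the paper's exact definitions.

    ```python
    import numpy as np

    def image_level_correlation(bows):
        # bows: (n_images, vocab_size) matrix of visual-word counts per image
        presence = (bows > 0).astype(float)
        co = presence.T @ presence                  # images containing both words
        freq = presence.sum(axis=0)                 # images containing each word
        union = freq[:, None] + freq[None, :] - co  # images containing either word
        with np.errstate(divide="ignore", invalid="ignore"):
            corr = np.where(union > 0, co / union, 0.0)
        np.fill_diagonal(corr, 0.0)                 # ignore self-correlation
        return corr

    def adapt_query(query_weights, corr, top_m=5, alpha=0.5):
        # Boost the weights of the words most correlated with each query word,
        # one possible pseudo-relevance-feedback style adaptation.
        expanded = query_weights.astype(float).copy()
        for w in np.nonzero(query_weights)[0]:
            partners = np.argsort(corr[w])[::-1][:top_m]
            expanded[partners] += alpha * query_weights[w] * corr[w, partners]
        return expanded
    ```

    A proximity-based variant would replace the image-level presence matrix with counts of visual-word pairs falling within a spatial neighbourhood inside each image.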

    Automatic tagging and geotagging in video collections and communities

    Automatically generated tags and geotags hold great promise to improve access to video collections and online communities. We give an overview of the three tasks offered in the MediaEval 2010 benchmarking initiative, describing for each its use scenario, its definition and the data set released. For each task, a reference algorithm that was used within MediaEval 2010 is presented, together with comments on lessons learned. The Tagging Task (Professional) involves automatically matching episodes in a collection of Dutch television with subject labels drawn from the keyword thesaurus used by the archive staff. The Tagging Task (Wild Wild Web) involves automatically predicting the tags that users assign to their online videos. Finally, the Placing Task requires automatically assigning geo-coordinates to videos. The specification of each task admits the use of the full range of available information, including user-generated metadata, speech recognition transcripts, audio, and visual features.

    Information extraction from multimedia web documents: an open-source platform and testbed

    The LivingKnowledge project aimed to enhance the current state of the art in search, retrieval and knowledge management on the web by advancing the use of sentiment and opinion analysis within multimedia applications. To achieve this aim, a diverse set of novel and complementary analysis techniques has been integrated into a single, extensible software platform on which such applications can be built. The platform combines state-of-the-art techniques for extracting facts, opinions and sentiment from multimedia documents and, unlike earlier platforms, exploits both visual and textual techniques to support multimedia information retrieval. Foreseeing the usefulness of this software to the wider community, the platform has been made generally available as an open-source project. This paper describes the platform design, gives an overview of the analysis algorithms integrated into the system and describes two applications that utilise the system for multimedia information retrieval.

    Concept hierarchy across languages in text-based image retrieval: a user evaluation

    The University of Sheffield participated in Interactive ImageCLEF 2005 with a comparative user evaluation of two interfaces: one displaying search results as a list, the other organizing retrieved images into a hierarchy of concepts displayed on the interface as an interactive menu. Data was analysed with respect to effectiveness (number of images retrieved), efficiency (time needed) and user satisfaction (opinions from questionnaires). Effectiveness and efficiency were calculated both at 5 minutes (CLEF condition) and at final time. The list was marginally more effective than the menu at 5 minutes (not statistically significant), but the two were equal at final time, showing that the menu needs more time to be used effectively. The list was more efficient at both 5 minutes and final time, although the difference was not statistically significant. Users preferred the menu (75% vs. 25% for the list), indicating it to be an interesting and engaging feature. An inspection of the logs showed that 11% of effective terms (i.e. single terms, excluding stop-words) were not translated and that another 5% were mistranslated. Some of these terms were used by all participants and were fundamental to some of the tasks. Untranslated and mistranslated terms negatively affected the search, the hierarchy generation and the display of results. More work has to be carried out to test the system under different settings, e.g. using a dictionary instead of machine translation, which appears ineffective at translating users' queries, as these are rarely grammatically correct. The evaluation also indicated directions for a new interface design that allows the user to check the query translation (in both input and output) and that incorporates visual content-based image retrieval to improve result organization.

    Bridging Cross-Modal Alignment for OCR-Free Content Retrieval in Scanned Historical Documents

    In this work, we address the limitations of current approaches to document retrieval by incorporating vision-based topic extraction. While previous methods have primarily focused on visual elements or relied on optical character recognition (OCR) for text extraction, we propose a paradigm shift by directly incorporating vision into the topic space. We demonstrate that recognizing all visual elements within a document is unnecessary for identifying its underlying topic: visual cues such as icons, writing style, and font can serve as sufficient indicators. By leveraging ranking loss functions and convolutional neural networks (CNNs), we learn complex topological representations that mimic the behavior of text representations. Our approach aims to eliminate the need for OCR and its associated challenges, including efficiency and performance issues, data hunger, and expensive annotation. Furthermore, we highlight the significance of incorporating vision in historical documentation, where visually antiquated documents contain valuable cues. Our research contributes to the understanding of topic extraction from a vision perspective and offers insights into annotation-cheap document retrieval systems.
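
    As a hedged sketch of how a ranking loss can align CNN image embeddings with text-topic embeddings, the snippet below uses a standard bidirectional triplet-style objective over cosine similarities. The margin value, the scoring choice and all identifiers are assumptions for illustration, not the paper's exact training recipe.

    ```python
    import torch
    import torch.nn.functional as F

    def ranking_loss(img_emb, txt_emb, margin=0.2):
        # img_emb, txt_emb: (batch, dim) embeddings of matching image/text pairs,
        # e.g. CNN features of a page and a topic embedding of its transcription.
        img = F.normalize(img_emb, dim=1)
        txt = F.normalize(txt_emb, dim=1)
        scores = img @ txt.t()                 # pairwise cosine similarities
        pos = scores.diag().unsqueeze(1)       # similarity of the true pairs
        cost_img = (margin + scores - pos).clamp(min=0)      # image -> wrong text
        cost_txt = (margin + scores - pos.t()).clamp(min=0)  # text -> wrong image
        mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
        cost_img = cost_img.masked_fill(mask, 0)
        cost_txt = cost_txt.masked_fill(mask, 0)
        return cost_img.mean() + cost_txt.mean()
    ```

    Minimising this loss pulls each document image toward its own topic representation and pushes it away from the others, which is the sense in which the visual embeddings come to mimic the text representations without any OCR at query time.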