35,976 research outputs found

    Unsupervised Graph-based Rank Aggregation for Improved Retrieval

    This paper presents a robust and comprehensive graph-based rank aggregation approach for combining the results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme that is independent of how the isolated ranks are formulated, so it can combine arbitrary models defined in terms of different ranking criteria, such as those based on textual, image, or hybrid content representations. We reformulate the ad-hoc retrieval problem as document retrieval over fusion graphs, which we propose as a new unified representation model capable of merging multiple ranks and automatically expressing the inter-relationships of retrieval results. By doing so, the retrieval system can benefit from learning the manifold structure of datasets, leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, encapsulates contextual information encoded from multiple ranks, which can be used directly for ranking without further computations or post-processing steps over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. A further benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted on diverse well-known public datasets composed of textual, image, and multimodal documents. The experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baselines and promoting large gains over the rankers being fused, thus demonstrating the capability of the proposal to represent queries through a unified graph-based model of rank fusion.
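    The abstract suggests a simple mental model: merge several ranked lists into one weighted graph per query, then compare queries and documents via the overlap of their graphs. Below is a minimal Python sketch of that idea; the edge-weighting scheme and the names fusion_graph and mcs_similarity are illustrative assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

def fusion_graph(ranked_lists, depth=10):
    """Merge several ranked lists into one weighted graph.

    Nodes are document ids; an edge (a, b) accumulates weight whenever
    a and b co-occur near the top of the same ranker's list, so the
    graph encodes inter-relationships among retrieval results.
    """
    graph = defaultdict(float)
    for ranking in ranked_lists:
        top = ranking[:depth]
        for i, a in enumerate(top):
            for j, b in enumerate(top):
                if a != b:
                    # Pairs ranked closer to the top get higher weight.
                    graph[(a, b)] += 1.0 / ((i + 1) * (j + 1))
    return graph

def mcs_similarity(g1, g2):
    """Score two fusion graphs via their edge-wise common subgraph:
    sum the minimum weight of every edge present in both graphs."""
    common = set(g1) & set(g2)
    return sum(min(g1[e], g2[e]) for e in common)

# Example: fuse a text ranker and an image ranker for one query,
# then compare against another fusion graph.
text_rank  = ["d3", "d1", "d7", "d2"]
image_rank = ["d1", "d3", "d9", "d7"]
query_graph = fusion_graph([text_rank, image_rank])
doc_graph   = fusion_graph([["d3", "d7", "d1"], ["d3", "d1", "d9"]])
print(mcs_similarity(query_graph, doc_graph))
```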

    Multimodal Classification of Urban Micro-Events

    In this paper we seek methods to effectively detect urban micro-events: events that occur in cities, have limited geographical coverage, and typically affect only a small group of citizens. Because of their scale, they are difficult to identify in most data sources; however, by using citizen sensing to gather data, detecting them becomes feasible. The data gathered by citizen sensing is often multimodal and, as a consequence, the information required to detect urban micro-events is distributed over multiple modalities, which makes a classifier capable of combining them essential. We explore several ways of creating such a classifier, including early, late, and hybrid fusion, as well as representation learning using multimodal graphs. We evaluate performance on a real-world dataset obtained from a live citizen reporting system and show that a multimodal approach yields higher performance than unimodal alternatives. Furthermore, we demonstrate that our hybrid combination of early and late fusion with multimodal embeddings performs best in classifying urban micro-events.
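    As a rough illustration of the fusion strategies the abstract compares, the following sketch contrasts early fusion (concatenate modality features, train one model) with late fusion (one model per modality, average posteriors). The random features and the logistic-regression choice are stand-ins, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_text  = rng.normal(size=(200, 50))   # stand-in for text features
X_image = rng.normal(size=(200, 64))   # stand-in for image features
y = rng.integers(0, 2, size=200)       # toy micro-event labels

# Early fusion: concatenate modalities, train a single classifier.
early = LogisticRegression(max_iter=1000).fit(
    np.hstack([X_text, X_image]), y)

# Late fusion: one classifier per modality, combine predicted
# probabilities at decision time (here: simple averaging).
clf_text  = LogisticRegression(max_iter=1000).fit(X_text, y)
clf_image = LogisticRegression(max_iter=1000).fit(X_image, y)
p_late = (clf_text.predict_proba(X_text)[:, 1] +
          clf_image.predict_proba(X_image)[:, 1]) / 2
# (Scoring the training data here only to keep the sketch short;
# a real evaluation would use a held-out split.)
```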

    Visual Information Retrieval in Endoscopic Video Archives

    In endoscopic procedures, surgeons work with live video streams from inside the patient. The main source of documentation for such procedures is still frames identified and captured from the video during surgery. However, with growing demands and technical means, the streams are now saved to storage servers, and surgeons need to retrieve parts of the videos on demand. In this submission we present a demo application for video retrieval based on visual features and late fusion, which allows surgeons to re-find shots taken during the procedure.

    Comment: Paper accepted at the IEEE/ACM 13th International Workshop on Content-Based Multimedia Indexing (CBMI) in Prague (Czech Republic), 10-12 June 2015.
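    A minimal sketch of what late-fusion shot retrieval can look like: score each archived frame under several visual features independently, normalise per feature, and average the scores. The two toy feature extractors below are assumptions standing in for the demo's actual visual features.

```python
import numpy as np

def color_hist(frame, bins=16):
    """Toy global colour histogram over an HxWx3 uint8 frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 255), density=True)
    return h

def brightness_hist(frame, bins=16):
    """Toy brightness histogram as a second, independent feature."""
    h, _ = np.histogram(frame.mean(axis=2), bins=bins,
                        range=(0, 255), density=True)
    return h

def late_fusion_scores(query, archive, extractors):
    """Rank archived frames: score under each feature separately,
    min-max normalise per feature, then average (late fusion)."""
    fused = np.zeros(len(archive))
    for fx in extractors:
        q = fx(query)
        sims = np.array([-np.linalg.norm(q - fx(f)) for f in archive])
        span = sims.max() - sims.min()
        if span > 0:
            fused += (sims - sims.min()) / span
        else:
            fused += 0.5  # all frames tie under this feature
    return fused / len(extractors)

# Usage with random stand-in frames:
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(48, 64, 3)).astype(np.uint8)
          for _ in range(5)]
scores = late_fusion_scores(frames[0], frames,
                            [color_hist, brightness_hist])
best = int(np.argmax(scores))  # index of best-matching archived frame
```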

    Query generation from multiple media examples

    This paper exploits a unified media document representation called feature terms for query generation from multiple media examples, e.g. images. A feature term refers to a value interval of a media feature, so a media document is represented by a frequency vector of feature term occurrences. This approach (1) facilitates feature accumulation from multiple examples and (2) enables the exploration of text-based retrieval models for multimedia retrieval. Three statistical criteria, minimised chi-squared, minimised AC/DC rate, and maximised entropy, are proposed to extract feature terms from a given media document collection. Two textual ranking functions, KL divergence and a BM25-like retrieval model, are adapted to estimate media document relevance. Experiments on the Corel photo collection and the TRECVid 2006 collection show the effectiveness of feature-term-based queries in image and video retrieval.
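    To make the feature-term idea concrete, the sketch below quantises each feature dimension into value intervals ("terms"), turns a document into a term-frequency vector, and scores it with a simplified BM25-style function. The uniform binning and the stripped-down BM25 (no length normalisation, uniform idf) are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def to_feature_terms(features, n_bins=8, lo=0.0, hi=1.0):
    """Map an (n_regions x n_dims) feature matrix to a term-frequency
    vector: term (d, b) fires when dimension d falls into bin b."""
    edges = np.linspace(lo, hi, n_bins + 1)
    tf = np.zeros((features.shape[1], n_bins))
    for row in features:
        bins = np.clip(np.digitize(row, edges) - 1, 0, n_bins - 1)
        for d, b in enumerate(bins):
            tf[d, b] += 1
    return tf.ravel()

def bm25_like(query_tf, doc_tf, idf, k1=1.2):
    """Simplified BM25-style score over feature terms."""
    return float(np.sum(idf * query_tf * doc_tf * (k1 + 1) /
                        (doc_tf + k1)))

rng = np.random.default_rng(0)
# Accumulating terms from multiple query examples is just addition,
# which is what makes multi-example query generation easy here.
examples = [rng.random((20, 4)) for _ in range(3)]
query_tf = sum(to_feature_terms(f) for f in examples)
doc_tf = to_feature_terms(rng.random((20, 4)))
idf = np.ones_like(doc_tf)  # uniform idf for the toy example
score = bm25_like(query_tf, doc_tf, idf)
```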

    Semantical representation and retrieval of natural photographs and medical images using concept and context-based feature spaces

    The production and distribution of image content have exploded worldwide in recent years. This creates a compelling need for innovative tools for managing and retrieving images in many applications, such as digital libraries, web image search engines, and medical decision support systems. Until now, content-based image retrieval (CBIR) has addressed the problem of finding images by automatically extracting low-level visual features, such as color, texture, and shape, with limited success. The main limitation is the large semantic gap that currently exists between the high-level semantic concepts users naturally associate with images and the low-level visual features the system relies upon. Research on retrieving images by semantic content is still in its infancy. A successful solution to bridge, or at least narrow, the semantic gap requires investigating techniques from multiple fields. In addition, specialized retrieval solutions need to emerge, each focusing on certain types of image domains, users' search requirements, and application objectives. This work is motivated by a multi-disciplinary research effort and focuses on semantic-based image search from a domain perspective, with an emphasis on natural photography and biomedical image databases. More precisely, we propose novel image representation and retrieval methods that transform low-level feature spaces into concept-based feature spaces using statistical learning techniques. To this end, we perform supervised classification to model semantic concepts and unsupervised clustering to construct a codebook of visual concepts, representing images at higher levels of abstraction for effective retrieval. Generalizing the vector space model of Information Retrieval, we also investigate automatic query expansion techniques from a new perspective to reduce the concept mismatch problem, analyzing correlation information at both local and global levels in a collection. In addition, to perform retrieval at a fully semantic level, we propose an adaptive fusion-based retrieval technique over content- and context-based feature spaces driven by relevance feedback from users. We developed a prototype image retrieval system as part of the CINDI (Concordia INdexing and DIscovery system) digital library project to perform exhaustive experimental evaluations and show the effectiveness of our retrieval approaches in both narrow and broad domains of application.
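    A minimal sketch of the codebook step described above, assuming k-means as the unsupervised clusterer: cluster pooled low-level descriptors into visual concepts, then represent each image as a normalised concept histogram. The random descriptors and the choice of k are placeholders, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for low-level descriptors pooled over the whole collection.
descriptors = rng.normal(size=(2000, 32))

# Build a codebook of k "visual concepts" via unsupervised clustering.
k = 64
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

def concept_histogram(image_descriptors):
    """Quantise an image's local descriptors against the codebook and
    return a normalised concept-frequency vector, i.e. the image's
    representation in the concept-based feature space."""
    ids = codebook.predict(image_descriptors)
    hist = np.bincount(ids, minlength=k).astype(float)
    return hist / hist.sum()

# One image -> one concept-space vector usable for retrieval.
image_vec = concept_histogram(rng.normal(size=(150, 32)))
```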