
    Overview of the 2005 cross-language image retrieval track (ImageCLEF)

    The purpose of this paper is to outline efforts from the 2005 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text and content-based retrieval methods for cross-language image retrieval. Four tasks were offered in the ImageCLEF track: ad-hoc retrieval from a historic photographic collection, ad-hoc retrieval from a medical collection, an automatic image annotation task, and a user-centered (interactive) evaluation task that is explained in the iCLEF summary. Twenty-four research groups from a variety of backgrounds and nationalities (14 countries) participated in ImageCLEF. In this paper we describe the ImageCLEF tasks, summarise the submissions from participating groups, and present the main findings.

    Overview of the ImageCLEF 2006 Photographic Retrieval and Object Annotation Tasks.

    This paper describes the general photographic retrieval and object annotation tasks of the ImageCLEF 2006 evaluation campaign. These tasks provided both the resources and the framework necessary to perform comparative laboratory-style evaluation of visual information systems for image retrieval and automatic image annotation. Both tasks offered something new for 2006 and attracted a large number of submissions: 12 groups participated in ImageCLEFphoto and 3 groups in the automatic annotation task. This paper summarises these two tasks, including the collections used in the benchmark, the tasks proposed, the submissions from participating groups, and the main findings.

    Patch-level spatial layout for classification and weakly supervised localization

    We propose a discriminative patch-level spatial layout model suitable for training with weak supervision. We start from a block-sparse model of patch appearance based on the normalized Fisher vector representation. The appearance model is responsible for (i) selecting a discriminative subset of visual words, and (ii) identifying distinctive patches assigned to the selected subset. These patches are further filtered by a sparse spatial model operating on a novel representation of pairwise patch layout. We have evaluated the proposed pipeline in image classification and weakly supervised localization experiments on a public traffic sign dataset. The results show a significant advantage of the proposed spatial model over state-of-the-art appearance models.
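
    As a rough illustration of the patch selection and pairwise layout ideas in this abstract, the sketch below keeps only the patches whose visual word received a non-zero block-sparse weight and then computes simple pairwise displacements between the kept patches. The function names, toy data, and the plain displacement representation are our own assumptions, not the authors' code or their novel layout representation.

        import numpy as np

        def select_discriminative_patches(patch_words, word_weights, eps=1e-6):
            # keep patches assigned to visual words with a non-zero (block-sparse) weight
            return np.flatnonzero(np.abs(word_weights[patch_words]) > eps)

        def pairwise_layout(positions):
            # relative displacement (dx, dy) for every unordered pair of selected patches
            diffs = positions[:, None, :] - positions[None, :, :]
            iu = np.triu_indices(len(positions), k=1)
            return diffs[iu]

        # toy example: 5 patches, 3 visual words, only word 1 survives the sparse model
        patch_words  = np.array([0, 1, 1, 2, 1])
        word_weights = np.array([0.0, 0.8, 0.0])
        positions    = np.array([[0.1, 0.2], [0.4, 0.4], [0.5, 0.1], [0.9, 0.9], [0.3, 0.7]])

        kept = select_discriminative_patches(patch_words, word_weights)
        print(kept, pairwise_layout(positions[kept]).shape)   # [1 2 4] (3, 2)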

    Image retrieval and annotation using maximum entropy

    We present and discuss our participation in the four tasks of the ImageCLEF 2006 evaluation. In particular, we present a novel approach to learning feature weights in our content-based image retrieval system FIRE. Given a set of training images with known relevance among each other, the retrieval task is reformulated as a classification task and the weights for combining a set of features are trained discriminatively using the maximum entropy framework. Experimental results for the medical retrieval task show large improvements over heuristically chosen weights. Furthermore, the maximum entropy approach is used for the automatic image annotation tasks in combination with a part-based object model. Using our object classification methods, we obtained the best results in both the medical and the object annotation tasks.
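
    The following minimal sketch illustrates the general idea of learning feature-combination weights by recasting retrieval as a relevant/non-relevant classification problem. Here scikit-learn's logistic regression stands in for the maximum entropy classifier, and the feature names and toy data are purely illustrative, not taken from FIRE.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # each row: per-feature distances for one (query, candidate) pair,
        # e.g. [colour histogram, Tamura texture, global layout]; label 1 = relevant
        pair_distances = np.array([[0.1, 0.2, 0.3],
                                   [0.8, 0.9, 0.7],
                                   [0.2, 0.1, 0.4],
                                   [0.9, 0.7, 0.8]])
        relevant = np.array([1, 0, 1, 0])

        clf = LogisticRegression().fit(pair_distances, relevant)
        weights = -clf.coef_[0]          # larger distance lowers relevance, so negate

        def combined_distance(dists, weights):
            # weighted sum of per-feature distances used to rank database images
            return float(np.dot(dists, weights))

        print(weights, combined_distance([0.3, 0.2, 0.5], weights))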

    Edge Boxes: Locating Object Proposals from Edges


    Medical image retrieval using texture, locality and colour

    We describe our experiments for the ImageCLEF medical retrieval task. Our efforts were focused on the initial visual search, and a content-based approach was followed. We used texture, localisation and colour features that had proven effective in previous experiments. The images in the collection had specific characteristics: medical images have a formulaic composition for each modality and anatomic region, so we were able to choose features that would perform well in this domain. Tiling a Gabor texture feature to add localisation information proved to be particularly effective. The distances from each feature were combined with equal weighting, which smoothed the performance across the queries. The retrieval results showed that this simple approach was successful, with our system coming third in the automatic retrieval task.
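
    A rough sketch of the tiling idea mentioned above, using scikit-image's Gabor filter: the image is split into a grid and a mean filter-response energy is stored per tile, so the texture descriptor also encodes where in the image each response occurred. The grid size, frequencies and orientations below are illustrative choices, not the parameters used in the paper.

        import numpy as np
        from skimage.filters import gabor

        def tiled_gabor_feature(image, grid=4, frequencies=(0.1, 0.25), thetas=(0.0, np.pi / 2)):
            # concatenate mean Gabor-response energies over a grid x grid tiling
            h, w = image.shape
            feats = []
            for i in range(grid):
                for j in range(grid):
                    tile = image[i * h // grid:(i + 1) * h // grid,
                                 j * w // grid:(j + 1) * w // grid]
                    for f in frequencies:
                        for theta in thetas:
                            real, imag = gabor(tile, frequency=f, theta=theta)
                            feats.append(np.mean(real ** 2 + imag ** 2))
            return np.array(feats)

        def combined_distance(query_feats, image_feats):
            # equal-weight combination of the per-feature distances, as in the abstract
            return sum(np.linalg.norm(q - v) for q, v in zip(query_feats, image_feats)) / len(query_feats)

        feat = tiled_gabor_feature(np.random.rand(64, 64))
        print(feat.shape)   # 4 * 4 tiles * 2 frequencies * 2 orientations = (64,)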

    Overview of the ImageCLEFmed 2006 Medical Retrieval and Medical Annotation Tasks.

    This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups (100 runs). Most runs were automatic, with only a few manual or interactive. Purely textual runs were in the majority compared to purely visual runs, but most runs were mixed, using both visual and textual information. None of the manual or interactive techniques were significantly better than the automatic runs. The best-performing systems combined visual and textual techniques, but combinations of visual and textual features often did not improve performance. Purely visual systems only performed well on visual topics. The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which demonstrates a clear improvement in performance. The best system of 2005 would only have placed mid-field in 2006.