
    Discriminative cue integration for medical image annotation

    Automatic annotation of medical images is an increasingly important tool for physicians in their daily activity: hospitals produce an ever-growing amount of data, and manual annotation is costly and prone to human error. This paper proposes a multi-cue approach to automatic medical image annotation. We represent images using global and local features; these cues are then combined using three alternative approaches, all based on the Support Vector Machine algorithm. We tested our methods on the IRMA database, and with two of the three approaches proposed here we participated in the medical image annotation track of the 2007 ImageCLEFmed benchmark evaluation. These algorithms ranked first and fifth, respectively, among all submissions. Experiments using the third approach also confirm the power of cue integration for this task.
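The simplest multi-cue baseline recurring in these papers is plain concatenation of the per-cue descriptors before training a single SVM. The sketch below illustrates the idea on synthetic data with scikit-learn; the descriptors and their dimensions are illustrative, not the paper's actual features.

```python
# Multi-cue baseline: concatenate a "global" and a "local" descriptor per image
# and train one SVM on the joint vector. All data here is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 100)                                # toy binary labels
cue_global = rng.normal(size=(100, 8)) + y[:, None]        # class-shifted cue 1
cue_local = rng.normal(size=(100, 16)) + y[:, None] * 0.5  # class-shifted cue 2

X = np.hstack([cue_global, cue_local])  # one joint descriptor per image
clf = SVC().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Concatenation treats all cues as one feature space; the approaches described in the following abstracts instead keep the cues separated during classification.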

    CLEF2007 Image Annotation Task: an SVM-based Cue Integration Approach

    This paper presents the algorithms and results of our participation in the medical image annotation task of ImageCLEFmed 2007. We proposed, as a general strategy, a multi-cue approach where images are represented by both global and local descriptors, so as to capture different types of information. These cues are combined during the classification step following two alternative SVM-based strategies. The first algorithm, called the Discriminative Accumulation Scheme (DAS), trains an SVM for each feature type and takes as the output of each classifier the distance from the separating hyperplane. The final decision is taken on a linear combination of these distances: in this way cues are accumulated, so even when both are misled the final result can be correct. The second algorithm uses a new Mercer kernel that can accept different feature types as input while keeping them separated. In this way, cues are selected and weighted, for each class, in a statistically optimal fashion. We call this approach the Multi-Cue Kernel (MCK). We submitted several runs, testing the performance of the single-cue SVMs and of the two cue-integration methods. Our team was called BLOOM (BLanceflOr-tOMed.im2) after the names of our sponsors. The DAS algorithm obtained a score of 29.9, which ranked fifth among all submissions. We submitted two versions of the MCK algorithm, one using the one-vs-all multiclass extension of SVMs and the other using the one-vs-one extension. They scored 26.85 and 27.54 respectively, ranking first and second among all submissions.
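The two strategies above can be sketched in a few lines with scikit-learn. This is a minimal illustration on synthetic data, not the paper's implementation: the cue weights are arbitrary, and a fixed RBF kernel stands in for whatever per-cue kernels the authors actually used.

```python
# Sketch of the two cue-integration strategies described above.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Two synthetic "cues" (e.g. a global and a local descriptor) per image.
n = 200
y = rng.integers(0, 2, n)
cue_global = rng.normal(size=(n, 8)) + y[:, None] * 0.8
cue_local = rng.normal(size=(n, 16)) + y[:, None] * 0.5

# --- DAS: one SVM per cue, decisions fused as a weighted sum of signed
# distances from each separating hyperplane.
svm_g = SVC(kernel="rbf").fit(cue_global, y)
svm_l = SVC(kernel="rbf").fit(cue_local, y)

def das_predict(xg, xl, w=(0.6, 0.4)):
    """Accumulate signed hyperplane distances across cues, then threshold."""
    score = w[0] * svm_g.decision_function(xg) + w[1] * svm_l.decision_function(xl)
    return (score > 0).astype(int)

print("DAS training accuracy:", (das_predict(cue_global, cue_local) == y).mean())

# --- MCK: a weighted sum of per-cue Mercer kernels is itself a valid Mercer
# kernel, so it can be fed to an SVM as a precomputed Gram matrix.
K = 0.6 * rbf_kernel(cue_global) + 0.4 * rbf_kernel(cue_local)
svm_mck = SVC(kernel="precomputed").fit(K, y)
print("MCK training accuracy:", svm_mck.score(K, y))
```

The key design difference: DAS fuses cues *after* classification (at the decision level), while MCK fuses them *inside* the kernel, letting the SVM weight the cues jointly during training.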

    CLEF2008 Image Annotation Task: an SVM Confidence-Based Approach

    This paper presents the algorithms and results of our participation in the medical image annotation task of ImageCLEFmed 2008. Our experience in the same task in 2007 suggests that combining multiple cues with different SVM-based approaches is very effective in this domain, and that local features are the most discriminative cues for the problem at hand. On this basis we decided to integrate two different local structural and textural descriptors. Cues are combined through simple concatenation of the feature vectors and through the Multi-Cue Kernel. The trickiest part of the challenge this year was annotating images coming mainly from classes with only a few examples in the training set. We tackled the problem on two fronts: (1) we introduced a further integration strategy using the SVM as an opinion maker, which combines the first two opinions on the basis of a technique for evaluating the confidence of the classifier's decisions; this approach produces class labels with "don't know" wildcards placed where appropriate; (2) we enriched the poorly populated training classes by adding virtual examples generated by slightly modifying the original images. We submitted several runs considering different combinations of the proposed techniques. Our team was called "idiap". The run jointly using the low cue-integration technique, the confidence-based opinion fusion and the virtual examples scored 74.92, ranking first among all submissions.
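One common way to realize confidence-based abstention with an SVM is to compare the top two decision values and reject when their margin is small. The sketch below uses that generic mechanism on synthetic data; the margin threshold and data are toy values, and the paper's exact confidence measure may differ.

```python
# Confidence-based opinion fusion, sketched as margin-based rejection:
# if the gap between the best and second-best one-vs-rest decision values
# falls below a threshold, output a "don't know" wildcard instead of a label.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
centers = rng.normal(scale=2.0, size=(3, 10))      # 3 toy classes in 10-D
y = rng.integers(0, 3, 300)
X = centers[y] + rng.normal(size=(300, 10))

svm = SVC(decision_function_shape="ovr").fit(X, y)

def predict_with_reject(x, tau=0.2):
    """Return class labels, or "don't know" where the top-2 margin < tau."""
    d = svm.decision_function(x)                   # one score per class
    order = np.argsort(d, axis=1)
    best, second = order[:, -1], order[:, -2]
    idx = np.arange(len(x))
    margin = d[idx, best] - d[idx, second]
    out = best.astype(object)
    out[margin < tau] = "don't know"
    return out

labels = predict_with_reject(X)
print("abstention rate:", np.mean(labels == "don't know"))
```

Abstaining on low-margin cases trades coverage for precision, which suits an error-penalizing score like the one used in this benchmark.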

    An SVM Confidence-Based Approach to Medical Image Annotation

    This paper presents the algorithms and results of the "idiap" team's participation in the ImageCLEFmed annotation task in 2008. On the basis of our successful experience in 2007, we decided to integrate two different local structural and textural descriptors. Cues are combined through concatenation of feature vectors and through the Multi-Cue Kernel. The challenge this year was to annotate images coming mainly from classes with only a few training examples. We tackled the problem on two fronts: (1) we introduced a further integration strategy using the SVM as an opinion maker; (2) we enriched the poorly populated classes by adding virtual examples. We submitted several runs considering different combinations of the proposed techniques. The run jointly using the feature concatenation, the confidence-based opinion fusion and the virtual examples ranked first among all submissions.
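"Virtual examples" here means enlarging a small class with slightly modified copies of its images. The sketch below uses one-pixel translations of a toy array as the modification; the papers do not specify their exact transformations in these abstracts, so this is only an illustrative choice.

```python
# Virtual examples: enrich a poorly populated class by adding slightly
# modified copies of each image. Here the modification is a one-pixel
# translation in each cardinal direction (an assumed, illustrative choice).
import numpy as np

def virtual_examples(img):
    """Return one-pixel shifts of `img` along each axis, in both directions."""
    return [np.roll(img, shift, axis=ax) for ax in (0, 1) for shift in (-1, 1)]

img = np.arange(16).reshape(4, 4)   # stand-in for a tiny grayscale image
augmented = [img] + virtual_examples(img)
print(len(augmented))               # 5 training examples from 1 original
```

Label-preserving perturbations like these give the SVM more support vectors for rare classes without collecting new data.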

    Hierarchical Object Parsing from Structured Noisy Point Clouds

    Object parsing and segmentation from point clouds are challenging tasks because the relevant data is available only as thin structures along object boundaries or other features, and is corrupted by large amounts of noise. To handle this kind of data, flexible shape models are desired that can accurately follow the object boundaries. Popular models such as Active Shape and Active Appearance models lack the necessary flexibility for this task, while recent approaches such as Recursive Compositional Models make model simplifications in order to obtain computational guarantees. This paper investigates a hierarchical Bayesian model of shape and appearance in a generative setting. The input data is explained by an object parsing layer, which is a deformation of a hidden PCA shape model with a Gaussian prior. The paper also introduces a novel efficient inference algorithm that uses informed data-driven proposals to initialize local searches for the hidden variables. Applied to the problem of object parsing from structured point clouds such as edge detection images, the proposed approach obtains state-of-the-art parsing errors on two standard datasets without using any intensity information.
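The core building block mentioned above, a PCA shape model with a Gaussian prior on its hidden coefficients, is easy to sketch: a shape is the training mean plus a linear combination of principal modes, with each coefficient drawn from a zero-mean Gaussian whose variance is the mode's eigenvalue. The data and mode count below are toy values, not the paper's model.

```python
# Minimal PCA shape model with a Gaussian prior on the hidden coefficients:
# shape = mean + b @ modes, with b ~ N(0, diag(eigenvalues)). Toy data only.
import numpy as np

rng = np.random.default_rng(2)
# Toy training set: 20 "shapes", each a 30-point 2-D contour (60 coordinates).
shapes = rng.normal(size=(20, 60))
mean = shapes.mean(axis=0)

# Principal modes from the centred data (SVD of the data matrix = PCA).
_, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = Vt[:5]                          # top 5 deformation modes
var = (s[:5] ** 2) / (len(shapes) - 1)  # per-mode variance (eigenvalues)

def sample_shape():
    """Draw hidden coefficients from the Gaussian prior and deform the mean."""
    b = rng.normal(scale=np.sqrt(var))
    return mean + b @ modes

print(sample_shape().shape)
```

The hierarchical model in the paper deforms such a hidden PCA shape further to explain the observed point cloud; this sketch covers only the generative prior.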

    Towards the improvement of textual anatomy image classification using image local features


    MURPHY: Relations Matter in Surgical Workflow Analysis

    Autonomous robotic surgery has advanced significantly based on analysis of visual and temporal cues in surgical workflow, but relational cues from domain knowledge remain under investigation. Complex relations in surgical annotations can be divided into intra- and inter-relations, both valuable to autonomous systems for comprehending surgical workflows. Intra- and inter-relations describe the relevance of various categories within a particular annotation type and the relevance of different annotation types, respectively. This paper aims to systematically investigate the importance of relational cues in surgery. First, we contribute the RLLS12M dataset, a large-scale collection of robotic left lateral sectionectomy (RLLS), by curating 50 videos of 50 patients operated on by 5 surgeons and annotating a hierarchical workflow, which consists of 3 inter- and 6 intra-relations, 6 steps, 15 tasks, and 38 activities represented as the triplet of 11 instruments, 8 actions, and 16 objects, totaling 2,113,510 video frames and 12,681,060 annotation entities. Correspondingly, we propose a multi-relation purification hybrid network (MURPHY), which incorporates novel relation modules to augment the feature representation by purifying relational features using the intra- and inter-relations embodied in annotations. The intra-relation module leverages an R-GCN to embed visual features in different graph relations, which are aggregated using a targeted relation purification with affinity information measuring label consistency and feature similarity. The inter-relation module is motivated by attention mechanisms to regularize the influence of relational features based on the hierarchy of annotation types from the domain knowledge. Extensive experimental results on the curated RLLS dataset confirm the effectiveness of our approach, demonstrating that relations matter in surgical workflow analysis.
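The R-GCN idea the intra-relation module builds on is relation-specific message passing: each relation type gets its own weight matrix, and a node aggregates neighbour features transformed per relation. The NumPy sketch below shows that generic layer on a toy graph; it is not the MURPHY architecture, and the graph, features, and weights are arbitrary.

```python
# Generic relation-specific message passing (the R-GCN building block):
# H' = ReLU( sum_r normalize(A_r) @ H @ W_r ), one weight matrix per relation.
# Toy graph and random weights; not the MURPHY model itself.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, dim, n_rel = 4, 8, 2
H = rng.normal(size=(n_nodes, dim))                      # node features
A = rng.integers(0, 2, size=(n_rel, n_nodes, n_nodes))   # adjacency per relation
W = rng.normal(size=(n_rel, dim, dim))                   # weights per relation

def rgcn_layer(H, A, W):
    """Aggregate degree-normalized, relation-transformed neighbour features."""
    out = np.zeros_like(H)
    for r in range(len(A)):
        deg = A[r].sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-0
        out += (A[r] / deg) @ H @ W[r]
    return np.maximum(out, 0)  # ReLU

print(rgcn_layer(H, A, W).shape)
```

Keeping a separate weight matrix per relation is what lets the model treat, say, instrument-action edges differently from action-object edges within one graph.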