40 research outputs found

    Medical Case Retrieval

    Get PDF
    ABSTRACT The proposed PhD project addresses the problem of finding descriptions of diseases or patients' health records that are relevant to a given description of a patient's symptoms, a task known as medical case retrieval (MCR). Designing an automatic multimodal MCR system applicable to general medical data sets remains an open research problem, as indicated by the ImageCLEF 2013 MCR challenge, in which the best submitted runs achieved only moderate retrieval performance and used purely textual techniques. This project therefore aims at designing a multimodal MCR model capable of substantially better retrieval performance on the ImageCLEF data set than state-of-the-art techniques. Moreover, the potential for further improvement by leveraging relevance feedback from medical expert users for long-term learning will be investigated.
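
    The proposal only names relevance feedback as a direction; as a hedged illustration of what long-term learning from expert feedback could build on, the classic Rocchio update (a standard IR technique, not necessarily the project's chosen method) is sketched below with invented tf-idf-style vectors.

        import numpy as np

        def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
            # Classic Rocchio update: move the query vector toward the centroid
            # of expert-marked relevant cases and away from non-relevant ones.
            q = alpha * query
            if len(relevant):
                q = q + beta * np.mean(relevant, axis=0)
            if len(nonrelevant):
                q = q - gamma * np.mean(nonrelevant, axis=0)
            return np.maximum(q, 0.0)  # keep term weights non-negative

        rng = np.random.default_rng(4)
        query = rng.random(16)          # e.g. tf-idf of a symptom description
        relevant = rng.random((3, 16))  # cases an expert marked relevant
        nonrelevant = rng.random((2, 16))
        print(np.round(rocchio(query, relevant, nonrelevant), 2))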

    Multimodal medical case retrieval using the Dezert-Smarandache theory.

    No full text
    Most medical images are now digitized and stored with semantic information, leading to medical case databases. They may be used to aid diagnosis by retrieving cases similar to the one under examination. But the information is often incomplete, uncertain and sometimes conflicting, and therefore difficult to use. In this paper, we present a Case Based Reasoning (CBR) system for medical case retrieval, derived from the Dezert-Smarandache theory (DSmT), which is well suited to handling these problems. We introduce a frame of discernment Θ specific to case retrieval, which associates each element of Θ with a case in the database; we take advantage of the flexibility offered by DSmT's hybrid models to finely model the database. The system is designed so that heterogeneous sources of information can be integrated: in particular images, indexed by their digital content, and symbolic information. The method is evaluated on two classified databases: one for diabetic retinopathy follow-up (DRD) and one for screening mammography (DDSM). On these databases, results are promising: the retrieval precision at five reaches 81.8% on DRD and 84.8% on DDSM.
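
    The DSmT machinery in the paper is specialised, but the core idea of fusing belief masses over a frame whose elements are database cases can be sketched minimally. The sketch below is a hedged illustration rather than the paper's method: it applies the plain conjunctive rule to masses from an image-based source and a symbolic source. The case names and mass values are invented, and DSmT's hybrid models go further in how they handle empty intersections (conflict).

        from itertools import product

        def combine(m1, m2):
            # Conjunctive combination of two mass functions. Focal elements
            # are frozensets of case ids; m(C) = sum over A & B == C of
            # m1(A) * m2(B). Mass landing on the empty set is conflict.
            out = {}
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                c = a & b
                out[c] = out.get(c, 0.0) + wa * wb
            return out

        # Illustrative masses from two heterogeneous sources over three cases.
        image_source = {frozenset({"case1", "case2"}): 0.6,
                        frozenset({"case3"}): 0.4}
        symbolic_source = {frozenset({"case1"}): 0.7,
                           frozenset({"case2", "case3"}): 0.3}

        for focal, mass in sorted(combine(image_source, symbolic_source).items(),
                                  key=lambda kv: -kv[1]):
            print(sorted(focal) or "conflict (empty intersection)", round(mass, 3))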

    Comparing Fusion Techniques for the ImageCLEF 2013 Medical Case Retrieval Task

    Get PDF
    Retrieval systems can supply similar cases with a proven diagnosis for a new example case under observation, helping clinicians during their work. The ImageCLEFmed evaluation campaign proposes a framework in which research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results for the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify those best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the choice of fusion strategy can improve the best performance on the case-based retrieval task.
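
    The abstract does not say which fusion rules performed best; as a hedged sketch of the kind of late-fusion strategies typically compared in this setting, two standard rules, CombSUM over min-max normalised scores and reciprocal rank fusion, are shown below on invented visual and textual runs.

        def minmax(scores):
            # Min-max normalise a {doc: score} run to [0, 1].
            lo, hi = min(scores.values()), max(scores.values())
            span = (hi - lo) or 1.0
            return {d: (s - lo) / span for d, s in scores.items()}

        def comb_sum(runs):
            # CombSUM: add normalised scores across runs.
            fused = {}
            for run in runs:
                for doc, s in minmax(run).items():
                    fused[doc] = fused.get(doc, 0.0) + s
            return sorted(fused.items(), key=lambda kv: -kv[1])

        def rrf(runs, k=60):
            # Reciprocal rank fusion: sum of 1 / (k + rank) across runs.
            fused = {}
            for run in runs:
                ranked = sorted(run, key=run.get, reverse=True)
                for rank, doc in enumerate(ranked, start=1):
                    fused[doc] = fused.get(doc, 0.0) + 1.0 / (k + rank)
            return sorted(fused.items(), key=lambda kv: -kv[1])

        visual_run = {"caseA": 0.9, "caseB": 0.4, "caseC": 0.1}
        textual_run = {"caseB": 12.0, "caseC": 9.5, "caseD": 2.0}
        print(comb_sum([visual_run, textual_run]))
        print(rrf([visual_run, textual_run]))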

    Visual Information Retrieval in Endoscopic Video Archives

    Get PDF
    In endoscopic procedures, surgeons work with live video streams from the inside of their subjects. A main source of documentation for these procedures is still frames from the video, identified and captured during surgery. However, with growing demands and technical means, the streams are saved to storage servers and the surgeons need to retrieve parts of the videos on demand. In this submission we present a demo application for video retrieval based on visual features and late fusion, which allows surgeons to re-find shots taken during a procedure.
    Comment: Paper accepted at the IEEE/ACM 13th International Workshop on Content-Based Multimedia Indexing (CBMI) in Prague (Czech Republic) between 10 and 12 June 201
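
    The demo's actual features are not detailed in the abstract; a minimal sketch of visual shot retrieval, assuming simple per-channel colour histograms and cosine similarity (both stand-ins for whatever descriptors the application really uses), could look like this.

        import numpy as np

        def color_histogram(frame, bins=8):
            # Per-channel intensity histogram of an RGB frame (H, W, 3),
            # concatenated and L1-normalised - a deliberately simple feature.
            hist = np.concatenate([
                np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)
            ]).astype(float)
            return hist / hist.sum()

        def retrieve(query_frame, archive, top_k=5):
            # Rank archived keyframes by cosine similarity to the query.
            q = color_histogram(query_frame)
            scored = []
            for shot_id, frame in archive.items():
                f = color_histogram(frame)
                sim = float(q @ f / (np.linalg.norm(q) * np.linalg.norm(f)))
                scored.append((shot_id, sim))
            return sorted(scored, key=lambda kv: -kv[1])[:top_k]

        # Synthetic stand-ins for decoded keyframes from stored procedures.
        rng = np.random.default_rng(0)
        archive = {f"shot_{i}": rng.integers(0, 256, (120, 160, 3))
                   for i in range(20)}
        print(retrieve(archive["shot_3"], archive))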

    Case Retrieval using Bhattacharya Coefficient with Particle Swarm Optimization

    Get PDF
    Nowadays, health information management and utilization are demanding tasks for health informaticians seeking to deliver high-quality healthcare services. Retrieving similar cases from a case database can help doctors recognize patients of the same kind and their treatment details. Accordingly, this paper introduces a method called H-BCF for retrieving similar cases from a case database. Initially, a patient case database is constructed with the details of different patients and their treatments. When a new patient comes for treatment, the doctor collects information about that patient and sends a query to H-BCF. The H-BCF system matches the input query against the patient case database and retrieves the similar cases; here, the PSO algorithm is used together with the Bhattacharya coefficient (BCF) to retrieve the most similar cases. Finally, the doctor treats the new patient based on the retrieved cases. The performance of the proposed method is compared with existing methods, such as PESM, the FBSO neural network, and a hybrid model, on the measures accuracy and F-measure. The experimental results show that the proposed method attains a higher accuracy of 99.5% and a maximum F-measure of 99% compared to the existing methods.
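
    The abstract does not give the exact formulation, so the following is a hedged sketch: the Bhattacharyya coefficient between normalised feature histograms as a case-similarity score, with a per-bin weight vector exposed as the quantity a PSO search would tune. The feature vectors and case names are invented.

        import numpy as np

        def bhattacharyya(p, q, w=None):
            # Bhattacharyya coefficient between two discrete distributions.
            # p, q: non-negative feature histograms (normalised here).
            # w: optional per-bin weights - the quantities a PSO search
            #    would optimise; in this sketch they are just an input.
            p = np.asarray(p, float); p = p / p.sum()
            q = np.asarray(q, float); q = q / q.sum()
            w = np.ones_like(p) if w is None else np.asarray(w, float)
            return float(np.sum(w * np.sqrt(p * q)))

        query = [4, 1, 0, 2]
        cases = {"case1": [3, 2, 0, 1], "case2": [0, 1, 5, 2]}
        ranked = sorted(cases, key=lambda c: bhattacharyya(query, cases[c]),
                        reverse=True)
        print(ranked)  # most similar case first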

    Content–based fMRI Brain Maps Retrieval

    Get PDF
    The statistical analysis of functional magnetic resonance imaging (fMRI) is used to extract functional data on cerebral activation during a given experimental task. It allows for assessing changes in cerebral function related to cerebral activities. This methodology has been widely used, and a few initiatives aim to develop shared data resources. Searching these data resources for a specific research goal remains a challenging problem. In particular, work is needed to create a global content-based (CB) fMRI retrieval capability. This work presents a CB fMRI retrieval approach based on brain activation maps extracted using Probabilistic Independent Component Analysis (PICA). We obtained promising results on data from a variety of experiments, which highlight the potential of the system as a tool for finding hidden similarities between brain activation maps.
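
    How activation maps are matched is not specified in the abstract; one minimal, hedged way to compare PICA-style component maps is voxel-wise correlation with the sign discarded (ICA components are defined only up to sign), sketched below on random arrays standing in for real maps.

        import numpy as np

        def map_similarity(a, b):
            # Absolute Pearson correlation between two activation maps,
            # compared voxel-wise after flattening and mean-centering.
            a = a.ravel() - a.mean()
            b = b.ravel() - b.mean()
            return float(abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        rng = np.random.default_rng(1)
        db = {f"study_{i}": rng.normal(size=(8, 8, 4)) for i in range(10)}
        query = db["study_2"] + 0.5 * rng.normal(size=(8, 8, 4))  # noisy variant
        ranking = sorted(db, key=lambda k: map_similarity(query, db[k]),
                         reverse=True)
        print(ranking[:3])  # study_2 should rank near the top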

    Multimedia data mining for automatic diabetic retinopathy screening

    No full text
    This paper presents TeleOphta, an automatic system for screening diabetic retinopathy in teleophthalmology networks. Its goal is to reduce the burden on ophthalmologists by automatically detecting non-referable examination records, i.e. examination records presenting no image quality problems and no pathological signs related to diabetic retinopathy or any other retinal pathology. TeleOphta is an attempt to put into practice years of algorithmic developments from our groups. It combines image quality metrics, specific lesion detectors and a generic pathological pattern miner to process the visual content of eye fundus photographs. This visual information is further combined with contextual data in order to compute an abnormality risk for each examination record. The TeleOphta system was trained and tested on a large dataset of 25,702 examination records from the OPHDIAT screening network in Paris. It was able to automatically detect 68% of the non-referable examination records while achieving the same sensitivity as a second ophthalmologist. This suggests that it could safely reduce the burden on ophthalmologists by 56%.
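
    The abstract says visual and contextual information are combined into a per-record abnormality risk but does not give the model; the sketch below uses a logistic combination as one plausible stand-in (not TeleOphta's actual model), and all signal names, weights and values are invented.

        import math

        def abnormality_risk(signals, weights, bias):
            # Combine per-record signals into a risk in [0, 1] with a
            # logistic model - one plausible way to fuse detector outputs
            # and contextual data into a referral decision score.
            z = bias + sum(weights[n] * v for n, v in signals.items())
            return 1.0 / (1.0 + math.exp(-z))

        # Illustrative signals for one examination record.
        signals = {
            "image_quality": 0.9,          # higher = better quality
            "microaneurysm_score": 0.1,    # lesion detector output
            "exudate_score": 0.05,         # lesion detector output
            "diabetes_duration_yrs": 4.0,  # contextual datum
        }
        weights = {"image_quality": -1.5, "microaneurysm_score": 4.0,
                   "exudate_score": 3.0, "diabetes_duration_yrs": 0.05}
        print(abnormality_risk(signals, weights, bias=-0.5))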

    Multimodal non-linear latent semantic method for information retrieval

    Get PDF
    Multimodal information retrieval is an information retrieval sub-task in which queries and database target elements are composed of several modalities or views. A modality is a representation of a complex phenomenon, captured and measured by different sensors or information sources, each encoding some information about it. Each modality contains information that is both complementary to and shared with the other modalities, and this additional information can be used to improve the retrieval process. Several methods have been developed to take advantage of the information distributed across modalities: some exploit statistical properties of multimodal data to find correlations and implicit relationships, others learn heterogeneous distance functions, and others learn linear and non-linear projections that transform data from the original input space to a common latent semantic space where the different modalities are comparable. In spite of the attention dedicated to this issue, multimodal information retrieval is still an open problem. This thesis presents a multimodal information retrieval system that learns several mapping functions to transform multimodal data to a latent semantic space, where the different modalities are combined and can be compared to build a multimodal ranking and perform a retrieval task. Additionally, a multimodal kernelized latent semantic embedding method is proposed to construct a supervised multimodal index, integrating multimodal data and label supervision. This method can map the data to three different spaces in which several retrieval setups can be performed. The proposed system and method were evaluated on a multimodal medical case-based retrieval task where each case is composed of a whole-slide image of a prostate tissue sample, the pathologist's text report and a Gleason score as a supervised label. Multimodal data and labels were combined to produce a multimodal index, which was used to retrieve multimodal information and achieves outstanding results compared with previous work on this topic. Non-linear mappings give the proposed model more flexibility and representation capacity; however, constructing a non-linear mapping over a large dataset with kernel methods can be computationally costly. To reduce this cost and allow large-scale applications, the budget technique was introduced, showing a good trade-off between speed and effectiveness.
    Funding: COLCIENCIAS, Jóvenes investigadores 761/2016. Research line: computer science. Master's thesis.
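
    The thesis method (supervised, with label information and three target spaces) is richer than can be shown briefly; as a hedged, unsupervised stand-in, the sketch below averages per-modality RBF kernels and embeds all cases into one latent space with kernel PCA, so that cases described by images and text become comparable points. All dimensions and data are synthetic.

        import numpy as np

        def rbf_kernel(X, Y, gamma=0.5):
            # RBF kernel matrix between row-vector sets X and Y.
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kernel_latent_space(K, dim=2):
            # Project kernelised data to a latent space via the top
            # eigenvectors of the centred kernel matrix (kernel PCA).
            n = K.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            vals, vecs = np.linalg.eigh(J @ K @ J)
            idx = np.argsort(vals)[::-1][:dim]
            # Scale eigenvectors by sqrt(eigenvalue) to get projections.
            return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 1e-12))

        rng = np.random.default_rng(2)
        image_feats = rng.normal(size=(6, 32))  # e.g. visual descriptors per case
        text_feats = rng.normal(size=(6, 50))   # e.g. tf-idf of the report

        # Average the two modality kernels, then embed all cases in one
        # latent space where the modalities become jointly comparable.
        K = 0.5 * rbf_kernel(image_feats, image_feats) + \
            0.5 * rbf_kernel(text_feats, text_feats)
        Z = kernel_latent_space(K, dim=2)
        print(Z.shape)  # (6, 2): one latent vector per case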

    A MEDICAL X-RAY IMAGE CLASSIFICATION AND RETRIEVAL SYSTEM

    Get PDF
    Medical image retrieval systems have gained high interest in the scientific community due to advances in medical imaging technologies. The semantic gap is one of the biggest challenges in retrieval from large medical databases. This paper presents a retrieval system that aims at addressing this challenge by learning the main concept of every image in the medical database. The proposed system contains two modules: a classification/annotation module and a retrieval module. The first module classifies and subsequently annotates all medical images automatically, using SIFT (Scale Invariant Feature Transform) and LBP (Local Binary Patterns) as descriptors. Image-based and patch-based approaches are used to build a bag of words (BoW) from these descriptors, and their impact on classification performance is evaluated. The results show that the classification accuracy obtained with image-based integration techniques is higher than that obtained with the other techniques. The retrieval module enables search based on textual, visual and multimodal queries. The text-based query supports retrieval of medical images by category, as it is carried out via the categories the images were annotated with in the classification module. The multimodal query applies a late fusion technique to the retrieval results obtained from text-based and image-based queries; this fusion enhances retrieval performance by combining the advantages of both text-based and content-based image retrieval.
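
    As a hedged sketch of the bag-of-words step the paper builds on SIFT/LBP descriptors, the code below clusters pooled local descriptors into a visual vocabulary and quantises each image into a word histogram; random vectors stand in for real SIFT descriptors, and the vocabulary size and dimensions are arbitrary.

        import numpy as np
        from scipy.cluster.vq import kmeans2, vq

        def build_codebook(all_descriptors, k=16, seed=0):
            # Cluster pooled local descriptors (e.g. SIFT or LBP patches)
            # into k visual words.
            codebook, _ = kmeans2(all_descriptors, k, seed=seed, minit="++")
            return codebook

        def bow_histogram(descriptors, codebook):
            # Assign each descriptor to its nearest visual word and return
            # an L1-normalised word histogram for the image.
            words, _ = vq(descriptors, codebook)
            hist = np.bincount(words, minlength=len(codebook)).astype(float)
            return hist / hist.sum()

        rng = np.random.default_rng(3)
        # Random 128-d vectors standing in for SIFT descriptors of 3 X-ray images.
        images = [rng.normal(size=(200, 128)) for _ in range(3)]
        codebook = build_codebook(np.vstack(images), k=16)
        hists = [bow_histogram(d, codebook) for d in images]
        print(np.round(hists[0], 3))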

    Overview of the ImageCLEF 2013 medical tasks

    Get PDF
    In 2013, the tenth edition of the medical task of the ImageCLEF benchmark was organized. For the first time, the ImageCLEFmed workshop took place in the United States of America at the annual AMIA (American Medical Informatics Association) meeting, even though the task was organized, as in previous years, in connection with the other ImageCLEF tasks. As in 2012, a subset of the open access collection of PubMed Central was distributed. This year, there were four subtasks: modality classification, compound figure separation, image-based and case-based retrieval. The compound figure separation task was included due to the large number of multipanel images available in the literature and the importance of separating them for targeted retrieval. More compound figures were also included in the modality classification task to make it correspond to the distribution in the full database. The retrieval tasks remained in the same format as in previous years, but a larger number of tasks were available for the image-based and case-based tasks. This paper presents an analysis of the techniques applied by the ten groups participating in ImageCLEFmed 2013.