11 research outputs found

    Cloud-Based Benchmarking of Medical Image Analysis

    Medical imaging

    Comparing Fusion Techniques for the ImageCLEF 2013 Medical Case Retrieval Task

    Retrieval systems can supply similar cases with a proven diagnosis to a new example case under observation to help clinicians during their work. The ImageCLEFmed evaluation campaign proposes a framework where research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results of the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify the approaches best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the selection of the fusion strategy can improve the best performance on the case-based retrieval task.
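The late-fusion idea the abstract refers to can be sketched as a weighted linear (CombSUM-style) combination of normalised visual and textual retrieval scores. The document identifiers, weights, and score values below are illustrative, not the actual ImageCLEF 2013 configuration.

```python
# Minimal sketch of weighted late fusion of two retrieval score lists.
# Weights and scores are illustrative assumptions, not the paper's values.

def minmax_normalise(scores):
    """Scale a {doc_id: score} map to [0, 1] so modalities are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 0.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def fuse(visual, textual, w_visual=0.3, w_textual=0.7):
    """Linearly combine normalised scores; docs missing from one modality count as 0."""
    v, t = minmax_normalise(visual), minmax_normalise(textual)
    docs = set(v) | set(t)
    fused = {d: w_visual * v.get(d, 0.0) + w_textual * t.get(d, 0.0)
             for d in docs}
    return sorted(fused, key=fused.get, reverse=True)

visual = {"case1": 0.9, "case2": 0.4, "case3": 0.1}
textual = {"case2": 12.0, "case3": 8.0, "case4": 3.0}
print(fuse(visual, textual))  # → ['case2', 'case3', 'case1', 'case4']
```

The min-max normalisation step matters because visual similarity scores and text-retrieval scores typically live on very different scales; without it one modality silently dominates the fusion.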

    The medGIFT Group in ImageCLEFmed 2013

    This article presents the participation of the medGIFT group in ImageCLEFmed 2013. Since 2004, the group has participated in the medical image retrieval tasks of ImageCLEF each year. There are four types of tasks for ImageCLEFmed 2013: modality classification, image-based retrieval, case-based retrieval and a new task on compound figure separation. The medGIFT group participated in all four tasks. MedGIFT is developing a system named ParaDISE (Parallel Distributed Image Search Engine), which is the successor of GIFT (GNU Image Finding Tool). The alpha version of ParaDISE was used to run the experiments in the competition. The focus was on the use of multiple features in combination with novel strategies, i.e., compound figure separation for modality classification, or modality filtering for ad-hoc image and case-based retrieval.

    Semi–Supervised Learning for Image Modality Classification

    Searching for medical image content is a regular task for many physicians, especially in radiology. Retrieval of medical images from the scientific literature can benefit from automatic modality classification to focus the search and filter out non-relevant items. Training datasets are often unevenly distributed across classes, sometimes resulting in less than optimal classification performance. This article proposes a semi-supervised learning approach using a k-Nearest Neighbour (k-NN) classifier to exploit unlabelled data and expand the training set. The algorithmic implementation is described and the method is evaluated on the ImageCLEFmed modality classification benchmark. Results show that this approach achieves improved performance over supervised k-NN and Random Forest classifiers. Moreover, medical case-based retrieval benefits from the modality filter.
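The self-training loop behind this kind of semi-supervised k-NN can be sketched as follows: the classifier labels the unlabelled points on which its neighbour vote is unanimous and moves them into the training set, then repeats. The toy 2-D points, class names, k value, and unanimity threshold are all illustrative assumptions, not the paper's exact setup.

```python
# Sketch of semi-supervised self-training with a k-NN classifier.
# Data, k, and the confidence rule are toy assumptions for illustration.
import math
from collections import Counter

def knn_predict(train, point, k=3):
    """Return (label, confidence), confidence being the neighbour vote fraction."""
    neighbours = sorted(train, key=lambda item: math.dist(item[0], point))[:k]
    votes = Counter(label for _, label in neighbours)
    label, count = votes.most_common(1)[0]
    return label, count / k

def self_train(labelled, unlabelled, k=3, threshold=1.0):
    """Iteratively move confidently-classified points from the unlabelled
    pool into the labelled set, until no point clears the threshold."""
    pool = list(unlabelled)
    while pool:
        confident = []
        for point in pool:
            label, conf = knn_predict(labelled, point, k)
            if conf >= threshold:
                confident.append((point, label))
        if not confident:
            break
        for point, label in confident:
            labelled.append((point, label))
            pool.remove(point)
    return labelled

labelled = [((0, 0), "xray"), ((0, 1), "xray"), ((5, 5), "mri"), ((5, 6), "mri")]
unlabelled = [(0.5, 0.5), (5.2, 5.5)]
expanded = self_train(labelled, unlabelled, k=2)
print(len(expanded))  # → 6
```

Requiring a unanimous vote (threshold 1.0) is a conservative confidence rule; looser thresholds grow the training set faster at the risk of propagating label noise.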

    Sistem dapatan semula imej untuk aplikasi perubatan (An Image Retrieval System for Medical Applications)

    Image retrieval (IR) is a system for searching images using particular features or a specific context within an image. In medicine, IR systems are used to supply required images accurately and quickly to medical specialists, typically during the diagnosis and treatment of disease. The earliest retrieval approach, still widely used in medicine, is text-based image retrieval (TBIRS). TBIRS uses keywords describing the context of an image and requires manual text annotation. Text annotation is a tedious task, especially for large databases, so the probability of human error is high. To overcome these problems, content-based image retrieval (CBIRS) with automatic indexing is proposed. This approach involves computer-based processing of medical images using visual features such as colour, shape and texture. However, it is well known that a given CBIRS algorithm is usually specific to a single modality and to particular anatomical regions. Furthermore, CBIRS neglects human perception when interpreting an image, which gives rise to the semantic gap problem. Therefore, a hybrid image retrieval system (HBIRS), combining the strengths of both TBIRS and CBIRS, has been introduced to address the semantic gap in particular and to strengthen image retrieval systems in general. An efficient image retrieval framework, HBIRS, is also proposed. Nevertheless, this study covers only TBIRS and CBIRS for medical applications, and a TBIRS prototype using X-ray images is also proposed.

    Medical Image Retrieval Using Multimodal Semantic Indexing

    Large collections of medical images have become a valuable source of knowledge, playing an important role in education, medical research and clinical decision making. An important unsolved issue that is actively investigated is efficient and effective access to these repositories. This work addresses the problem of information retrieval in large collections of biomedical images, allowing sample images to be used as an alternative to classic keyword queries. The proposed approach takes advantage of both modalities: text and visual information. The main drawback of multimodal strategies is that the associated algorithms are memory- and computation-intensive. An important challenge addressed in this work is therefore the design of scalable strategies that can be applied efficiently and effectively to large medical image collections. The experimental evaluation shows that the proposed multimodal strategies improve image retrieval performance and are fully applicable to large image repositories.

    Image Area Reduction for Efficient Medical Image Retrieval

    Content-based image retrieval (CBIR) has been one of the most active areas in medical image analysis in the last two decades because of the steady increase in the number of digital images used. Efficient diagnosis and treatment planning can be supported by developing retrieval systems that help provide high-quality healthcare. Extensive research has attempted to improve image retrieval efficiency; the critical factors when searching large databases are time and storage requirements. In general, although many methods have been suggested to increase accuracy, fast retrieval has been investigated only sporadically. In this thesis, two different approaches are proposed to reduce both time and space requirements for medical image retrieval. The IRMA dataset is used to validate the proposed methods. Both methods use Local Binary Pattern (LBP) histogram features extracted from 14,410 X-ray images of the IRMA dataset. The first method is image folding, which operates on salient regions in an image. Saliency is determined by a context-aware saliency algorithm, after which the image is folded. After the folding process, the reduced image area is used to extract multi-block and multi-scale LBP features, which are classified by a multi-class support vector machine (SVM). The other method combines classification with distance-based feature similarity. Images are first classified into general classes using LBP features; retrieval is then performed within the class to locate the most similar images. Between the classification and retrieval steps, LBP features are pruned by using the error histogram of a shallow (n/p/n) autoencoder to quantify the retrieval relevance of image blocks. If a region is relevant, the autoencoder yields a large decoding error for it; hence, by examining the autoencoder error of image blocks, irrelevant regions can be detected and eliminated. To calculate similarity within general classes, the distance between the LBP features of the relevant regions is computed. The results show that retrieval time can be reduced and storage requirements lowered without a significant decrease in accuracy.
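The LBP histogram feature this thesis builds on can be sketched in a few lines: each interior pixel is compared against its eight neighbours, the comparison bits form an 8-bit code, and the codes are accumulated into a 256-bin histogram. The tiny 3x3 image and the neighbour ordering below are illustrative; the thesis uses multi-block and multi-scale variants over full X-ray images.

```python
# Minimal sketch of the basic 3x3 Local Binary Pattern histogram.
# The toy image and neighbour ordering are illustrative assumptions.

def lbp_histogram(img):
    """256-bin histogram of 8-neighbour LBP codes over the image interior."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # Clockwise 8-neighbourhood offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= centre:
                    code |= 1 << bit
            hist[code] += 1
    return hist

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
hist = lbp_histogram(img)
print(sum(hist))  # → 1 (a 3x3 image has a single interior pixel)
```

Because the codes depend only on sign comparisons against the centre pixel, the histogram is invariant to monotonic grey-level changes, which is one reason LBP features are popular for X-ray retrieval.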

    Use Case Oriented Medical Visual Information Retrieval & System Evaluation

    Large amounts of medical visual data are produced daily in hospitals, while new imaging techniques continue to emerge. In addition, many images are made available continuously via publications in the scientific literature and can also be valuable for clinical routine, research and education. Information retrieval systems are useful tools for providing access to the biomedical literature and fulfilling the information needs of medical professionals. The tools developed in this thesis can potentially help clinicians make decisions about difficult diagnoses via a case-based retrieval system built around a use case associated with a specific evaluation task. This system retrieves articles from the biomedical literature when queried with a case description and attached images. The thesis proposes a multimodal approach for medical case-based retrieval, focusing on the integration of visual information connected to text. Furthermore, the ImageCLEFmed evaluation campaign was organised during this thesis, promoting medical retrieval system evaluation.

    Multimodal information spaces for content-based image retrieval

    Image collections today are increasingly large, and they continue to grow constantly. Without the help of image search systems, these abundant visual records collected in many different fields and domains may remain unused and inaccessible. Many available image databases contain complementary modalities, such as attached text resources, which can be used to build an index for querying with keywords. However, sometimes users do not have or do not know the right words to express what they need, and keywords do not express all the visual variations that an image may contain. Using example images as queries is an alternative in scenarios such as searching with a camera-equipped mobile phone, or supporting medical diagnosis by searching a large medical image collection. Still, matching only visual features between the query and the image database may lead to results that are undesirable from the user's perspective. These conditions make finding relevant images for a specific information need very challenging, time-consuming or even frustrating. Instead of considering only a single data modality when building image search indexes, the simultaneous use of both visual and text modalities has been suggested, since non-visual modalities may provide complementary information to enrich the image representation. The goal of this research is to study the relationships between visual content and text terms in order to build useful indexes for image search. A family of algorithms based on matrix factorization is proposed for extracting the multimodal aspects of an image collection. Using this knowledge about how visual features and text terms correlate, a search index is constructed that can be queried with keywords, example images, or combinations of both. Systematic experiments were conducted on different datasets to evaluate the proposed indexing algorithms.
The experimental results showed that multimodal indexing is an effective strategy for designing image search systems.
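The matrix-factorization idea behind such multimodal indexes can be sketched with a joint feature-by-image matrix, stacking text-term counts on top of visual features and factorizing it into non-negative latent factors. The data, the latent dimension, and the use of Lee-Seung multiplicative updates are illustrative assumptions; the thesis proposes its own family of factorization algorithms.

```python
# Sketch: non-negative matrix factorization over a joint term+visual matrix,
# the general idea behind a multimodal latent index. Toy data throughout.
import numpy as np

rng = np.random.default_rng(0)

# Rows = features (first 3 are text terms, last 3 are visual features),
# columns = images. Images 0 and 2 are text-alike; 1 and 3 are visual-alike.
X = np.array([[2, 0, 1, 0],
              [1, 0, 2, 0],
              [0, 2, 0, 1],
              [3, 0, 2, 0],
              [0, 1, 0, 2],
              [0, 2, 0, 3]], dtype=float)

def nmf(X, k=2, iters=200):
    """Lee-Seung multiplicative updates: X ≈ W @ H with W, H >= 0."""
    m, n = X.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

W, H = nmf(X)
# Columns of H are latent image representations; retrieval reduces to
# ranking columns by cosine similarity with the query's latent vector.
query = H[:, 0]  # use image 0 itself as the query
sims = (H.T @ query) / (np.linalg.norm(H, axis=0) * np.linalg.norm(query) + 1e-9)
print(np.round(sims, 3))
```

Because both modalities share the same latent space, a query given only as keywords (or only as an image) can still be projected into that space and matched against the whole collection, which is the practical appeal of this kind of index.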