
    Medical Image Retrieval Using Pretrained Embeddings

    The wide range of imaging techniques and data formats used for medical images makes accurate retrieval from image databases challenging. Efficient retrieval systems are crucial to advancing medical research, enabling large-scale studies and innovative diagnostic tools. Addressing the challenges of medical image retrieval is therefore essential for the continued enhancement of healthcare and research. In this study, we evaluated the feasibility of employing four state-of-the-art pretrained models for medical image retrieval at the modality, body-region, and organ levels and compared the results of two similarity indexing approaches. Since the employed networks take 2D images, we analyzed the impact of weighting and sampling strategies for incorporating 3D information during retrieval of 3D volumes. We showed that medical image retrieval is feasible using pretrained networks without any additional training or fine-tuning. Using pretrained embeddings, we achieved a recall of 1 for various tasks at the modality, body-region, and organ levels. (Comment: 8 pages, 3 figures, 4 tables)
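    The retrieval step this abstract describes (embedding images with a frozen pretrained network, then ranking gallery items by similarity) can be sketched as below. The embeddings are simulated with random vectors standing in for pretrained-network outputs; all names and dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def cosine_retrieve(query, gallery, k=5):
    """Return indices of the k gallery embeddings most similar to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to every gallery item
    return np.argsort(-sims)[:k]      # top-k indices, most similar first

# Toy gallery: two well-separated "modalities" (stand-ins for, e.g., CT vs MR embeddings).
rng = np.random.default_rng(0)
ct = rng.normal(loc=+1.0, scale=0.1, size=(10, 8))
mr = rng.normal(loc=-1.0, scale=0.1, size=(10, 8))
gallery = np.vstack([ct, mr])         # indices 0-9 are CT-like, 10-19 are MR-like

hits = cosine_retrieve(ct[0], gallery, k=5)
print(all(i < 10 for i in hits))      # every retrieved item shares the query's modality
```

    With no training at all, the frozen embeddings separate the two groups, which is the feasibility claim the abstract makes; a recall of 1 at a given level means every relevant item appears in the retrieved set.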

    Medical Images Retrieval using Clustering Technique

    The past few years have witnessed an increase in media-rich digital data such as images, video, and audio, and the medical domain is one of the richest sources of image data. Retrieving images from large and varied collections of medical image databases is a challenging and important problem, making image retrieval one of the fastest-growing research areas in medical imaging. In this research, a clustering technique is used to group similar medical images in the database together, decreasing the retrieval time for a searched image. The query image is first matched against the clusters to find the nearest one, and the matching image or nearest set of images is then searched for within that cluster. Experiments study the effect of the number of clusters on retrieval time and on the correctness of the retrieved images, as well as the effect of the threshold value on correctness. DOI: 10.17762/ijritcc2321-8169.150513
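    A minimal sketch of the cluster-then-search idea: group the database embeddings with k-means, route the query to the nearest centroid, and rank only that cluster's members. The data and the tiny k-means implementation below are illustrative, not taken from the paper.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # Recompute labels so they match the final centroids.
    labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    return centroids, labels

def cluster_retrieve(query, X, centroids, labels):
    """Search only the cluster whose centroid is nearest the query."""
    c = np.argmin(((centroids - query) ** 2).sum(-1))
    members = np.flatnonzero(labels == c)
    dists = ((X[members] - query) ** 2).sum(-1)
    return members[np.argsort(dists)]   # member indices, nearest first

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (20, 4)), rng.normal(5, 0.2, (20, 4))])
centroids, labels = kmeans(X, k=2)
result = cluster_retrieve(X[3], X, centroids, labels)
print(result[0])   # the nearest match to image 3 is image 3 itself: 3
```

    The speed-up comes from comparing the query against k centroids plus one cluster's members rather than the whole database; the threshold the abstract mentions would then decide which of those nearest members count as correct matches.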

    Medical image retrieval for augmenting diagnostic radiology

    Even though the use of medical imaging to diagnose patients is ubiquitous in clinical settings, its interpretation remains challenging for radiologists. Many factors make this interpretation task difficult, one of which is that medical images sometimes present subtle clues that are nonetheless crucial for diagnosis. Conversely, similar clues can indicate multiple diseases, making it difficult to arrive at a definitive diagnosis. To help radiologists quickly and accurately interpret medical images, there is a need for a tool that can augment their diagnostic procedures and increase efficiency in their daily workflow. A general-purpose medical image retrieval system can be such a tool, as it allows them to search and retrieve similar, already-diagnosed cases for comparative analyses that complement their diagnostic decisions. In this thesis, we contribute to developing such a system by proposing approaches to be integrated as modules of a single system, enabling it to handle the various information needs of radiologists and thus augment their diagnostic processes during the interpretation of medical images. We have mainly studied the following retrieval approaches to handle radiologists' different information needs: i) retrieval based on contents; ii) retrieval based on contents, patients' demographics, and disease predictions; and iii) retrieval based on contents and radiologists' text descriptions. For the first study, we aimed to find an effective feature representation method to distinguish medical images considering their semantics and modalities. To do so, we experimented with different representation techniques based on handcrafted methods (mainly texture features) and deep learning (deep features). Based on the experimental results, we propose an effective feature representation approach and deep learning architectures for learning and extracting medical image contents.
    For the second study, we present a multi-faceted method that complements image contents with patients' demographics and deep learning-based disease predictions, making it able to identify similar cases accurately within the clinical context the radiologists seek. For the last study, we propose a guided search method that integrates an image with a radiologist's text description to guide the retrieval process. This method ensures that the retrieved images are suitable for the comparative analysis used to confirm or rule out initial diagnoses (the differential diagnosis procedure). Furthermore, our method is based on a deep metric learning technique and outperforms traditional content-based approaches that rely only on image features and thus sometimes retrieve insignificant, random images.
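    The second study's fusion of image content with demographics and disease predictions could, in spirit, be a weighted combination of per-facet similarity scores. The weights and the gallery below are hypothetical illustrations; the thesis's actual model (deep metric learning) is more involved.

```python
def fused_score(img_sim, demo_match, pred_match, w=(0.6, 0.2, 0.2)):
    """Weighted fusion of image similarity with clinical-context signals.
    The weights are illustrative, not taken from the thesis."""
    return w[0] * img_sim + w[1] * demo_match + w[2] * pred_match

# Hypothetical gallery: (case id, image similarity, same demographics?, same predicted disease?)
gallery = [
    ("case_a", 0.90, 0.0, 0.0),   # visually closest, but different clinical context
    ("case_b", 0.80, 1.0, 1.0),   # slightly less similar, matching context
    ("case_c", 0.40, 1.0, 0.0),
]
ranked = sorted(gallery, key=lambda c: -fused_score(c[1], c[2], c[3]))
print(ranked[0][0])   # context pushes case_b (0.88) ahead of case_a (0.54): case_b
```

    The point of the fusion is visible in the toy ranking: a pure content-based system would return case_a first, while the clinical-context signals promote the case that actually matches the patient's situation.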

    An exploratory study of user-centered indexing of published biomedical images

    User-centered image indexing—often reported in research on collaborative tagging, social classification, folksonomy, or personal tagging—has received a considerable amount of attention [1-7]. The general themes in more recent studies on this topic include user-centered tagging behavior by type of image, the pros and cons of user-created tags as compared to controlled index terms, assessment of the value added by user-generated tags, and comparison of automatic versus human indexing in the context of web digital image collections such as Flickr. For instance, Golbeck's findings restate the importance of indexer experience, order, and type of images [8]. Rorissa found a significant difference in the number of terms assigned when using Flickr tags or index terms on the same image collection, which might suggest a difference in the level of indexing by professional indexers and Flickr taggers [9]. Studies focusing on users, their tagging experiences, and user-generated tags suggest ideas to be implemented as part of a personalized, customizable tagging system. Additionally, Stvilia and her colleagues found that tagger age and image familiarity are negatively related, while indexing and tagging experience are positively associated [10]. A major question for biomedical image indexing is whether the results of the aforementioned studies, all of which dealt with general image collections, are applicable to images in the medical domain. In spite of the importance of visual material in medical education and the prevalence of digitized images in formal medical practice and education, medical students have few opportunities to annotate biomedical images. End-user training could improve the quality of image indexing and thereby improve retrieval.
    In a pilot assessment of image indexing and retrieval quality by medical students, this study compared the concept completion and retrieval effectiveness of indexing terms generated by medical students for thirty-nine histology images selected from the PubMed Central (PMC) database. Indexing instruction was given only to an intervention group to test its impact on the quality of end-user image indexing.

    Large-scale analysis, management, and retrieval of biological and medical images

    Biomedical image data have been growing quickly in volume, speed, and complexity, and there is an increasing reliance on the analysis of these data. Biomedical scientists need efficient and accurate analyses of large-scale imaging data, as well as innovative methods for retrieving visually similar imagery across large data collections, to support complex studies in biological and medical applications. Moreover, biomedical images rely on increased resolution to capture subtle phenotypes of diseases, but this poses a challenge for clinicians, who must sift through haystacks of visual cues to make informative diagnoses. To tackle these challenges, we developed computational methods for large-scale analysis of biological and medical imaging data, using simulated annealing to improve the quality of image feature extraction. Furthermore, we designed a Big Data infrastructure for the large-scale analysis and retrieval of digital pathology images, and conducted a longitudinal study of clinicians' usage patterns of an image database management system (MDID) to shed light on the potential adoption of new informatics tools. This research also resulted in image analysis, management, and retrieval applications relevant to the dermatology, radiology, pathology, life sciences, and palynology disciplines. These tools offer the potential to answer research questions that would not otherwise be answerable, by taking advantage of Big Data technologies.
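    The abstract mentions simulated annealing as the optimizer behind its feature-extraction quality improvements. As a hedged illustration of that optimizer only (the objective below is a toy, not the dissertation's), here is annealing over a binary feature-selection mask:

```python
import math
import random

def anneal_select(features, quality, iters=500, t0=1.0, seed=0):
    """Simulated annealing over binary feature masks.
    `quality` scores a mask; higher is better. Illustrative only."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(len(features))]
    best, best_q = mask[:], quality(mask)
    cur_q = best_q
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9          # linear cooling schedule
        cand = mask[:]
        j = rng.randrange(len(cand))
        cand[j] = not cand[j]                    # flip one feature in or out
        q = quality(cand)
        # Always accept improvements; accept worse moves with temperature-dependent probability.
        if q > cur_q or rng.random() < math.exp((q - cur_q) / t):
            mask, cur_q = cand, q
            if q > best_q:
                best, best_q = cand[:], q
    return best

# Toy objective: features 0 and 2 are informative, the rest only add cost.
feats = ["contrast", "hue", "texture", "edge_density"]
score = lambda m: (2 if m[0] else 0) + (2 if m[2] else 0) - sum(m)
sel = anneal_select(feats, score)
print(sel)
```

    Early on, the high temperature lets the search escape poor masks; as the temperature cools, it settles into a high-quality subset, which is the property that makes annealing attractive for rugged feature-quality landscapes.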

    Saliency-Enhanced Content-Based Image Retrieval for Diagnosis Support in Dermatology Consultation: Reader Study.

    BACKGROUND Previous studies have demonstrated that content-based medical image retrieval can play an important role in assisting dermatologists with skin lesion diagnosis. However, current state-of-the-art approaches have not been adopted in routine consultation, partly because their lack of interpretability limits clinical users' trust. OBJECTIVE This study developed a new image retrieval architecture for polarized or dermoscopic imaging guided by interpretable saliency maps. This approach provides better feature extraction, leading to better quantitative retrieval performance as well as interpretability for an eventual real-world implementation. METHODS Content-based image retrieval (CBIR) algorithms rely on comparing image features embedded by a convolutional neural network (CNN) against a labeled data set. Saliency maps are interpretable computer vision methods that highlight the regions most relevant to a neural network's prediction. By introducing a fine-tuning stage that uses saliency maps to guide feature extraction, the accuracy of image retrieval is improved. We refer to this approach as saliency-enhanced CBIR (SE-CBIR). A reader study was designed at the University Hospital Zurich Dermatology Clinic to evaluate SE-CBIR's retrieval accuracy as well as its impact on participants' confidence in their diagnoses. RESULTS SE-CBIR improved retrieval accuracy by 7% (77% vs 84%) for single-lesion retrieval compared with traditional CBIR. The reader study showed an overall increase in classification accuracy of 22% (62% vs 84%) when participants were provided with SE-CBIR-retrieved images. In addition, overall confidence in the lesion's diagnosis increased by 24%. Finally, the use of SE-CBIR as a support tool helped participants reduce the number of nonmelanoma lesions previously diagnosed as melanoma (overdiagnosis) by 53%.
    CONCLUSIONS SE-CBIR achieves better retrieval accuracy than traditional CNN-based CBIR approaches. Furthermore, we have shown how such support tools can help dermatologists and residents improve diagnostic accuracy and confidence. Additionally, by introducing interpretable methods, we should expect increased acceptance and use of these tools in routine consultation.
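    The core SE-CBIR idea, letting a saliency map steer which spatial locations contribute to the retrieval embedding, can be illustrated with saliency-weighted pooling of a CNN feature map. This is a sketch of the concept only, not the paper's exact fine-tuning pipeline, and the toy tensors below are assumptions:

```python
import numpy as np

def saliency_pooled_embedding(feature_map, saliency):
    """Pool a CNN feature map into an embedding, weighting each spatial
    location by its saliency instead of plain global average pooling."""
    w = saliency / saliency.sum()   # normalize saliency into spatial weights
    # Sum over the two spatial axes, weighted per location -> one value per channel.
    return np.tensordot(feature_map, w, axes=([1, 2], [0, 1]))

# Toy 2-channel, 2x2 feature map; saliency concentrates entirely on the top-left cell.
fmap = np.arange(8, dtype=float).reshape(2, 2, 2)   # channels x height x width
sal = np.array([[1.0, 0.0], [0.0, 0.0]])
emb = saliency_pooled_embedding(fmap, sal)
print(emb)   # each channel's top-left activation only: [0. 4.]
```

    Compared with unweighted average pooling, the embedding now reflects the lesion region the saliency map deems relevant, which is also what makes the retrieval interpretable: the clinician can see which region drove the match.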

    Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning

    In visually oriented, specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail at grouping medical images from the physicians' viewpoint. This is because fully automated learning techniques cannot yet bridge the gap between image features and domain-specific content in the absence of expert knowledge. Understanding how experts get information from medical images is therefore an important research topic. As a prior study, we conducted data elicitation experiments in which physicians were instructed to inspect each medical image towards a diagnosis while describing the image content to a student seated nearby. Experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge: finding patterns in the expert data elicited from image-based diagnoses. These patterns are useful for understanding both the characteristics of the medical images and the experts' cognitive reasoning processes. The transformation from viewed raw image features to interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which projects multiple expert-derived data modalities to high-level abstractions. To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed that treats experts as an integral part of the learning process.
    Specifically, experts locally refine the medical image groups presented by the learned model, which incrementally re-learns the model globally. This paradigm avoids onerous expert annotations for model training while aligning the learned model with the experts' sense-making.
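    The matrix factorization-based fusion described above can be illustrated with a generic nonnegative matrix factorization (NMF): stack the expert-derived modalities per image (e.g. gaze features and verbal-description features) and factorize them into a small set of shared abstractions. The multiplicative-update rule and toy data below are generic illustrations, not the dissertation's model:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Multiplicative-update NMF: V is approximated by W @ H with nonnegative factors.
    W maps stacked modality features to r shared abstractions; H gives each
    image's loading on those abstractions."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update loadings
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis
    return W, H

# Stack two hypothetical expert-derived modalities per image (columns = images).
rng = np.random.default_rng(2)
gaze = rng.random((6, 10))     # 6 gaze-derived features for 10 images
terms = rng.random((4, 10))    # 4 verbal-description features for the same images
V = np.vstack([gaze, terms])   # 10 stacked features x 10 images
W, H = nmf(V, r=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err < 0.5)               # the rank-3 factors capture most of the structure
```

    Because both modalities share the columns of H, each image ends up described by the same low-dimensional abstractions regardless of which modality the evidence came from, which is the fusion property the framework relies on.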