DeepBrainPrint: A Novel Contrastive Framework for Brain MRI Re-Identification
Recent advances in MRI have led to the creation of large datasets. With the
increase in data volume, it has become difficult to locate previous scans of
the same patient within these datasets (a process known as re-identification).
To address this issue, we propose an AI-powered medical imaging retrieval
framework called DeepBrainPrint, which is designed to retrieve brain MRI scans
of the same patient. Our framework is a semi-self-supervised contrastive deep
learning approach with three main innovations. First, we use a combination of
self-supervised and supervised paradigms to create an effective brain
fingerprint from MRI scans that can be used for real-time image retrieval.
Second, we use a special weighting function to guide the training and improve
model convergence. Third, we introduce new imaging transformations to improve
retrieval robustness in the presence of intensity variations (i.e. different
scan contrasts), and to account for age and disease progression in patients. We
tested DeepBrainPrint on a large dataset of T1-weighted brain MRIs from the
Alzheimer's Disease Neuroimaging Initiative (ADNI) and on a synthetic dataset
designed to evaluate retrieval performance with different image modalities. Our
results show that DeepBrainPrint outperforms previous methods, including simple
similarity metrics and more advanced contrastive deep learning frameworks.
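The abstract gives no implementation details, but the retrieval step it describes (matching a query scan's fingerprint against stored embeddings in real time) reduces to nearest-neighbor search in embedding space. A minimal cosine-similarity sketch, with toy 2-D vectors standing in for real MRI fingerprints:

```python
import numpy as np

def cosine_retrieve(query_emb, gallery_embs, top_k=3):
    """Return indices of the top_k gallery embeddings most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity of the query to every gallery item
    return np.argsort(-sims)[:top_k]  # best matches first

# Toy gallery of four "fingerprints"; items 0 and 2 point roughly the same way as the query
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
query = np.array([1.0, 0.05])
print(cosine_retrieve(query, gallery, top_k=2))  # -> [0 2]
```

At scale, the same search is typically served by an approximate nearest-neighbor index rather than a brute-force matrix product.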
Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated.
In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied mono-modal registration techniques. The method can be used for registering multi-modal images with full and partial data.
Next, a manifold learning-based scale-invariant global shape descriptor is introduced. The proposed descriptor exploits the capability of Laplacian Eigenmaps in dealing with high-dimensional data by introducing an exponential weighting scheme. This scheme eliminates the limitations tied to the well-known cotangent weighting scheme, namely its dependency on triangular mesh representations and on high intra-class quality of 3D models.
In the end, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model benefits from structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule using spectral features studied in the previous work combined with a point cloud-based deep learning network.
Extensive experiments have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. Advanced computational techniques combining manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely registration, classification, and detection of features of interest.
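As a rough illustration of the Laplacian Eigenmaps technique this thesis builds on (the actual descriptor construction is not specified here; a Gaussian kernel stands in for the exponential weighting scheme), a minimal NumPy sketch:

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=1, sigma=1.0):
    """Embed points X (n x d) via Laplacian Eigenmaps with Gaussian edge weights."""
    # Pairwise squared distances between all points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))  # exponential (heat-kernel) weighting
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                           # unnormalized graph Laplacian
    # Eigenvectors of the smallest nontrivial eigenvalues give the embedding
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:1 + n_components]

# Two well-separated 1-D clusters; same-cluster points land together in the embedding
X = np.array([[0.0], [0.1], [5.0], [5.1]])
emb = laplacian_eigenmaps(X, n_components=1)
```

The first eigenvector (constant, eigenvalue 0) is skipped; the next one is the Fiedler vector, which separates the two clusters by sign.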
Medical image retrieval for augmenting diagnostic radiology
Even though the use of medical imaging to diagnose patients is ubiquitous in clinical settings, its interpretation remains challenging for radiologists. Many factors make this interpretation task difficult, one of which is that medical images sometimes present clues that are subtle yet crucial for diagnosis. Conversely, similar clues can indicate multiple diseases, making it difficult to arrive at a definitive diagnosis. To help radiologists interpret medical images quickly and accurately, there is a need for a tool that can augment their diagnostic procedures and increase efficiency in their daily workflow. A general-purpose medical image retrieval system can be such a tool, as it allows them to search and retrieve similar, already-diagnosed cases and make comparative analyses that complement their diagnostic decisions. In this thesis, we contribute to developing such a system by proposing approaches to be integrated as modules of a single system, enabling it to handle the various information needs of radiologists and thus augment their diagnostic processes during the interpretation of medical images.
We have mainly studied the following retrieval approaches to handle radiologists' different information needs: i) Retrieval Based on Contents; ii) Retrieval Based on Contents, Patients' Demographics, and Disease Predictions; and iii) Retrieval Based on Contents and Radiologists' Text Descriptions. For the first study, we aimed to find an effective feature representation method to distinguish medical images by their semantics and modalities. To that end, we experimented with different representation techniques based on handcrafted methods (mainly texture features) and deep learning (deep features). Based on the experimental results, we propose an effective feature representation approach and deep learning architectures for learning and extracting medical image contents. For the second study, we present a multi-faceted method that complements image contents with patients' demographics and deep learning-based disease predictions, enabling it to identify similar cases accurately within the clinical context the radiologists seek.
For the last study, we propose a guided search method that integrates an image with a radiologist's text description to guide the retrieval process. This method ensures that the retrieved images are suitable for the comparative analysis used to confirm or rule out initial diagnoses (the differential diagnosis procedure). Furthermore, our method is based on a deep metric learning technique and outperforms traditional content-based approaches, which rely only on image features and thus sometimes retrieve irrelevant images.
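The thesis's guided search is built on deep metric learning, which is not reproduced here; as a simple intuition for how a text description can steer image retrieval, the sketch below blends two precomputed similarity score vectors (all numbers are hypothetical):

```python
import numpy as np

def guided_retrieve(img_sims, txt_sims, alpha=0.5, top_k=2):
    """Rank gallery items by a blend of image-content similarity and
    text-description similarity; alpha weights the image side."""
    combined = alpha * np.asarray(img_sims) + (1 - alpha) * np.asarray(txt_sims)
    return np.argsort(-combined)[:top_k]

img_sims = np.array([0.8, 0.3, 0.5])  # similarity of 3 gallery items to the query image
txt_sims = np.array([0.2, 0.9, 0.4])  # similarity to the radiologist's description
# Image content alone would rank item 0 first; the description promotes item 1
print(guided_retrieve(img_sims, txt_sims, alpha=0.5, top_k=2))  # -> [1 0]
```

Score fusion like this is only the surface behavior; a metric-learning approach instead learns a joint embedding in which such trade-offs are implicit.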
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed. Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Content-Based Medical Image Retrieval with Opponent Class Adaptive Margin Loss
Widespread use of medical imaging devices with digital storage has paved the
way for curation of substantial data repositories. Fast access to image samples
with similar appearance to suspected cases can help establish a consulting
system for healthcare professionals, and improve diagnostic procedures while
minimizing processing delays. However, manual querying of large data
repositories is labor intensive. Content-based image retrieval (CBIR) offers an
automated solution based on dense embedding vectors that represent image
features to allow quantitative similarity assessments. Triplet learning has
emerged as a powerful approach to recover embeddings in CBIR, although
traditional loss functions ignore the dynamic relationship between opponent
image classes. Here, we introduce a triplet-learning method for automated
querying of medical image repositories based on a novel Opponent Class Adaptive
Margin (OCAM) loss. OCAM uses a variable margin value that is updated
continually during the course of training to maintain optimally discriminative
representations. CBIR performance of OCAM is compared against state-of-the-art
loss functions for representational learning on three public databases
(gastrointestinal disease, skin lesion, lung disease). Comprehensive
experiments in each application domain demonstrate the superior performance of
OCAM against baselines. Comment: 10 pages, 6 figures
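OCAM's exact margin update rule is not given in the abstract. As a hedged sketch of the general idea (a triplet loss whose margin varies with the current separation between opponent classes), using a hypothetical centroid-distance rule:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin):
    """Standard triplet loss: keep the anchor closer to the positive than
    to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def adaptive_margin(class_a, class_b, scale=0.5):
    """Hypothetical variable margin: proportional to the current distance
    between the two class centroids, so it changes as embeddings move
    during training (a stand-in for OCAM's actual update)."""
    return scale * np.linalg.norm(class_a.mean(axis=0) - class_b.mean(axis=0))

# Toy embeddings for two opponent classes
class_a = np.array([[0.0, 0.0], [0.0, 2.0]])
class_b = np.array([[4.0, 0.0], [4.0, 2.0]])
m = adaptive_margin(class_a, class_b)               # 0.5 * 4.0 = 2.0
loss = triplet_loss(class_a[0], class_a[1], class_b[0], m)
```

Re-estimating the margin each step lets well-separated class pairs relax their constraint while confusable pairs keep a tighter one.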
Medical Image Retrieval Using Pretrained Embeddings
The wide range of imaging techniques and data formats used for medical
images makes accurate retrieval from image databases challenging.
Efficient retrieval systems are crucial in advancing medical research,
enabling large-scale studies and innovative diagnostic tools. Thus, addressing
the challenges of medical image retrieval is essential for the continued
enhancement of healthcare and research.
In this study, we evaluated the feasibility of employing four
state-of-the-art pretrained models for medical image retrieval at modality,
body region, and organ levels and compared the results of two similarity
indexing approaches. Since the employed networks take 2D images, we analyzed
the impacts of weighting and sampling strategies to incorporate 3D information
during retrieval of 3D volumes. We showed that medical image retrieval is
feasible using pretrained networks without any additional training or
fine-tuning steps. Using pretrained embeddings, we achieved a recall of 1 for
various tasks at modality, body region, and organ level. Comment: 8 pages, 3 figures, 4 tables
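The abstract does not detail its weighting strategies for folding 2D slice embeddings into a 3D volume representation; one plausible sketch (uniform or user-supplied per-slice weights, with toy vectors standing in for pretrained-network outputs):

```python
import numpy as np

def aggregate_slice_embeddings(slice_embs, weights=None):
    """Combine per-slice 2D embeddings into one volume-level embedding.
    weights: optional per-slice weights (e.g. emphasizing central slices);
    defaults to a uniform average."""
    slice_embs = np.asarray(slice_embs, dtype=float)
    if weights is None:
        weights = np.ones(len(slice_embs))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    volume_emb = (weights[:, None] * slice_embs).sum(axis=0)
    return volume_emb / np.linalg.norm(volume_emb)  # unit norm for cosine retrieval

# Two toy slice embeddings averaged into a single volume-level vector
volume = aggregate_slice_embeddings([[1.0, 0.0], [0.0, 1.0]])
```

Sampling a subset of informative slices before aggregation is the other strategy the study compares against whole-volume weighting.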