Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review
Deep learning has become a popular tool for medical image analysis, but the
limited availability of training data remains a major challenge, particularly
in the medical field where data acquisition can be costly and subject to
privacy regulations. Data augmentation techniques offer a solution by
artificially increasing the number of training samples, but these techniques
often produce limited and unconvincing results. To address this issue, a
growing number of studies have proposed the use of deep generative models to
generate more realistic and diverse data that conform to the true distribution
of the data. In this review, we focus on three types of deep generative models
for medical image augmentation: variational autoencoders, generative
adversarial networks, and diffusion models. We provide an overview of the
current state of the art in each of these models and discuss their potential
for use in different downstream tasks in medical imaging, including
classification, segmentation, and cross-modal translation. We also evaluate the
strengths and limitations of each model and suggest directions for future
research in this field. Our goal is to provide a comprehensive review of the
use of deep generative models for medical image augmentation and to highlight
the potential of these models for improving the performance of deep learning
algorithms in medical image analysis.
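The recipe shared by all three model families (sample from a latent prior, decode into a synthetic image, mix the result into the training set) can be outlined in a few lines. This is an illustrative sketch only; `toy_decoder` is a hypothetical stand-in for a trained VAE/GAN/diffusion decoder and is not from any reviewed work:

```python
import numpy as np

def augment_with_generator(decode, n_samples, latent_dim, rng=None):
    """Draw latent vectors from a standard normal prior and decode them
    into synthetic training samples -- the common sampling recipe behind
    VAE, GAN, and (after reverse diffusion) diffusion-model augmentation."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((n_samples, latent_dim))
    return np.stack([decode(zi) for zi in z])

def toy_decoder(z):
    # Hypothetical stand-in: maps a latent vector to a tiny 8x8 "image".
    # A real decoder would be a trained network.
    return np.outer(z, z)

synthetic = augment_with_generator(toy_decoder, n_samples=16, latent_dim=8, rng=0)
real_batch = np.zeros((4, 8, 8))                  # placeholder for real images
training_set = np.concatenate([real_batch, synthetic])  # augmented set
```

The downstream classifier or segmenter then trains on `training_set` instead of `real_batch` alone.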
Development of Quantitative Bone SPECT Analysis Methods for Metastatic Bone Disease
Prostate cancer is one of the most prevalent types of cancer in males in the United States. Bone is a common site of metastases for metastatic prostate cancer. However, bone metastases are often considered “unmeasurable” using standard anatomic imaging and the RECIST 1.1 criteria. As a result, response to therapy is often suboptimally evaluated by visual interpretation of planar bone scintigraphy, with response criteria based on the presence or absence of new lesions. With the commercial availability of quantitative single-photon emission computed tomography (SPECT) methods, it is now feasible to establish quantitative metrics of the therapy response of skeletal metastases. Quantitative bone SPECT (QBSPECT) may provide the ability to estimate bone lesion uptake, volume, and the number of lesions more accurately than planar imaging. However, the accuracy of activity quantification in QBSPECT relies heavily on the precision with which bone metastases and bone structures are delineated. In this research, we aim to develop automated image segmentation methods for fast and accurate delineation of bone and bone metastases in QBSPECT. To begin, we developed registration methods to generate a dataset of realistic and anatomically varying computerized phantoms for use in QBSPECT simulations. Using these simulations, we developed supervised computer-automated segmentation methods to minimize intra- and inter-observer variation in delineating bone metastases. This project provides accurate segmentation techniques for QBSPECT and paves the way for the development of QBSPECT methods for assessing the therapy response of bone metastases.
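The intra- and inter-observer variation in delineation mentioned above is commonly quantified with the Dice similarity coefficient between two observers' masks. The metric below is standard, but the masks are synthetic examples, not study data:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|); 1.0 means perfect agreement."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two observers delineating the same lesion on one slice
obs1 = np.zeros((64, 64), dtype=bool); obs1[20:40, 20:40] = True
obs2 = np.zeros((64, 64), dtype=bool); obs2[22:42, 22:42] = True
agreement = dice(obs1, obs2)
```

An automated segmenter whose Dice against each observer matches the observers' Dice against each other is performing at the level of human inter-observer agreement.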
Segment Anything Model for Medical Images?
The Segment Anything Model (SAM) is the first foundation model for general
image segmentation. It introduced a novel promptable segmentation task,
enabling zero-shot image segmentation with a pre-trained model via two main
modes: automatic (everything) and manual prompting. SAM has achieved impressive
results on various natural image segmentation tasks. However, medical image
segmentation (MIS) is more challenging due to the complex modalities, fine
anatomical structures, uncertain and complex object boundaries, and wide-range
object scales. Meanwhile, zero-shot and efficient MIS can greatly reduce the
annotation time and boost the development of medical image analysis. Hence, SAM
seems to be a potential tool and its performance on large medical datasets
should be further validated. We collected and sorted 52 open-source datasets
and built a large medical segmentation dataset with 16 modalities, 68 objects,
and 553K slices. We conducted a comprehensive analysis of different SAM testing
strategies on the so-called COSMOS 553K dataset. Extensive experiments validate
that SAM performs better with manual hints like points and boxes for object
perception in medical images, leading to better performance in prompt mode
compared to everything mode. Additionally, SAM shows remarkable performance in
some specific objects and modalities, but is imperfect or even totally fails in
other situations. Finally, we analyze the influence of different factors (e.g.,
the Fourier-based boundary complexity and size of the segmented objects) on
SAM's segmentation performance. Extensive experiments validate that SAM's
zero-shot segmentation capability is not sufficient to ensure its direct
application to MIS.
Comment: 23 pages, 14 figures, 12 tables
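As a rough illustration of how a Fourier-based boundary complexity measure can behave, the sketch below scores the share of a closed contour's spectral energy above the first few harmonics. This is an assumed proxy; the paper's exact definition may differ:

```python
import numpy as np

def boundary_complexity(contour, low_harmonics=3):
    """Fraction of a closed contour's Fourier spectral energy above the
    first few harmonics. Smooth, near-circular boundaries score close to
    0; jagged boundaries score higher."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    z = z - z.mean()                         # remove translation
    spec = np.abs(np.fft.fft(z)) ** 2
    k = np.abs(np.fft.fftfreq(len(z), d=1.0 / len(z)))  # harmonic number
    low = spec[(k >= 1) & (k <= low_harmonics)].sum()
    high = spec[k > low_harmonics].sum()
    return high / (low + high)

t = 2 * np.pi * np.arange(64) / 64
circle = np.column_stack([np.cos(t), np.sin(t)])      # smooth boundary
r = 0.75 + 0.25 * (-1) ** np.arange(64)               # jagged boundary
star = np.column_stack([r * np.cos(t), r * np.sin(t)])
smooth_score = boundary_complexity(circle)
jagged_score = boundary_complexity(star)
```

Under this kind of measure, anatomical structures with convoluted boundaries would score high, matching the paper's observation that boundary complexity correlates with SAM's failure cases.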
Proceedings of the 8th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2023)
This volume gathers the papers presented at the Detection and Classification of Acoustic Scenes and Events 2023 Workshop (DCASE2023), held in Tampere, Finland, on 21–22 September 2023.
Cultivate Quantitative Magnetic Resonance Imaging Methods to Measure Markers of Health and Translate to Large Scale Cohort Studies
Magnetic Resonance Imaging (MRI) is an indispensable tool in healthcare and research, with a growing demand for its services. The appeal of MRI stems from its non-ionizing nature, its ability to generate high-resolution images of internal organs and structures without invasive procedures, and its capacity to provide quantitative assessments of tissue properties such as ectopic fat, body composition, and organ volume, all without long-term side effects. Nine published papers are submitted which show the cultivation of quantitative measures of ectopic fat within the liver and pancreas using MRI, and the process of validating whole-body composition and organ volume measurements. All these techniques have been translated into large-scale studies to improve health measurements in large population cohorts; this translation, including the use of artificial intelligence, is described. Additionally, an evaluation accompanies these published studies, appraising the evolution of these quantitative MRI techniques from their conception to their application in large cohort studies. Finally, this appraisal provides a summary of future work on crowdsourcing of ground-truth training data to facilitate its use in wider applications of artificial intelligence. In conclusion, this body of work presents a portfolio of evidence to fulfil the requirements of a PhD by published works at the University of Salford.
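One of the quantitative measures described, ectopic fat, is often reported as the fat fraction computed voxel-wise from water/fat-separated (Dixon) MRI. A minimal sketch with made-up signal values, not the thesis's actual processing pipeline:

```python
import numpy as np

def fat_fraction(water, fat, eps=1e-9):
    """Voxel-wise fat fraction from water/fat-separated (Dixon) MRI:
    FF = F / (W + F), reported as a percentage. eps guards against
    division by zero in signal-free background voxels."""
    return 100.0 * fat / (water + fat + eps)

# Made-up water/fat signal magnitudes for a 2x2 liver ROI
water = np.array([[80.0, 60.0], [90.0, 50.0]])
fat   = np.array([[20.0, 40.0], [10.0, 50.0]])
ff = fat_fraction(water, fat)
median_liver_ff = np.median(ff)   # summary statistic over the organ ROI
```

Cohort studies typically reduce the voxel-wise map to a per-organ summary such as the median, which is what the sketch computes last.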
BRONCO: Automated modelling of the bronchovascular bundle using the Computed Tomography Images
Segmentation of the bronchovascular bundle within the lung parenchyma is a
key step for the proper analysis and planning of many pulmonary diseases. It
might also be considered the preprocessing step when the goal is to segment the
nodules from the lung parenchyma. We propose a segmentation pipeline for the
bronchovascular bundle based on Computed Tomography (CT) images, returning
either binary or labelled masks of vessels and bronchi situated in the lung
parenchyma. The method consists of two modules: modelling of the bronchial
tree and modelling of the vessels. Both share a similar core pipeline:
determination of the initial perimeter with a Gaussian mixture model (GMM),
skeletonization, and hierarchical
analysis of the created graph. We tested our method on both low-dose CT and
standard-dose CT, with various pathologies, reconstructed with various slice
thicknesses, and acquired from various machines. We conclude that the method is
invariant with respect to the origin and parameters of the CT series. Our
pipeline is best suited for studies of healthy patients, patients with lung
nodules, and patients with emphysema.
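The GMM step that determines the initial perimeter can be illustrated with a minimal two-component EM fit to voxel intensities. This is a generic sketch on synthetic Hounsfield-unit data, not the authors' implementation:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """EM for a two-component 1D Gaussian mixture -- a minimal stand-in
    for a GMM-based intensity split between dark lung parenchyma and
    brighter bronchovascular structures."""
    mu = np.quantile(x, [0.05, 0.95])      # well-separated initial means
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel
        d = x[:, None] - mu[None, :]
        p = pi * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d ** 2).sum(axis=0) / nk
    return pi, mu, var

# Synthetic HU intensities: dark parenchyma vs. brighter vessels/airway walls
rng = np.random.default_rng(1)
hu = np.concatenate([rng.normal(-800, 30, 2000),   # lung parenchyma
                     rng.normal(0, 50, 500)])      # vessels and bronchi walls
pi, mu, var = fit_gmm_1d(hu)
threshold = mu.mean()   # a simple initial perimeter: between the two means
```

Voxels above `threshold` form the initial mask, which downstream steps (skeletonization, graph analysis) would then refine and label.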
Deep Learning based Novel Anomaly Detection Methods for Diabetic Retinopathy Screening
Programa Oficial de Doutoramento en Computación. 5009V01
Computer-Aided Screening (CAS) systems are gaining popularity in disease diagnosis. Modern CAS systems exploit data-driven machine learning algorithms, including supervised and unsupervised methods.
In medical imaging, annotating pathological samples is much harder and more time-consuming than annotating healthy samples. Therefore, there is typically an abundance of healthy samples and a scarcity of annotated and labelled pathological samples. Unsupervised anomaly detection algorithms
can be used to develop CAS systems from the widely available healthy samples, especially when a disease/no-disease decision is sufficient for screening.
This thesis proposes unsupervised machine learning methodologies for anomaly detection in retinal fundus images. A novel patch-based image reconstructor architecture for DR detection is presented that addresses the shortcomings of standard autoencoder-based reconstructors.
Furthermore, a full-size image-based anomaly map generation methodology is presented, in which potential DR lesions can be visualized at the pixel level. Afterwards, a novel methodology is proposed to extend the patch-based architecture to a fully-convolutional
architecture for one-shot full-size image reconstruction. Finally, a novel methodology for supervised DR classification is proposed that utilizes the anomaly maps.
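The core idea behind reconstruction-based anomaly maps (reconstruct each patch with a model trained only on healthy images, then score anomalies by the reconstruction error) can be sketched as follows. The mean-returning `reconstruct` below is a stand-in for the learned architectures:

```python
import numpy as np

def anomaly_map(image, reconstruct, patch=8):
    """Pixel-level anomaly map from a reconstruction-based detector:
    reconstruct each patch and score every pixel by its squared
    reconstruction error."""
    h, w = image.shape
    amap = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            p = image[i:i + patch, j:j + patch]
            amap[i:i + patch, j:j + patch] = (p - reconstruct(p)) ** 2
    return amap

# Stand-in "reconstructor": returns the patch mean, i.e. it can only
# reproduce smooth (healthy-like) content, so a bright lesion pixel
# yields a large reconstruction error.
fundus = np.full((32, 32), 0.2)
fundus[10, 10] = 1.0                       # simulated lesion pixel
amap = anomaly_map(fundus, lambda p: p.mean())
lesion_found = np.unravel_index(amap.argmax(), amap.shape)
```

A model trained on healthy retinas reconstructs healthy texture well and lesions poorly, so thresholding `amap` localizes candidate DR lesions at the pixel level.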
A proposed methodology for detecting the malignant potential of pulmonary nodules in sarcoma using computed tomographic imaging and artificial intelligence-based models
The presence of lung metastases in patients with primary malignancies is an important criterion for treatment management and prognostication. Computed tomography (CT) of the chest is the preferred method to detect lung metastasis. However, CT has limited efficacy in differentiating metastatic nodules from benign nodules (e.g., granulomas due to tuberculosis), especially at early stages (<5 mm). There is also significant subjectivity in making this distinction, leading to frequent CT follow-ups and additional radiation exposure, along with financial and emotional burden to patients and families. Even 18F-fluoro-deoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) is not always confirmatory for this clinical problem. While pathological biopsy is the gold standard to demonstrate malignancy, invasive sampling of small lung nodules is often not clinically feasible. Currently, there is no non-invasive imaging technique that can reliably characterize lung metastases. The lung is one of the favored sites of metastasis in sarcomas. Hence, patients with sarcomas, especially from tuberculosis-prevalent developing countries, can provide an ideal platform to develop a model to differentiate lung metastases from benign nodules. To overcome the lack of optimal specificity of CT in detecting pulmonary metastasis, a novel artificial intelligence (AI)-based protocol is proposed, utilizing a combination of radiological and clinical biomarkers to identify lung nodules and characterize them as benign or metastatic. This protocol includes a retrospective cohort of nearly 2,000–2,250 sample nodules (from at least 450 patients) for training and testing and an ambispective cohort of nearly 500 nodules (from 100 patients; 50 patients each from the retrospective and prospective cohorts) for validation. Ground-truth annotation of lung nodules will be performed using an in-house-built segmentation tool.
Ground-truth labeling of lung nodules (metastatic/benign) will be performed based on histopathological results or baseline and/or follow-up radiological findings, along with the clinical outcome of the patient. Optimal methods for data handling and statistical analysis are included to develop a robust protocol for the early detection and classification of pulmonary metastasis at baseline and at follow-up, and for the identification of associated potential clinical and radiological markers.
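Validation of such a classifier would typically report sensitivity and specificity for the metastatic-vs-benign decision. A minimal sketch on hypothetical labels, not study data:

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for a binary metastatic-vs-benign
    nodule classifier (1 = metastatic, 0 = benign)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # metastatic, called metastatic
    tn = np.sum(~y_true & ~y_pred)  # benign, called benign
    fn = np.sum(y_true & ~y_pred)   # metastatic, missed
    fp = np.sum(~y_true & y_pred)   # benign, over-called
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predictions on a small validation set of nodules
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sensitivity, specificity = sens_spec(truth, pred)
```

In a screening setting like this one, the trade-off usually favors sensitivity (missing a metastasis is costlier than an extra follow-up scan), which is why both numbers are reported separately rather than collapsed into accuracy.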