USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets
Prostate cancer is among the most common malignant tumors in men, but prostate
Magnetic Resonance Imaging (MRI) analysis remains challenging. Besides whole
prostate gland segmentation, the capability to differentiate between the blurry
boundary of the Central Gland (CG) and Peripheral Zone (PZ) can lead to
differential diagnosis, since tumor frequency and severity differ in these
regions. To tackle the prostate zonal segmentation task, we propose a novel
Convolutional Neural Network (CNN), called USE-Net, which incorporates
Squeeze-and-Excitation (SE) blocks into U-Net. In particular, the SE blocks are
added after every Encoder block (Enc USE-Net) or after every Encoder-Decoder block (Enc-Dec
USE-Net). This study evaluates the generalization ability of CNN-based
architectures on three T2-weighted MRI datasets, each one consisting of a
different number of patients and heterogeneous image characteristics, collected
by different institutions. The following mixed scheme is used for
training/testing: (i) training on either each individual dataset or multiple
prostate MRI datasets and (ii) testing on all three datasets with all possible
training/testing combinations. USE-Net is compared against three
state-of-the-art CNN-based architectures (i.e., U-Net, pix2pix, and Mixed-Scale
Dense Network), along with a semi-automatic continuous max-flow model. The
results show that training on the union of the datasets generally outperforms
training on each dataset separately, allowing for both intra-/cross-dataset
generalization. Enc USE-Net shows good overall generalization under any
training condition, while Enc-Dec USE-Net remarkably outperforms the other
methods when trained on all datasets. These findings reveal that the SE blocks'
adaptive feature recalibration provides excellent cross-dataset generalization
when testing is performed on samples of the datasets used during training.
Comment: 44 pages, 6 figures, accepted to Neurocomputing. Co-first authors:
Leonardo Rundo and Changhee Ha
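The adaptive feature recalibration that SE blocks perform (squeeze by global average pooling, excitation by a small bottleneck of fully connected layers, then channel-wise rescaling) can be sketched in a few lines. The following is a minimal, framework-free illustration in plain Python, not the authors' implementation; the weight matrices `w1` and `w2` are hypothetical stand-ins for the learned excitation layers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation recalibration for a list of C feature maps.

    feature_maps: list of C maps, each a list of rows (H x W floats).
    w1: C x C_r matrix (squeeze descriptor -> bottleneck).
    w2: C_r x C matrix (bottleneck -> per-channel gate).
    Returns the channel-wise rescaled feature maps.
    """
    C = len(feature_maps)
    # Squeeze: global average pooling, one descriptor per channel
    z = [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
         for fmap in feature_maps]
    # Excitation: bottleneck FC + ReLU, then FC + sigmoid -> gates in (0, 1)
    hidden = [max(0.0, sum(z[c] * w1[c][j] for c in range(C)))
              for j in range(len(w1[0]))]
    gates = [sigmoid(sum(hidden[j] * w2[j][c] for j in range(len(hidden))))
             for c in range(C)]
    # Scale: multiply each channel by its learned gate
    return [[[v * gates[c] for v in row] for row in feature_maps[c]]
            for c in range(C)]
```

Because the gates lie in (0, 1) and are shared across each channel's spatial positions, the block can only reweight channels, which is what makes it cheap to insert after every encoder or decoder stage.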
Applications of artificial intelligence to prostate multiparametric MRI (mpMRI): Current and emerging trends
Prostate carcinoma is one of the most prevalent cancers worldwide. Multiparametric magnetic resonance imaging (mpMRI) is a non-invasive tool that can improve prostate lesion detection, classification, and volume quantification. Machine learning (ML), a branch of artificial intelligence, can rapidly and accurately analyze mpMRI images. ML could provide better standardization and consistency in identifying prostate lesions and enhance prostate carcinoma management. This review summarizes ML applications to prostate mpMRI and focuses on prostate organ segmentation, lesion detection and segmentation, and lesion characterization. A literature search was conducted to find studies that have applied ML methods to prostate mpMRI. To date, prostate organ segmentation and volume approximation have been well executed using various ML techniques. Prostate lesion detection and segmentation are much more challenging tasks for ML and were attempted in several studies. They largely remain unsolved problems due to data scarcity and the limitations of current ML algorithms. By contrast, prostate lesion characterization has been successfully completed in several studies because of better data availability. Overall, ML is well situated to become a tool that enhances radiologists' accuracy and speed.
Automatic Extra-Axial Cerebrospinal Fluid
Automatic Extra-Axial Cerebrospinal Fluid (Auto EACSF) is an open-source, interactive tool for automatic computation of brain extra-axial cerebrospinal fluid (EA-CSF) in magnetic resonance image (MRI) scans of infants. Elevated extra-axial fluid volume is a possible biomarker for Autism Spectrum Disorder (ASD). Auto EACSF aims to automatically calculate the volume of EA-CSF and could therefore be used for early diagnosis of autism. It is a user-friendly Qt application that is run through a GUI, but it also provides an advanced-use mode that allows individual steps to be executed on their own via Python and XML scripts.
Bachelor of Science
Analysis of deep learning approaches for automated prostate delineation and segmentation: a literature review
Background. Delineation of the prostate boundaries represents the initial step in assessing the state of the whole organ. It is mainly performed manually, which takes a long time and depends directly on the experience of the radiologist. Automated prostate delineation can be carried out by various approaches, including artificial intelligence and its subdisciplines, machine and deep learning.
Aim. To identify the most accurate deep learning-based methods for prostate segmentation on multiparametric magnetic resonance images.
Materials and methods. The search was conducted in July 2022 in the PubMed database with a special clinical query (((AI) OR (machine learning)) OR (deep learning)) AND (prostate) AND (MRI). The inclusion criteria were availability of the full article, a publication date no more than five years prior to the time of the search, and availability of a quantitative assessment of segmentation accuracy via the Dice similarity coefficient (DSC).
Results. The search returned 521 articles, but only 24 papers, describing 33 different deep learning networks for prostate segmentation, were selected for the final review. The median number of cases used for artificial intelligence training was 100, with a range from 25 to 365. The optimal DSC threshold (0.9), at which automated segmentation is only slightly inferior to manual delineation, was achieved in 21 studies.
Conclusion. Despite significant achievements in the development of deep learning-based prostate segmentation algorithms, there are still problems and limitations that should be resolved before artificial intelligence can be implemented in clinical practice.
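The Dice similarity coefficient used as the accuracy criterion throughout the review has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|), for a predicted mask A and a reference mask B. A minimal sketch for binary masks:

```python
def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as equal-length flat sequences of 0/1 values."""
    assert len(mask_a) == len(mask_b)
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly
    return 1.0 if total == 0 else 2.0 * intersection / total
```

A DSC of 0.9, the review's threshold, means the overlap is 90% of the average mask size; identical masks give 1.0 and disjoint masks give 0.0.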
On unsupervised methods for medical image segmentation: Investigating classic approaches in breast cancer DCE-MRI
Unsupervised segmentation techniques, which do not require labeled data for training and can be more easily integrated into the clinical routine, represent a valid solution especially from a clinical feasibility perspective. Indeed, large-scale annotated datasets are not always available, undermining their immediate implementation and use in the clinic. Breast cancer is the most common cause of cancer death in women worldwide. In this study, breast lesion delineation in Dynamic Contrast Enhanced MRI (DCE-MRI) series was addressed by means of four popular unsupervised segmentation approaches: Split-and-Merge combined with Region Growing (SMRG), k-means, Fuzzy C-Means (FCM), and spatial FCM (sFCM). They represent well-established pattern recognition techniques that are still widely used in clinical research. Starting from the basic versions of these segmentation approaches, during our analysis, we identified the shortcomings of each of them, proposing improved versions, as well as developing ad hoc pre- and post-processing steps. The obtained experimental results, in terms of area-based—namely, Dice Index (DI), Jaccard Index (JI), Sensitivity, Specificity, False Positive Ratio (FPR), False Negative Ratio (FNR)—and distance-based metrics—Mean Absolute Distance (MAD), Maximum Distance (MaxD), Hausdorff Distance (HD)—encourage the use of unsupervised machine learning techniques in medical image segmentation. In particular, fuzzy clustering approaches (namely, FCM and sFCM) achieved the best performance. In fact, for area-based metrics, they obtained DI = 78.23% ± 6.50 (sFCM), JI = 65.90% ± 8.14 (sFCM), sensitivity = 77.84% ± 8.72 (FCM), specificity = 87.10% ± 8.24 (sFCM), FPR = 0.14 ± 0.12 (sFCM), and FNR = 0.22 ± 0.09 (sFCM). Concerning distance-based metrics, they obtained MAD = 1.37 ± 0.90 (sFCM), MaxD = 4.04 ± 2.87 (sFCM), and HD = 2.21 ± 0.43 (FCM). 
These experimental findings suggest that further research into advanced fuzzy logic techniques specifically tailored to medical image segmentation would be worthwhile.
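The fuzzy clustering approaches that performed best (FCM and sFCM) alternate two updates: soft membership degrees computed from inverse distance ratios, and centroids computed as fuzzy weighted means. The following is a minimal 1-D FCM sketch for illustration only; it omits the spatial regularization term of sFCM and is not the paper's implementation.

```python
def fcm(points, centroids, m=2.0, iters=20):
    """Minimal fuzzy c-means on 1-D data.
    points: list of floats; centroids: initial cluster centres.
    m: fuzzifier exponent (> 1). Returns (centroids, memberships)."""
    c = len(centroids)
    u = []
    for _ in range(iters):
        # Membership update: u_ik from inverse distance ratios
        u = []
        for x in points:
            d = [abs(x - v) or 1e-12 for v in centroids]  # avoid /0
            row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                             for j in range(c))
                   for i in range(c)]
            u.append(row)
        # Centroid update: fuzzy weighted mean of the points
        centroids = [sum(u[k][i] ** m * points[k]
                         for k in range(len(points))) /
                     sum(u[k][i] ** m for k in range(len(points)))
                     for i in range(c)]
    return centroids, u
```

In image segmentation the "points" are voxel intensities (or feature vectors), and the defuzzified memberships yield the lesion mask; sFCM additionally smooths each voxel's memberships using its spatial neighbours.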
Biomedical Image Processing and Classification
Biomedical image processing is an interdisciplinary field involving a variety of disciplines, e.g., electronics, computer science, physics, mathematics, physiology, and medicine. Several imaging techniques have been developed, providing many approaches to the study of the human body. Biomedical image processing is finding an increasing number of important applications in, for example, the study of the internal structure or function of an organ and the diagnosis or treatment of a disease. If associated with classification methods, it can support the development of computer-aided diagnosis (CAD) systems, which could help medical doctors refine their clinical picture.
Development of registration methods for cardiovascular anatomy and function using advanced 3T MRI, 320-slice CT and PET imaging
Different medical imaging modalities provide complementary anatomical and
functional information. One increasingly important use of such information is in
the clinical management of cardiovascular disease. Multi-modality data is helping
improve diagnosis accuracy, and individualize treatment. The Clinical Research
Imaging Centre at the University of Edinburgh, has been involved in a number
of cardiovascular clinical trials using longitudinal computed tomography (CT) and
multi-parametric magnetic resonance (MR) imaging. The critical image processing
technique that combines the information from all these different datasets is known
as image registration, which is the topic of this thesis. Image registration, especially
multi-modality and multi-parametric registration, remains a challenging field in
medical image analysis. The new registration methods described in this work were
all developed in response to genuine challenges in on-going clinical studies. These
methods have been evaluated using data from these studies.
In order to gain an insight into the building blocks of image registration methods,
the thesis begins with a comprehensive literature review of state-of-the-art algorithms.
This is followed by a description of the first registration method I developed to help
track inflammation in aortic abdominal aneurysms. It registers multi-modality and
multi-parametric images, with new contrast agents. The registration framework uses a
semi-automatically generated region of interest around the aorta. The aorta is aligned
based on a combination of the centres of the regions of interest and intensity matching.
The method achieved sub-voxel accuracy.
The second clinical study involved cardiac data. The first framework failed to
register many of these datasets, because the cardiac data suffers from a common
artefact of magnetic resonance images, namely intensity inhomogeneity. Thus I
developed a new preprocessing technique that is able to correct the artefacts in the
functional data using data from the anatomical scans. The registration framework,
with this preprocessing step and new particle swarm optimizer, achieved significantly
improved registration results on the cardiac data, and was validated quantitatively
using neuro images from a clinical study of neonates. Although on average
the new framework achieved accurate results, premature convergence of the
optimizer remained a common problem when processing data corrupted by severe
artefacts and noise. To overcome this, I devised a new optimization method that
achieves more robust convergence by encoding prior knowledge of registration. The
registration results from this new registration-oriented optimizer are more accurate
than other general-purpose particle swarm optimization methods commonly applied
to registration problems.
In summary, this thesis describes a series of novel developments to an image
registration framework, aimed to improve accuracy, robustness and speed. The
resulting registration framework was applied to, and validated by, different types of
images taken from several ongoing clinical trials. In the future, this framework could
be extended to include more diverse transformation models, aided by new machine
learning techniques. It may also be applied to the registration of other types and
modalities of imaging data.
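At its core, intensity-based registration searches a space of transformations for the one optimising a similarity metric between the fixed and moving images. As a toy illustration of that loop (a 1-D sketch, not the thesis's multi-modality framework or its particle swarm optimizer), here is an exhaustive search over integer translations minimising the mean squared intensity difference:

```python
def register_translation(fixed, moving, max_shift=5):
    """Brute-force 1-D rigid registration: try every integer shift s and
    keep the one minimising the mean squared intensity difference between
    fixed[i] and moving[i + s] over the overlapping samples."""
    best_shift, best_cost = 0, float("inf")
    n = len(fixed)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(fixed[i], moving[i + s]) for i in range(n)
                 if 0 <= i + s < n]
        if not pairs:
            continue
        cost = sum((f - m) ** 2 for f, m in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```

Real frameworks replace the exhaustive loop with a continuous optimizer (gradient descent, or the particle swarm variants developed in the thesis), the translation with affine or deformable transforms, and the squared-difference cost with multi-modality metrics such as mutual information.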
The impact of arterial input function determination variations on prostate dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic modeling: a multicenter data analysis challenge, part II
This multicenter study evaluated the effect of variations in arterial input function (AIF) determination on pharmacokinetic (PK) analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data using the shutter-speed model (SSM). Data acquired from eleven prostate cancer patients were shared among nine centers. Each center used a site-specific method to measure the individual AIF from each data set and submitted the results to the managing center. These AIFs, their reference tissue-adjusted variants, and a literature population-averaged AIF, were used by the managing center to perform SSM PK analysis to estimate Ktrans (volume transfer rate constant), ve (extravascular, extracellular volume fraction), kep (efflux rate constant), and τi (mean intracellular water lifetime). All other variables, including the definition of the tumor region of interest and precontrast T1 values, were kept the same to evaluate parameter variations caused by variations in only the AIF. Considerable PK parameter variations were observed with within-subject coefficient of variation (wCV) values of 0.58, 0.27, 0.42, and 0.24 for Ktrans, ve, kep, and τi, respectively, using the unadjusted AIFs. Use of the reference tissue-adjusted AIFs reduced variations in Ktrans and ve (wCV = 0.50 and 0.10, respectively), but had smaller effects on kep and τi (wCV = 0.39 and 0.22, respectively). kep is less sensitive to AIF variation than Ktrans, suggesting it may be a more robust imaging biomarker of prostate microvasculature. With low sensitivity to AIF uncertainty, the SSM-unique τi parameter may have advantages over the conventional PK parameters in a longitudinal study
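The within-subject coefficient of variation quoted above summarizes how much the nine centres' parameter estimates disagree for the same patient. Under one common definition (the root mean square of each subject's SD/mean across repeated measurements; an assumption here, since the paper's exact pooling formula is not restated in this abstract), it can be computed as:

```python
import math

def wcv(estimates_per_subject):
    """Within-subject coefficient of variation, pooled as the root mean
    square of each subject's (sample SD / mean) across repeated estimates
    (here: the same PK parameter estimated by different centres)."""
    cvs_sq = []
    for vals in estimates_per_subject:
        n = len(vals)
        mean = sum(vals) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / (n - 1))
        cvs_sq.append((sd / mean) ** 2)
    return math.sqrt(sum(cvs_sq) / len(cvs_sq))
```

By this measure, a wCV of 0.58 for Ktrans versus 0.27 for ve means the AIF choice perturbs Ktrans roughly twice as strongly, which is the basis for preferring the less AIF-sensitive kep and τi as biomarkers.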