180 research outputs found
Robust semi-automated path extraction for visualising stenosis of the coronary arteries
Computed tomography angiography (CTA) is useful for diagnosing and planning treatment of heart disease. However, contrast agent in surrounding structures (such as the aorta and left ventricle) makes 3-D visualisation of the coronary arteries difficult. This paper presents a composite method employing segmentation and volume rendering to overcome this issue. A key contribution is a novel Fast Marching minimal path cost function for vessel centreline extraction. The resultant centreline is used to compute a measure of vessel lumen, which indicates the degree of stenosis (narrowing of a vessel). Two volume visualisation techniques are presented which utilise the segmented arteries and lumen measure. The system is evaluated and demonstrated using synthetic and clinically obtained datasets.
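The minimal-path idea behind centreline extraction can be sketched with an ordinary Dijkstra search over a cost image. Note this is an illustrative stand-in: the paper's contribution is a novel Fast Marching cost function, whereas the toy cost grid and 4-connected Dijkstra below are assumptions chosen only to show the principle (low cost inside the vessel attracts the path).

```python
import heapq

def minimal_path(cost, start, end):
    """Dijkstra shortest path on a 2-D cost grid.

    `cost[r][c]` is the price of stepping onto cell (r, c); the returned
    path approximates a vessel centreline when cost is low inside the
    vessel and high outside (a simple stand-in for a Fast Marching solver).
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk predecessors back from `end` to recover the centreline.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy cost image: a low-cost "vessel" along the middle row.
cost = [
    [9, 9, 9, 9, 9],
    [1, 1, 1, 1, 1],
    [9, 9, 9, 9, 9],
]
print(minimal_path(cost, (1, 0), (1, 4)))
```

The path follows the low-cost row, which is exactly the behaviour a vessel-attracting cost function is designed to induce.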
Automated quantification and evaluation of motion artifact on coronary CT angiography images
Purpose
This study developed and validated a Motion Artifact Quantification algorithm to automatically quantify the severity of motion artifacts on coronary computed tomography angiography (CCTA) images. The algorithm was then used to develop a Motion IQ Decision method to automatically identify whether a CCTA dataset is of sufficient diagnostic image quality or requires further correction.
Method
The developed Motion Artifact Quantification algorithm includes steps to identify the right coronary artery (RCA) regions of interest (ROIs), segment the vessel and shading artifacts, and calculate the motion artifact score (MAS) metric. The segmentation algorithms were verified against ground-truth manual segmentations, and additionally by comparing the MAS calculated from ground-truth segmentations with that from the algorithm-generated segmentations. The Motion IQ Decision algorithm first identifies slices with unsatisfactory image quality using a MAS threshold. The algorithm then uses an artifact-length threshold to determine whether the degraded vessel segment is large enough to render the dataset nondiagnostic. An observer study on 30 clinical CCTA datasets was performed to obtain ground-truth decisions on whether the datasets were of sufficient image quality. A five-fold cross-validation was used to identify the thresholds and to evaluate the Motion IQ Decision algorithm.
Results
The automated segmentation algorithms in the Motion Artifact Quantification algorithm resulted in Dice coefficients of 0.84 for the segmented vessel regions and 0.75 for the segmented shading artifact regions. The MAS calculated using the automated algorithm was within 10% of the values obtained using ground-truth segmentations. The MAS and artifact-length thresholds were determined by ROC analysis to be 0.6 and 6.25 mm, respectively, in all folds. The Motion IQ Decision algorithm demonstrated 100% sensitivity, 66.7% ± 27.9% specificity, and a total accuracy of 86.7% ± 12.5% for identifying datasets in which the RCA required correction. It demonstrated 91.3% sensitivity, 71.4% specificity, and a total accuracy of 86.7% for identifying CCTA datasets that need correction for any of the three main vessels.
Conclusion
The Motion Artifact Quantification algorithm calculated accurate motion artifact scores.
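The Dice coefficients reported above measure the overlap between automated and ground-truth masks; the metric itself is standard and can be computed directly from binary masks (a generic sketch, not the study's exact implementation):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Masks are flat sequences of 0/1 labels; Dice = 2|A∩B| / (|A| + |B|).
    A value of 1.0 means perfect overlap, 0.0 means none.
    """
    a = sum(mask_a)
    b = sum(mask_b)
    intersection = sum(x * y for x, y in zip(mask_a, mask_b))
    if a + b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / (a + b)

# Toy example: ground-truth vessel mask vs. an automated segmentation.
truth = [0, 1, 1, 1, 0, 0]
auto  = [0, 1, 1, 0, 1, 0]
print(dice_coefficient(truth, auto))  # 2*2 / (3+3) ≈ 0.667
```

In practice the masks are 2-D or 3-D arrays flattened per region; the formula is unchanged.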
Comparative evaluation of instrument segmentation and tracking methods in minimally invasive surgery
Intraoperative segmentation and tracking of minimally invasive instruments is
a prerequisite for computer- and robotic-assisted surgery. Since additional
hardware such as tracking systems or robot encoders is cumbersome and lacks
accuracy, surgical vision is evolving as a promising technique to segment and
track the instruments using only the endoscopic images. However, what has been
missing so far is a common image data set for consistent evaluation and
benchmarking of algorithms against each other. The paper presents a comparative
validation study of different vision-based methods for instrument segmentation
and tracking in the context of robotic as well as conventional laparoscopic
surgery. The contribution of the paper is twofold: we introduce a comprehensive
validation data set that was provided to the study participants and present the
results of the comparative validation study. Based on the results of the
validation study, we arrive at the conclusion that modern deep learning
approaches outperform other methods in instrument segmentation tasks, but the
results are still not perfect. Furthermore, we show that merging results from
different methods significantly increases accuracy in comparison to
the best stand-alone method. On the other hand, the results of the instrument
tracking task show that tracking remains an open problem, especially in
challenging scenarios in conventional laparoscopic surgery.
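The finding that merging results from different methods outperforms the best stand-alone method can be illustrated with per-pixel majority voting over binary instrument masks. The study does not specify its fusion rule here; majority voting is one common choice and is used below purely as an assumption:

```python
def majority_vote(masks):
    """Fuse several binary segmentation masks pixel-wise.

    A pixel is labelled instrument (1) when more than half of the input
    masks agree; with an even number of masks, ties fall to background.
    """
    n = len(masks)
    return [1 if sum(pixels) * 2 > n else 0 for pixels in zip(*masks)]

# Three methods disagree on individual pixels; the fused mask keeps
# only the pixels a majority agrees on, suppressing isolated errors.
m1 = [1, 1, 0, 1, 0]
m2 = [1, 0, 0, 1, 1]
m3 = [1, 1, 1, 0, 0]
print(majority_vote([m1, m2, m3]))  # [1, 1, 0, 1, 0]
```

The intuition is that independent methods rarely make the same mistake on the same pixel, so voting cancels uncorrelated errors.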
Evaluation of Motion Artifact Metrics for Coronary CT Angiography
Purpose
This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion correction and best-phase selection algorithms for coronary computed tomography angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground-truth motion artifact scores from a series of pairwise comparisons.
Method
Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low-Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine-filled vessels of varying diameter, with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground-truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground-truth reader score. Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifact metrics and reader scores. Linear regression between the reader scores and the metrics was also performed.
Results
On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), and 0.77 (LIRS), where a higher Kendall's Tau signifies higher agreement. The FOR, LIRS, and transformed positivity (the fourth root of the positivity) were further evaluated in the study of clinical images. The Kendall's Tau coefficients of the selected metrics were 0.59 (FOR), 0.53 (LIRS), and 0.21 (transformed positivity). In the study of clinical data, a Motion Artifact Score, defined as the product of the FOR and LIRS metrics, further improved agreement with reader scores, with a Kendall's Tau coefficient of 0.65.
Conclusion
The metrics of FOR, LIRS, and the product of the two provided the highest agreement in motion artifact ranking when compared to the readers, and the highest linear correlation to the reader scores. The validated motion artifact metrics may be useful for developing and evaluating methods to reduce motion artifacts in CCTA images.
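Kendall's Tau, the agreement measure used throughout this study, simply counts how often two rankings order pairs the same way. A plain Tau-a implementation over metric values and reader scores might look like this (illustrative only; the study presumably used a statistics package, and the sample values below are invented):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's Tau-a rank correlation between two score lists.

    Counts concordant minus discordant pairs over all pairs; +1 means
    identical rankings, -1 means fully reversed. Tied pairs contribute 0.
    """
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

# Hypothetical metric values vs. reader motion-artifact scores
# for five images: rankings mostly, but not fully, agree.
metric = [0.2, 0.5, 0.9, 0.4, 0.7]
reader = [1, 2, 5, 3, 4]
print(kendall_tau(metric, reader))  # 9 concordant, 1 discordant -> 0.8
```

A coefficient of 0.65, as reported for the combined FOR x LIRS score, thus means the metric orders most, though not all, image pairs the same way the readers do.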
Segmenting white matter hyperintensities on isotropic three-dimensional Fluid Attenuated Inversion Recovery magnetic resonance images: Assessing deep learning tools on a Norwegian imaging database
An important step in the analysis of magnetic resonance imaging (MRI) data for neuroimaging is the automated segmentation of white matter hyperintensities (WMHs). Fluid Attenuated Inversion Recovery (FLAIR) is an MRI contrast that is particularly useful for visualizing and quantifying WMHs, a hallmark of cerebral small vessel disease and Alzheimer's disease (AD). In order to achieve high spatial resolution in each of the three voxel dimensions, clinical MRI protocols are evolving toward three-dimensional (3D) FLAIR-weighted acquisitions. The current study details the deployment of deep learning tools to enable automated WMH segmentation and characterization from 3D FLAIR-weighted images acquired as part of a national AD imaging initiative. Based on data from the ongoing Norwegian Disease Dementia Initiation (DDI) multicenter study, two models (a 3D off-the-shelf model from the NVIDIA nnU-Net framework and a 2.5D model developed internally) were trained, validated, and tested. A third cutting-edge Deep Bayesian network model (HyperMapp3r) was implemented without any de novo tuning to serve as a comparison architecture. The in-house 2.5D and 3D nnU-Net models were trained and validated in-house across five national collection sites among 441 participants from the DDI study, of whom 194 were men, with an average age of 64.91 +/- 9.32 years. Both an external dataset with 29 cases from a global collaborator and a held-out subset of the internal data were used to test all three models, and these test sets were evaluated independently. Model outputs were compared against ground-truth human-in-the-loop segmentations using five established WMH performance metrics. The 3D nnU-Net had the highest performance of the three networks, outperforming both the internally developed 2.5D model and the SOTA Deep Bayesian network, with an average Dice similarity coefficient of 0.76 +/- 0.16.
Our findings demonstrate that WMH segmentation models can achieve high performance when trained exclusively on 3D volumetric FLAIR input. Single-image-input models are desirable for ease of deployment, as reflected in the current embedded clinical research project. The superior performance of the 3D nnU-Net suggests a way forward for automating WMH segmentation while also evaluating performance metrics during ongoing data collection and model retraining.
Multi-branch Convolutional Neural Network for Multiple Sclerosis Lesion Segmentation
In this paper, we present an automated approach for segmenting multiple
sclerosis (MS) lesions from multi-modal brain magnetic resonance images. Our
method is based on a deep end-to-end 2D convolutional neural network (CNN) for
slice-based segmentation of 3D volumetric data. The proposed CNN includes a
multi-branch downsampling path, which enables the network to encode information
from multiple modalities separately. Multi-scale feature fusion blocks are
proposed to combine feature maps from different modalities at different stages
of the network. Then, multi-scale feature upsampling blocks are introduced to
upsize combined feature maps to leverage information from lesion shape and
location. We trained and tested the proposed model using orthogonal plane
orientations of each 3D modality to exploit the contextual information in all
directions. The proposed pipeline is evaluated on two different datasets: a
private dataset including 37 MS patients and a publicly available dataset known
as the ISBI 2015 longitudinal MS lesion segmentation challenge dataset,
consisting of 14 MS patients. Considering the ISBI challenge, at the time of
submission, our method was amongst the top performing solutions. On the private
dataset, using the same array of performance metrics as in the ISBI challenge,
the proposed approach shows substantial improvement in MS lesion segmentation
compared with other publicly available tools.
Comment: This paper has been accepted for publication in NeuroImage.
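Training and testing on orthogonal plane orientations, as described above, yields one prediction per voxel from each slicing direction; these must be fused into a single mask. The paper does not spell out its fusion rule here, so the voxel-wise probability averaging below is an assumption chosen for illustration:

```python
def fuse_orthogonal_predictions(axial, coronal, sagittal, threshold=0.5):
    """Fuse lesion probabilities predicted from three orthogonal
    slicing orientations by voxel-wise averaging, then threshold
    to a binary lesion mask.

    Each input is a flat list of per-voxel probabilities produced by
    a 2D network applied along one plane orientation.
    """
    fused = [(a + c + s) / 3.0 for a, c, s in zip(axial, coronal, sagittal)]
    return [1 if p >= threshold else 0 for p in fused]

# Voxels that look lesion-like from at least two orientations survive
# fusion; spurious detections in a single plane are suppressed.
axial    = [0.9, 0.2, 0.8, 0.1]
coronal  = [0.8, 0.3, 0.4, 0.2]
sagittal = [0.7, 0.1, 0.7, 0.6]
print(fuse_orthogonal_predictions(axial, coronal, sagittal))  # [1, 0, 1, 0]
```

Averaging across orientations is one simple way to exploit 3D contextual information with 2D slice-based networks; real pipelines operate on full probability volumes rather than flat lists.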
Incorporating Cardiac Substructures Into Radiation Therapy For Improved Cardiac Sparing
Growing evidence suggests that radiation therapy (RT) doses to the heart and cardiac substructures (CS) are strongly linked to cardiac toxicities, though only the whole heart is considered clinically. This work aimed to utilize the superior soft-tissue contrast of magnetic resonance (MR) imaging to segment CS, quantify uncertainties in their position, and assess their effect on treatment planning, including in an MR-guided environment.
Automatic substructure segmentation of 12 CS was completed using a novel hybrid MR/computed tomography (CT) atlas method and was improved upon using a 3-dimensional neural network (U-Net) from deep learning. Intra-fraction motion due to respiration was then quantified. The inter-fraction setup uncertainties on a novel MR-linear accelerator were also quantified. Treatment planning comparisons were performed with and without substructure inclusion, and methods to reduce radiation dose to sensitive CS were evaluated. Lastly, the described deep learning U-Net was translated to an MR-linear accelerator and a segmentation pipeline was created.
Automatic segmentations from the hybrid MR/CT atlas were accurate for the chambers and great vessels (Dice similarity coefficient (DSC) > 0.75), but coronary artery segmentation was unsuccessful (DSC < 0.3). After implementing deep learning, DSC for the chambers and great vessels was ≥ 0.85, along with an improvement in the coronary arteries (DSC > 0.5). Similar accuracy was achieved when implementing deep learning for MR-guided RT. On average, atlas-based automatic segmentations required ~10 minutes to generate per patient, while deep learning required only 14 seconds. The inclusion of CS in the treatment planning process did not yield statistically significant changes in plan complexity, PTV, or OAR dose.
Automatic segmentation results from deep learning offer major efficiency and accuracy gains for CS segmentation, with high potential for rapid implementation into radiation therapy planning for improved cardiac sparing. Introducing CS into RT planning for MR-guided RT presented an opportunity for more effective sparing with a limited increase in plan complexity.
- …