Automated CT and MRI Liver Segmentation and Biometry Using a Generalized Convolutional Neural Network.
Purpose: To assess the feasibility of training a convolutional neural network (CNN) to automate liver segmentation across the different imaging modalities and techniques used in clinical practice, and to apply this to automate liver biometry.

Methods: We trained a 2D U-Net CNN for liver segmentation in two stages using 330 abdominal MRI and CT exams acquired at our institution. First, we trained the network on non-contrast multi-echo spoiled gradient-echo (SPGR) images from 300 MRI exams to provide multiple signal weightings. Then, we used transfer learning to generalize the CNN with additional images from 30 contrast-enhanced MRI and CT exams. We assessed the performance of the CNN using a distinct multi-institutional data set curated from multiple sources (n = 498 subjects). Segmentation accuracy was evaluated by computing Dice scores. Using these segmentations, we computed liver volume from CT and T1-weighted (T1w) MRI exams and estimated hepatic proton density fat fraction (PDFF) from multi-echo T2*-weighted (T2*w) MRI exams. We compared quantitative volumetry and PDFF estimates between automated and manual segmentation using Pearson correlation and Bland-Altman statistics.

Results: Dice scores were 0.94 ± 0.06 for CT (n = 230), 0.95 ± 0.03 for T1w MR (n = 100), and 0.92 ± 0.05 for T2*w MR (n = 169). Liver volume measured by manual and automated segmentation agreed closely for CT (95% limits of agreement (LoA) = [-298 mL, 180 mL]) and T1w MR (LoA = [-358 mL, 180 mL]). Hepatic PDFF measured with the two segmentations also agreed closely (LoA = [-0.62%, 0.80%]).

Conclusions: Using a transfer-learning strategy, we demonstrated the feasibility of generalizing a CNN to perform liver segmentation across different imaging techniques and modalities. With further refinement and validation, CNNs may have broad applicability for multimodal liver volumetry and hepatic tissue characterization.
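The two evaluation statistics in this abstract, Dice score and Bland-Altman 95% limits of agreement, are standard and easy to state in code. The sketch below is a generic NumPy implementation, not the authors' code; function names and the both-empty-masks convention are my own choices.

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

def bland_altman_loa(x, y):
    """95% limits of agreement between paired measurements
    (e.g. manual vs. automated liver volumes): bias ± 1.96 * SD of differences."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias - 1.96 * sd, bias + 1.96 * sd
```

A reported LoA such as [-298 mL, 180 mL] is simply the pair returned by `bland_altman_loa` over the per-subject volume pairs.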
CIDI-Lung-Seg: A Single-Click Annotation Tool for Automatic Delineation of Lungs from CT Scans
Accurate and fast extraction of lung volumes from computed tomography (CT)
scans remains in great demand in the clinical environment because the
available methods fail to provide a generic solution due to wide anatomical
variations of lungs and the existence of pathologies. Manual annotation, the
current gold standard, is time consuming and often subject to human bias. On the other
hand, current state-of-the-art fully automated lung segmentation methods fail
to make their way into clinical practice due to their inability to
efficiently incorporate human input for handling misclassifications.
This paper presents a lung annotation tool for CT images that is interactive,
efficient, and robust. The proposed annotation tool produces an "as accurate as
possible" initial annotation based on the fuzzy-connectedness image
segmentation, followed by efficient manual fixation of the initial extraction
if deemed necessary by the practitioner. To provide maximum flexibility to the
users, our annotation tool is supported on three major operating systems
(Windows, Linux, and Mac OS X). The quantitative results comparing our free
software with commercially available lung segmentation tools show a higher
degree of consistency and precision for our software, with considerable
potential to enhance the performance of routine clinical tasks.

Comment: 4 pages, 6 figures; to appear in the proceedings of the 36th Annual
International Conference of the IEEE Engineering in Medicine and Biology
Society (EMBC 2014).
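The tool's initial annotation is based on fuzzy-connectedness segmentation, in which each voxel's membership is the strength of its best path to a seed, and a path's strength is its weakest link (the minimum affinity along it). The 2D sketch below is a minimal, generic max-min propagation with a simple intensity-difference affinity; it is not the paper's implementation, and the `sigma` parameter and Gaussian-style affinity are illustrative assumptions.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seed, sigma=10.0):
    """Fuzzy-connectedness map from a seed pixel (2D sketch).

    Affinity between 4-neighbours decays with intensity difference;
    a pixel's connectedness is the max over paths from the seed of the
    minimum affinity along the path (Dijkstra-like max-min propagation).
    """
    conn = np.zeros(image.shape, dtype=float)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]  # max-heap via negated path strengths
    while heap:
        neg_s, (r, c) = heapq.heappop(heap)
        s = -neg_s
        if s < conn[r, c]:
            continue  # stale heap entry; a stronger path was already found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                # illustrative affinity: 1 for equal intensities, -> 0 for large jumps
                aff = np.exp(-abs(float(image[nr, nc]) - float(image[r, c])) / sigma)
                strength = min(s, aff)
                if strength > conn[nr, nc]:
                    conn[nr, nc] = strength
                    heapq.heappush(heap, (-strength, (nr, nc)))
    return conn
```

Thresholding the resulting map gives the initial mask, which, as the abstract describes, the practitioner can then correct manually.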
Visual and Contextual Modeling for the Detection of Repeated Mild Traumatic Brain Injury.
Currently, there is a lack of computational methods for the evaluation of mild traumatic brain injury (mTBI) from magnetic resonance imaging (MRI). Further, the development of automated analyses has been hindered by the subtle nature of mTBI abnormalities, which appear as low-contrast MR regions. This paper proposes an approach that detects mTBI lesions by combining high-level context with low-level visual information. The contextual model estimates the progression of the disease using subject information, such as the time since injury and knowledge about the location of mTBI. The visual model utilizes texture features in MRI along with a probabilistic support vector machine to maximize discrimination in unimodal MR images. These two models are fused to obtain a final estimate of the locations of the mTBI lesion. The models are tested on a dataset from a novel rodent model of repeated mTBI. The experimental results demonstrate that the fusion of contextual and visual texture features outperforms other state-of-the-art approaches. Clinically, our approach has the potential to benefit clinicians by speeding diagnosis and patients by improving clinical care.
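The fusion step described above combines a per-voxel probability from the visual (texture + probabilistic SVM) model with a contextual prior. The paper does not spell out its fusion rule here; the sketch below shows one common generic choice, a normalized product (naive-Bayes-style) of the two probability maps, purely as an illustration of how such late fusion can work.

```python
import numpy as np

def fuse_probabilities(visual_prob, context_prior, eps=1e-8):
    """Fuse a visual classifier's lesion probability with a contextual prior
    by normalized product against the complementary (non-lesion) hypothesis.
    This is an assumed, generic fusion rule, not the paper's exact model."""
    lesion = np.asarray(visual_prob) * np.asarray(context_prior)
    background = (1.0 - np.asarray(visual_prob)) * (1.0 - np.asarray(context_prior))
    return lesion / (lesion + background + eps)
```

Under this rule, a location the contextual model considers likely (high prior) sharpens a moderately confident visual detection, while an uninformative prior of 0.5 leaves the visual probability essentially unchanged.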