Two-Way Interactive Refinement of Segmented Medical Volumes
For complex medical image segmentation tasks that also require high accuracy, prior information must usually be supplied to initialize and constrain the computational tools. This information can come either from task-oriented specialization layers operating within automatic segmentation techniques, or from advanced exploitation of user-data interaction. In the latter case the segmentation technique can remain general, and the results are inherently validated by the user, insofar as the user can effectively steer the process towards the desired result. In this paper we present a highly accurate yet general morphological 3D segmentation system in which rapid convergence to the desired result is guaranteed by a two-way interactive segmentation-refinement loop: in the refinement phase the flow of prior information is inverted (from the computing tools to the user) to help the user quickly select the most effective refinement strategies.
Interactive Segmentation for COVID-19 Infection Quantification on Longitudinal CT scans
Consistent segmentation of a COVID-19 patient's CT scans across multiple time points is essential to accurately assess disease progression and response to therapy. Existing automatic and interactive segmentation models for medical images use data from a single time point only (static), so valuable segmentation information from previous time points is often not used to aid the segmentation of a patient's follow-up scans. Moreover, fully automatic segmentation techniques frequently produce results that need further editing before clinical use. In this work, we propose a new single-network model for interactive segmentation that fully utilizes all available past information to refine the segmentation of follow-up scans. In the first segmentation round, our model takes 3D volumes of medical images from two time points (target and reference) as concatenated slices, with the reference time point's segmentation as an additional guide for segmenting the target scan. In subsequent refinement rounds, user feedback in the form of scribbles that correct the segmentation, together with the target's previous segmentation results, is additionally fed into the model. This ensures that segmentation information from previous refinement rounds is retained. Experimental results on our in-house multiclass longitudinal COVID-19 dataset show that the proposed model outperforms its static version and can assist in localizing COVID-19 infections in patients' follow-up scans.
Comment: 10 pages, 11 figures, 4 tables
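The multi-channel input described in this abstract can be illustrated with a minimal sketch. This is an assumed layout, not the authors' code: the exact channel ordering and encoding of scribbles in the paper may differ.

```python
import numpy as np

def build_input(target_vol, ref_vol, ref_seg, scribbles=None, prev_seg=None):
    """Stack the target and reference volumes plus guidance maps into one
    multi-channel network input. All arrays share the shape (D, H, W).
    Channel order is a hypothetical choice for illustration."""
    channels = [target_vol, ref_vol, ref_seg]
    if prev_seg is not None:        # refinement rounds reuse the earlier output
        channels.append(prev_seg)
    if scribbles is not None:       # user corrections as a sparse label map
        channels.append(scribbles)
    return np.stack(channels, axis=0)   # -> (C, D, H, W)
```

In the first round only three channels are present; refinement rounds add the previous prediction and the scribble map, which is how past information is retained across rounds.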
Unwind: Interactive Fish Straightening
The ScanAllFish project is a large-scale effort to scan all of the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together in a single tube and scanned at once. The resulting data contain many fish that are bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool that extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes, removing the undesired torque and bending, using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the expert marine biologist. We developed Unwind in collaboration with a team of marine biologists; the system has been deployed in their labs and is presently used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.
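The skeleton-extraction idea in this abstract can be sketched as follows. This is an assumed implementation, not the Unwind code: solve Laplace's equation inside the fish mask with Dirichlet values 0 at the head and 1 at the tail (a simple Jacobi relaxation here), then take the centroid of each isovalue band as a piecewise-linear skeleton vertex.

```python
import numpy as np

def harmonic_skeleton(mask, head, tail, iters=500, n_levels=10):
    """Sketch of a harmonic-function skeleton: `mask` is a 3D boolean
    volume, `head` and `tail` are voxel index tuples. Returns skeleton
    vertices ordered from head to tail. Illustrative only."""
    u = np.zeros(mask.shape)
    u[tail] = 1.0
    fixed = np.zeros(mask.shape, dtype=bool)
    fixed[head] = fixed[tail] = True
    for _ in range(iters):                       # Jacobi relaxation of Laplace's eq.
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) +
               np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
        u = np.where(mask & ~fixed, avg, u)
    levels = np.linspace(0.0, 1.0, n_levels + 1)
    pts = []
    for lo, hi in zip(levels[:-1], levels[1:]):  # average voxels in each iso-band
        band = mask & (u >= lo) & (u < hi)
        if band.any():
            pts.append(np.argwhere(band).mean(axis=0))
    return np.array(pts)
```

A production version would use a proper sparse linear solve and true isosurfaces rather than voxel bands, but the ordering property (vertices progress monotonically from head to tail) is the same.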
Accurate and fast 3D interactive segmentation system applied to MR brain quantification
This work presents an efficient interactive segmentation system for volumetric datasets based on advanced 3D morphological analyses and an interaction paradigm that closely matches user intentions. The system has been designed to produce accurate results under the complete control of the user, to minimize interaction time, and to address a wide range of 3D segmentation tasks. It has been tested and compared with other software on the quantification of normal MR brain structures, and in a challenging clinical setting aimed at detecting subtle brain atrophy associated with primary immunodeficiency (PID).
Automatic generation of statistical pose and shape models for articulated joints
Statistical analysis of the motion patterns of body joints is potentially useful for detecting and quantifying pathologies. However, building a statistical motion model across different subjects remains a challenging task, especially for a complex joint like the wrist. We present a novel framework for simultaneous registration and segmentation of multiple 3-D (CT or MR) volumes of different subjects at various articulated positions. The framework starts with a pose model generated from 3-D volumes captured at different articulated positions of a single subject (the template). This initial pose model is used to register the template volume to image volumes from new subjects. During this process, the Grow-Cut algorithm is used to iteratively refine the bone segmentation along with the pose parameters. As each new subject is registered and segmented, the pose model is updated, improving the accuracy of successive registrations. We applied the algorithm to CT images of the wrist from 25 subjects, each at five different wrist positions, and demonstrated that it performed robustly and accurately. More importantly, the resulting segmentations allowed a statistical pose model of the carpal bones to be generated automatically, without interaction. The evaluation results show that our proposed framework achieved accurate registration, with an average mean target registration error of mm. The automatic segmentation results also show high consistency with the ground truth obtained semi-automatically. Furthermore, we demonstrated the capability of the resulting statistical pose and shape models by using them to build a measurement tool for scaphoid-lunate dissociation diagnosis, which achieved 90% sensitivity and specificity.
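The Grow-Cut refinement step mentioned in this abstract is a seeded cellular automaton (Vezhnevets and Konouchine). A minimal 2D sketch of the general algorithm is shown below; it is not the authors' implementation, and the paper applies it in 3D within a registration loop.

```python
import numpy as np

def grow_cut(image, seeds, iters=50):
    """Minimal Grow-Cut sketch: cells with higher 'strength' conquer
    neighbors whose intensity is similar. `seeds`: 0 = unlabeled,
    1..K = class labels. Illustrative only."""
    img = image.astype(float)
    max_diff = img.max() - img.min() or 1.0
    label = seeds.copy()
    strength = (seeds > 0).astype(float)        # seed cells are fully confident
    for _ in range(iters):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n_lab = np.roll(label, (dy, dx), (0, 1))
            n_str = np.roll(strength, (dy, dx), (0, 1))
            n_img = np.roll(img, (dy, dx), (0, 1))
            g = 1.0 - np.abs(img - n_img) / max_diff   # attack force falls with contrast
            attack = n_str * g
            win = attack > strength                    # neighbor conquers this cell
            label[win] = n_lab[win]
            strength[win] = attack[win]
    return label
```

Labels flood outward from the seeds but stall at intensity edges, which is why a few corrective seeds per iteration suffice to refine a bone segmentation.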
Automated CT and MRI Liver Segmentation and Biometry Using a Generalized Convolutional Neural Network.
Purpose: To assess the feasibility of training a convolutional neural network (CNN) to automate liver segmentation across the different imaging modalities and techniques used in clinical practice, and to apply this to automate liver biometry.
Methods: We trained a 2D U-Net CNN for liver segmentation in two stages using 330 abdominal MRI and CT exams acquired at our institution. First, we trained the network on non-contrast multi-echo spoiled gradient-echo (SPGR) images from 300 MRI exams to provide multiple signal weightings. Then, we used transfer learning to generalize the CNN with additional images from 30 contrast-enhanced MRI and CT exams. We assessed the performance of the CNN on a distinct multi-institutional data set curated from multiple sources (n = 498 subjects). Segmentation accuracy was evaluated by computing Dice scores. Using these segmentations, we computed liver volume from CT and T1-weighted (T1w) MRI exams and estimated hepatic proton density fat fraction (PDFF) from multi-echo T2*w MRI exams. We compared volumetry and PDFF estimates between automated and manual segmentation using Pearson correlation and Bland-Altman statistics.
Results: Dice scores were 0.94 ± 0.06 for CT (n = 230), 0.95 ± 0.03 for T1w MR (n = 100), and 0.92 ± 0.05 for T2*w MR (n = 169). Liver volume measured by manual and automated segmentation agreed closely for CT (95% limits of agreement (LoA) = [-298 mL, 180 mL]) and T1w MR (LoA = [-358 mL, 180 mL]). Hepatic PDFF measured with the two segmentations also agreed closely (LoA = [-0.62%, 0.80%]).
Conclusions: Using a transfer-learning strategy, we have demonstrated the feasibility of generalizing a CNN to perform liver segmentation across different imaging techniques and modalities. With further refinement and validation, CNNs may have broad applicability for multimodal liver volumetry and hepatic tissue characterization.
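The two evaluation metrics used in this abstract, the Dice score and Bland-Altman 95% limits of agreement, are standard and easy to state precisely. The sketch below shows both; the function names are mine, not from the paper.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def bland_altman_loa(x, y):
    """95% limits of agreement between paired measurements:
    mean difference ± 1.96 × SD of the differences."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = d.mean()
    spread = 1.96 * d.std(ddof=1)
    return bias - spread, bias + spread
```

Reading the reported LoA this way: 95% of automated-minus-manual CT volume differences are expected to fall between -298 mL and 180 mL.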