5,173 research outputs found

    Two-Way Interactive Refinement of Segmented Medical Volumes

    For complex medical image segmentation tasks that also require high accuracy, prior information must usually be generated to initialize and constrain the computational tools. This information can be supplied by task-oriented specialization layers operating within automatic segmentation techniques, or by advanced exploitation of user-data interaction; in the latter case the segmentation technique remains general and the results are inherently validated by the user, to the extent that he or she can effectively steer the process towards the desired result. In this paper we present a highly accurate yet general morphological 3D segmentation system in which rapid convergence to the desired result is guaranteed by a two-way interactive segmentation-refinement loop: in the refinement phase, the flow of prior information is inverted (from the computational tools to the user) to help the user quickly select the most effective refinement strategies.
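    A minimal, hypothetical sketch of such a two-way loop (not the paper's implementation): the system computes a segmentation, then proposes the largest "uncertain" boundary regions back to the user as candidate refinement sites. The names `segment` and `suggest_refinements` are illustrative, and simple thresholding plus connected components stand in for the paper's more sophisticated 3D morphological analysis.

```python
# Hedged sketch of a two-way interactive refinement loop: the system-to-user
# flow is the ranked list of suggested refinement sites.
import numpy as np
from scipy import ndimage

def segment(volume, threshold, seeds=None):
    """Toy morphological 3D segmentation: threshold + connected components,
    optionally keeping only the components touched by user-provided seeds."""
    mask = volume > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    if seeds is not None and seeds.any():
        keep = np.unique(labels[seeds])
        keep = keep[keep > 0]
        return np.isin(labels, keep)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

def suggest_refinements(seg, volume, threshold, band=2, top_k=3):
    """System-to-user flow: rank uncertain regions (a band around the current
    boundary whose intensities are close to the threshold) so the user can
    pick the most effective place to refine next."""
    boundary = ndimage.binary_dilation(seg, iterations=band) & \
               ~ndimage.binary_erosion(seg, iterations=band)
    uncertain = boundary & (np.abs(volume - threshold) < 0.1)
    labels, n = ndimage.label(uncertain)
    sizes = ndimage.sum(uncertain, labels, index=range(1, n + 1)) if n else []
    order = np.argsort(sizes)[::-1][:top_k]
    return [ndimage.center_of_mass(labels == (i + 1)) for i in order]

# One pass of the loop on synthetic data (in practice the user supplies seeds
# or corrections at the suggested sites and the segmentation is recomputed).
vol = np.random.rand(32, 32, 32)
seg = segment(vol, threshold=0.7)
print("suggested refinement sites:", suggest_refinements(seg, vol, threshold=0.7))
```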

    Interactive Segmentation for COVID-19 Infection Quantification on Longitudinal CT scans

    Consistent segmentation of a COVID-19 patient's CT scans across multiple time points is essential to accurately assess disease progression and response to therapy. Existing automatic and interactive segmentation models for medical images use data from a single time point only (static), so valuable segmentation information from previous time points is typically not used to aid the segmentation of a patient's follow-up scans. Moreover, fully automatic segmentation techniques frequently produce results that need further editing before clinical use. In this work, we propose a new single-network model for interactive segmentation that fully utilizes all available past information to refine the segmentation of follow-up scans. In the first segmentation round, our model takes 3D volumes of medical images from two time points (target and reference) as concatenated slices, with the reference time point's segmentation as an additional guide for segmenting the target scan. In subsequent refinement rounds, user feedback in the form of scribbles that correct the segmentation, together with the target's previous segmentation result, is additionally fed into the model; this ensures that segmentation information from previous refinement rounds is retained. Experimental results on our in-house multiclass longitudinal COVID-19 dataset show that the proposed model outperforms its static version and can assist in localizing COVID-19 infections in patients' follow-up scans. (Comment: 10 pages, 11 figures, 4 tables)
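    One plausible reading of the input construction described above, shown as a hedged sketch rather than the authors' released code: the target and reference volumes, the reference segmentation and, in refinement rounds, the previous target segmentation and scribble maps are stacked as input channels. `build_input` is a hypothetical helper name.

```python
# Hedged sketch of assembling the network input for the longitudinal
# interactive segmentation model described in the abstract.
import numpy as np

def build_input(target_ct, reference_ct, reference_seg,
                prev_target_seg=None, scribbles_fg=None, scribbles_bg=None):
    """Stack all available past information along a channel axis -> (C, D, H, W)."""
    channels = [target_ct, reference_ct, reference_seg.astype(np.float32)]
    # Refinement rounds only: previous target segmentation and user scribbles.
    for extra in (prev_target_seg, scribbles_fg, scribbles_bg):
        if extra is not None:
            channels.append(extra.astype(np.float32))
    return np.stack(channels, axis=0)

# First round: guidance comes from the reference time point only.
D, H, W = 16, 64, 64
x0 = build_input(np.zeros((D, H, W), np.float32),
                 np.zeros((D, H, W), np.float32),
                 np.zeros((D, H, W), np.uint8))
print(x0.shape)  # (3, 16, 64, 64)
```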

    Unwind: Interactive Fish Straightening

    The ScanAllFish project is a large-scale effort to scan all of the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned at once; the resulting data contain many fish that are bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool that extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes, removing the undesired torque and bending, using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the expert marine biologist user. We developed Unwind in collaboration with a team of marine biologists; the system has been deployed in their labs and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.
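    A rough sketch of the skeleton-extraction idea, under the assumption that it can be illustrated with a Jacobi solve of the Laplace equation on the fish mask and averaging of thin isovalue bands; this is not the Unwind source, and `harmonic_field`/`skeleton` are illustrative names.

```python
# Hedged sketch: harmonic function from head (0) to tail (1) on a voxel mask,
# then centroids of thin isovalue bands give a piecewise-linear skeleton.
import numpy as np

def harmonic_field(mask, head, tail, iters=500):
    """Jacobi iterations of the Laplace equation inside `mask` (3-D bool array),
    with Dirichlet values 0 at the head voxel and 1 at the tail voxel.
    Voxels outside the mask are crudely clamped to 0."""
    u = np.zeros(mask.shape, np.float64)
    u[tail] = 1.0
    for _ in range(iters):
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) +
               np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
        u = np.where(mask, avg, 0.0)
        u[head], u[tail] = 0.0, 1.0  # re-impose boundary conditions
    return u

def skeleton(mask, u, n_levels=20):
    """Centroid of each thin isovalue band gives one skeleton vertex,
    ordered from head to tail."""
    verts = []
    for lo in np.linspace(0.0, 1.0, n_levels, endpoint=False):
        band = mask & (u >= lo) & (u < lo + 1.0 / n_levels)
        if band.any():
            verts.append(np.argwhere(band).mean(axis=0))
    return np.array(verts)

# Toy example: a straight "fish" in a small volume.
mask = np.zeros((8, 8, 40), bool)
mask[3:5, 3:5, 2:38] = True
u = harmonic_field(mask, head=(4, 4, 2), tail=(4, 4, 37))
print(skeleton(mask, u).shape)
```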

    Accurate and fast 3D interactive segmentation system applied to MR brain quantification

    This work presents an efficient interactive segmentation system for volumetric datasets, based on advanced 3D morphological analyses and an interaction paradigm that closely matches user intentions. The system has been designed to produce accurate results under the complete control of the user, to minimize interaction time, and to address a wide range of 3D segmentation tasks. It has been tested and compared with other software packages on the quantification of normal MR brain structures and in a challenging clinical setting aimed at detecting subtle brain atrophy associated with primary immunodeficiency (PID).

    Automatic generation of statistical pose and shape models for articulated joints

    Statistical analysis of the motion patterns of body joints is potentially useful for detecting and quantifying pathologies. However, building a statistical motion model across different subjects remains a challenging task, especially for a complex joint like the wrist. We present a novel framework for simultaneous registration and segmentation of multiple 3-D (CT or MR) volumes of different subjects at various articulated positions. The framework starts with a pose model generated from 3-D volumes captured at different articulated positions of a single subject (the template). This initial pose model is used to register the template volume to image volumes from new subjects; during this process, the Grow-Cut algorithm is used to iteratively refine the bone segmentation together with the pose parameters. As each new subject is registered and segmented, the pose model is updated, improving the accuracy of successive registrations. We applied the algorithm to CT images of the wrist from 25 subjects, each at five different wrist positions, and demonstrated that it performed robustly and accurately. More importantly, the resulting segmentations allowed a statistical pose model of the carpal bones to be generated automatically, without interaction. The evaluation results show that our framework achieved accurate registration, with an average mean target registration error of mm. The automatic segmentation results also show high consistency with the ground truth obtained semi-automatically. Furthermore, we demonstrated the capability of the resulting statistical pose and shape models by using them to build a measurement tool for scaphoid-lunate dissociation diagnosis, which achieved 90% sensitivity and specificity.
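    A simplified numpy sketch of a Grow-Cut style update, the cellular-automaton segmentation step the framework iterates while refining bone labels (a 6-neighbourhood, synchronous variant with wrap-around boundaries via np.roll; not the paper's code):

```python
# Hedged sketch of one Grow-Cut update: a neighbour q "attacks" voxel p and
# wins if g(|I_p - I_q|) * strength_q > strength_p, with g(d) = 1 - d/max_diff.
import numpy as np

def growcut_step(intensity, labels, strength):
    max_diff = intensity.max() - intensity.min() + 1e-9
    new_labels, new_strength = labels.copy(), strength.copy()
    for axis in range(intensity.ndim):
        for shift in (1, -1):
            I_q = np.roll(intensity, shift, axis)   # neighbour intensities
            l_q = np.roll(labels, shift, axis)      # neighbour labels
            s_q = np.roll(strength, shift, axis)    # neighbour strengths
            attack = (1.0 - np.abs(intensity - I_q) / max_diff) * s_q
            win = attack > new_strength
            new_labels[win] = l_q[win]
            new_strength[win] = attack[win]
    return new_labels, new_strength

# Tiny 2-D demo: two seed labels grow over a synthetic image.
img = np.random.rand(32, 32)
labels = np.zeros((32, 32), int)
labels[4, 4], labels[28, 28] = 1, 2
strength = (labels > 0).astype(float)   # seeds start with full strength
for _ in range(50):
    labels, strength = growcut_step(img, labels, strength)
print(np.bincount(labels.ravel()))
```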

    Crepuscular Rays for Tumor Accessibility Planning
