
    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Morphology of the canine omentum, part 2: the omental bursa and its compartments materialized and explored by a novel technique

    The canine omental bursa is a virtual cavity enclosed by the greater and lesser omentum. While previous representations of this bursa were always purely schematic, a novel casting technique was developed to depict the three-dimensional organization of the omental bursa more consistently. A self-expanding polyurethane-based foam was injected into the omental bursa through the omental foramen in six dogs. After curing and subsequent maceration of the surrounding tissues, the resulting three-dimensional casts clearly and reproducibly revealed the omental vestibule, its caudal recess, and the three compartments of the splenic recess. The cast proved to be an invaluable study tool for identifying the landmarks that define the enveloping omentum. In addition, the polyurethane material can easily be discerned on computed tomographic images. When the casting technique is preceded by vascular injections, the blood vessels that supply the omentum can be outlined as well.

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations, in which important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationship with adjacent structures, and their changes over time. In current workflows, clinicians must manually specify the transfer function (TF), viewpoint (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. Advances in acquisition techniques have led to a rapid expansion in data size, in the form of higher resolutions, temporal acquisitions to track treatment responses over time, and an increase in the number of imaging modalities used in a single procedure. Manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters.
Our methods enable the visualizations needed in diagnostic procedures: a 2D slice of interest (SOI) can be augmented with 3D anatomical context to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets, and examined their computational performance for these scenarios.
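The visibility histogram mentioned above records how much each scalar-value range actually contributes to the rendered image, rather than how often it occurs in the volume. Below is a minimal sketch of that idea using fixed, uniform bins (the thesis's adaptive binning is not reproduced); the function and parameter names are hypothetical, and rays are assumed to be pre-sampled front to back:

```python
import numpy as np

def visibility_histogram(samples, opacity_tf, n_bins=32):
    """Histogram of how visible each scalar-value bin is in the final image.

    samples:    (n_rays, n_steps) scalar values in [0, 1), sampled
                front-to-back along viewing rays.
    opacity_tf: transfer function mapping scalar values to opacities in [0, 1].
    """
    alpha = opacity_tf(samples)
    # Transmittance reaching each sample: product of (1 - alpha) over all
    # samples in front of it on the same ray (front-to-back compositing).
    trans = np.cumprod(1.0 - alpha, axis=1)
    trans = np.concatenate([np.ones((samples.shape[0], 1)), trans[:, :-1]], axis=1)
    visibility = alpha * trans  # opacity actually seen by the viewer
    # Accumulate per-sample visibility into uniform scalar-value bins.
    bins = np.minimum((samples * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), visibility.ravel())
    total = hist.sum()
    return hist / total if total > 0 else hist
```

With a fully opaque transfer function, only the front-most samples register any visibility, which is exactly the occlusion information a plain value histogram cannot capture.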

    Advanced 3D “Modeling” and “Printing” for the Surgical Planning of a Successful Case of Thoraco-Omphalopagus Conjoined Twins Separation

    The surgical separation of conjoined twins is a particularly complex operation. Surgical times are particularly long, and post-operative complications are very frequent in this type of procedure. We report a clinical case of the surgical separation of thoraco-omphalopagus conjoined twins in which, thanks to the use of three-dimensional (3D) technologies, we were able to significantly reduce operative times and improve clinical outcomes.

    Vessel segmentation for automatic registration of untracked laparoscopic ultrasound to CT of the liver

    PURPOSE: Registration of Laparoscopic Ultrasound (LUS) to a pre-operative scan such as Computed Tomography (CT) using blood vessel information has been proposed as a method to enable image-guidance for laparoscopic liver resection. Currently, there are solutions for this problem that can potentially enable clinical translation by bypassing the need for a manual initialisation and tracking information. However, no reliable framework for the segmentation of vessels in 2D untracked LUS images has been presented. METHODS: We propose the use of 2D UNet for the segmentation of liver vessels in 2D LUS images. We integrate these results in a previously developed registration method, and show the feasibility of a fully automatic initialisation to the LUS to CT registration problem without a tracking device. RESULTS: We validate our segmentation using LUS data from 6 patients. We test multiple models by placing patient datasets into different combinations of training, testing and hold-out, and obtain mean Dice scores ranging from 0.543 to 0.706. Using these segmentations, we obtain registration accuracies between 6.3 and 16.6 mm in 50% of cases. CONCLUSIONS: We demonstrate the first instance of deep learning (DL) for the segmentation of liver vessels in LUS. Our results show the feasibility of UNet in detecting multiple vessel instances in 2D LUS images, and potentially automating a LUS to CT registration pipeline
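The Dice scores reported here (and in the pancreas study below) are the standard overlap measure between a predicted and a reference binary mask, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of that metric, with a hypothetical function name and the common convention for two empty masks:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks of the same shape:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

For example, two 2×3 masks that agree on two of their three foreground pixels each score 2·2 / (3 + 3) ≈ 0.667, the same scale on which the 0.543–0.706 vessel results above are reported.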

    Crepuscular Rays for Tumor Accessibility Planning


    Pancreas MRI segmentation into head, body, and tail enables regional quantitative analysis of heterogeneous disease

    Background: Quantitative imaging studies of the pancreas have often targeted its three main anatomical segments, the head, body, and tail, using manual region-of-interest strategies to assess geographic heterogeneity. Existing automated analyses have implemented whole-organ segmentation, providing overall quantification but failing to address spatial heterogeneity. Purpose: To develop and validate an automated method for pancreas segmentation into head, body, and tail subregions in abdominal MRI. Study Type: Retrospective. Subjects: One hundred and fifty nominally healthy subjects from UK Biobank (100 subjects for method development and 50 subjects for validation), plus a separate set of 390 UK Biobank subjects in triples that include type 2 diabetes mellitus (T2DM) subjects and matched nondiabetics. Field Strength/Sequence: A 1.5 T, three-dimensional two-point Dixon sequence (for segmentation and volume assessment) and a two-dimensional axial multiecho gradient-recalled echo sequence. Assessment: Pancreas segments were annotated by four raters on the validation cohort. Intrarater and interrater agreement were reported using the Dice similarity coefficient (DSC). A segmentation method based on template registration was developed and evaluated against the annotations. Results on regional pancreatic fat assessment are also presented, obtained by intersecting the three-dimensional parts segmentation with the available proton density fat fraction (PDFF) image. Statistical Tests: Wilcoxon signed rank test and Mann–Whitney U-test for comparisons; DSC and volume differences for evaluation. A P value below the chosen significance level was considered statistically significant. Results: Good intrarater (DSC mean, head: 0.982, body: 0.940, tail: 0.961) and interrater (DSC mean, head: 0.968, body: 0.905, tail: 0.943) agreement was observed. No differences (DSC, head: P = 0.4358, body: P = 0.0992, tail: P = 0.1080) were observed between the manual annotations and our method's segmentations (DSC mean, head: 0.965, body: 0.893, tail: 0.934). Pancreatic body PDFF differed between T2DM subjects and nondiabetics matched by body mass index. Data Conclusion: The developed segmentation's performance was no different from manual annotations. Application to type 2 diabetes subjects showed potential for assessing pancreatic disease heterogeneity. Level of Evidence: 4. Technical Efficacy: Stage 3.
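The regional fat assessment described here, intersecting a labelled parts segmentation with a co-registered PDFF map, reduces to masking the PDFF volume by each subregion's label and averaging. A minimal sketch under the assumptions that the two volumes share a voxel grid and that the label coding (hypothetical here) is head = 1, body = 2, tail = 3:

```python
import numpy as np

def regional_pdff(parts, pdff, labels=None):
    """Mean PDFF per pancreas subregion.

    parts:  integer label volume (0 = background), co-registered with pdff.
    pdff:   proton density fat fraction volume (%), same shape as parts.
    labels: mapping of region name -> label value (hypothetical coding).
    """
    if labels is None:
        labels = {"head": 1, "body": 2, "tail": 3}
    # Mask the PDFF map by each subregion and average within it.
    return {name: float(pdff[parts == lab].mean()) for name, lab in labels.items()}
```

Per-region means like these are what feed the Mann–Whitney U-test comparison of pancreatic body PDFF between T2DM subjects and matched nondiabetics.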