1,036 research outputs found

    Domain-specific cues improve robustness of deep learning based segmentation of CT volumes

    Machine learning has considerably improved medical image analysis in the past years. Although data-driven approaches are intrinsically adaptive and thus generic, they often do not perform equally well on data from different imaging modalities. In particular, computed tomography (CT) data poses many challenges to medical image segmentation based on convolutional neural networks (CNNs), mostly due to the broad dynamic range of intensities and the varying number of recorded slices of CT volumes. In this paper, we address these issues with a framework that combines domain-specific data preprocessing and augmentation with state-of-the-art CNN architectures. The goal is not only to optimise the segmentation score but also to stabilise prediction performance, which is a mandatory requirement for use in automated and semi-automated workflows in the clinical environment. The framework is validated with an architecture comparison to show that its effects are independent of the CNN architecture. We compare a modified U-Net with a modified Mixed-Scale Dense Network (MS-D Net), contrasting dilated convolutions for parallel multi-scale processing with the U-Net approach based on traditional scaling operations. Finally, we propose an ensemble model combining the strengths of the individual methods. The framework performs well on a range of tasks such as liver and kidney segmentation, without significant differences in prediction performance across strongly differing volume sizes and varying slice thicknesses. Our framework is thus an essential step towards robust segmentation of unknown real-world samples.
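    The abstract does not specify the preprocessing steps; a typical domain-specific step for handling CT's broad intensity range is Hounsfield-unit windowing. The sketch below is illustrative only — the window centre and width are assumed values, not taken from the paper.

    ```python
    import numpy as np

    def window_ct(volume_hu, center=50.0, width=400.0):
        """Clip a CT volume (in Hounsfield units) to an intensity window
        and rescale to [0, 1]. A soft-tissue window (center=50, width=400)
        is assumed here for illustration; the paper's exact values are
        not given in the abstract."""
        lo, hi = center - width / 2.0, center + width / 2.0
        vol = np.clip(volume_hu, lo, hi)   # suppress out-of-window HU values
        return (vol - lo) / (hi - lo)      # rescale to [0, 1] for the CNN

    # Toy 2x2x2 "volume" in HU, spanning air (-1000) to bone (+1000)
    vol = np.array([[[-1000.0, 0.0], [50.0, 250.0]],
                    [[400.0, 1000.0], [-150.0, 150.0]]])
    out = window_ct(vol)
    ```

    Windowing like this compresses the roughly [-1000, +3000] HU range into a fixed, tissue-relevant interval, so the network sees consistent input statistics across scanners and protocols.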

    Automated axial right ventricle to left ventricle diameter ratio computation in computed tomography pulmonary angiography

    Automated medical image analysis requires methods to localize anatomic structures in the presence of normal interpatient variability, pathology, and the different protocols used to acquire images for different clinical settings. Recent advances have improved object detection in the context of natural images, but they have not been adapted to the 3D context of medical images. In this paper we present a 2.5D object detector designed to locate, without any user interaction, the left and right heart ventricles in Computed Tomography Pulmonary Angiography (CTPA) images. A 2D object detector is trained to find ventricles on axial slices. Those detections are automatically clustered according to their size and position, and the cluster with the highest score, representing the 3D location of the ventricle, is then selected. The proposed method is validated on 403 CTPA studies obtained in patients with clinically suspected pulmonary embolism. Both ventricles are properly detected in 94.7% of the cases. The proposed method is generic and can be easily adapted to detect other structures in medical images.
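    The cluster-and-select step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grouping tolerances, the detection tuple layout, and the use of a summed score per cluster are all assumptions.

    ```python
    def select_ventricle(detections, xy_tol=25.0, size_tol=0.5):
        """Greedily group per-slice 2D detections into 3D candidates by
        position and size, then pick the highest-scoring cluster.
        Each detection is (z, cx, cy, w, h, score); tolerances are
        illustrative, not taken from the paper."""
        clusters = []
        for det in sorted(detections):          # iterate in slice order
            z, cx, cy, w, h, score = det
            placed = False
            for cl in clusters:
                _, pcx, pcy, pw, ph, _ = cl[-1]
                close = abs(cx - pcx) < xy_tol and abs(cy - pcy) < xy_tol
                similar = (abs(w - pw) / pw < size_tol
                           and abs(h - ph) / ph < size_tol)
                if close and similar:           # same 3D structure, next slice
                    cl.append(det)
                    placed = True
                    break
            if not placed:
                clusters.append([det])
        # Score a cluster by the sum of its members' detection scores,
        # favouring structures seen consistently across many slices.
        return max(clusters, key=lambda cl: sum(d[5] for d in cl))

    # Three consistent detections of one ventricle plus one spurious hit
    dets = [(0, 100, 100, 40, 40, 0.9),
            (1, 102, 101, 42, 41, 0.8),
            (2, 101, 99, 41, 40, 0.85),
            (1, 200, 50, 20, 20, 0.95)]
    best = select_ventricle(dets)
    ```

    Summing scores over a cluster is one plausible way to prefer a structure detected consistently across slices over a single high-confidence false positive.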

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Segment Anything Model (SAM) for Radiation Oncology

    In this study, we evaluate the performance of the Segment Anything Model (SAM) in clinical radiotherapy. We collected real clinical cases from four regions at the Mayo Clinic: prostate, lung, gastrointestinal, and head & neck, which are typical treatment sites in radiation oncology. For each case, we selected the OARs of concern in radiotherapy planning and compared the Dice and Jaccard outcomes between clinical manual delineation, automatic segmentation using SAM's "segment anything" mode, and automatic segmentation using SAM with a box prompt. Our results indicate that SAM performs better in automatic segmentation for the prostate and lung regions, while its performance in the gastrointestinal and head & neck regions is relatively inferior. When considering the size of the organ and the clarity of its boundary, SAM displays better performance for larger organs with clear boundaries, such as the lung and liver, and worse for smaller organs with unclear boundaries, like the parotid and cochlea. These findings align with the generally accepted variations in difficulty associated with manual delineation of different organs at different sites in clinical radiotherapy. Given that SAM, a single trained model, could handle the delineation of OARs in four regions, these results also demonstrate SAM's robust generalization capabilities in automatic segmentation for radiotherapy, i.e., achieving delineation of different radiotherapy OARs using a generic automatic segmentation model. SAM's generalization across different regions makes it technically feasible to develop a generic model for automatic segmentation in radiotherapy.
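    The Dice and Jaccard overlap measures used to compare SAM's output with manual delineation are standard; a minimal sketch over binary masks (my own illustration, not code from the study):

    ```python
    import numpy as np

    def dice_jaccard(pred, gt):
        """Dice and Jaccard overlap between two binary segmentation masks.
        Dice = 2|A∩B| / (|A|+|B|);  Jaccard = |A∩B| / |A∪B|."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        dice = 2.0 * inter / (pred.sum() + gt.sum())
        jaccard = inter / union
        return dice, jaccard

    # Toy 2x3 masks: intersection = 2 pixels, union = 4 pixels
    pred = np.array([[1, 1, 0], [0, 1, 0]])
    gt   = np.array([[1, 0, 0], [0, 1, 1]])
    d, j = dice_jaccard(pred, gt)   # d = 4/6, j = 2/4
    ```

    The two metrics are monotonically related (Dice = 2J/(1+J)), so reporting both, as the study does, mainly aids comparison with prior work using either convention.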

    DEEP LEARNING IN COMPUTER-ASSISTED MAXILLOFACIAL SURGERY
