
    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017.
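    To anchor the survey's core building block, below is a minimal 2D convolutional image classifier sketch in PyTorch; the layer sizes, the single-channel 64x64 input, and the two-class head are illustrative assumptions rather than an architecture from any reviewed paper.

```python
import torch
import torch.nn as nn

class TinyMedNet(nn.Module):
    """A deliberately small 2D CNN classifier: two conv/pool stages feeding a
    fully connected head. All sizes are arbitrary choices for illustration."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# logits = TinyMedNet()(torch.randn(4, 1, 64, 64))  # -> shape (4, 2)
```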

    Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding

    As lung cancer evolves, the presence of potentially malignant lymph nodes must be assessed to properly estimate disease progression and select the best treatment strategy. A method for accurate and automatic segmentation is hence decisive for quantitatively describing lymph nodes. In this study, the use of 3D convolutional neural networks is investigated, either through slab-wise schemes or by leveraging downsampled entire volumes. As lymph nodes have attenuation values similar to nearby anatomical structures, we use the knowledge of other organs as prior information to guide the segmentation. To assess performance, a 5-fold cross-validation strategy was followed over a dataset of 120 contrast-enhanced CT volumes. For the 1178 lymph nodes with a short-axis diameter ≥ 10 mm, our best-performing approach reached a patient-wise recall of 92%, a false-positive-per-patient ratio of 5, and a segmentation overlap of 80.5%. Fusing a slab-wise and a full-volume approach within an ensemble scheme generated the best performance. The anatomical prior guiding strategy is promising, yet a set larger than four organs appears necessary to generate an optimal benefit. A larger dataset is also mandatory given the wide range of expressions a lymph node can exhibit (i.e., shape, location, and attenuation).
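    The fusion of a slab-wise and a full-volume model described above can be pictured with a short probability-map averaging sketch; the equal weights, the 0.5 threshold, and the function name below are assumptions for illustration, not the paper's exact ensemble scheme.

```python
import numpy as np

def fuse_lymph_node_predictions(prob_slabwise: np.ndarray,
                                prob_fullvolume: np.ndarray,
                                threshold: float = 0.5) -> np.ndarray:
    """Fuse two voxel-wise lymph-node probability maps for the same CT volume.

    prob_slabwise   -- output of a slab-wise 3D CNN, values in [0, 1]
    prob_fullvolume -- output of a downsampled full-volume 3D CNN,
                       resampled back onto the same voxel grid
    Returns a binary segmentation mask.
    """
    assert prob_slabwise.shape == prob_fullvolume.shape
    # Equal-weight averaging is an assumed fusion rule; the paper's ensemble
    # may combine the two models differently.
    fused = 0.5 * (prob_slabwise + prob_fullvolume)
    return fused >= threshold
```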

    2D View Aggregation for Lymph Node Detection Using a Shallow Hierarchy of Linear Classifiers

    Enlarged lymph nodes (LNs) can provide important information for cancer diagnosis, staging, and measuring treatment response, making automated detection a highly sought goal. In this paper, we propose a new algorithmic representation that decomposes the LN detection problem into a set of 2D object detection subtasks on sampled CT slices, largely alleviating the curse of dimensionality. Our 2D detection can be effectively formulated as linear classification on a single image feature type, Histogram of Oriented Gradients (HOG), covering a moderate field of view of 45 by 45 voxels. We exploit both simple pooling and sparse linear fusion schemes to aggregate these 2D detection scores for the final 3D LN detection. In this manner, detection is more tractable and does not need to perform perfectly at the instance level (as weak hypotheses), since our aggregation process robustly harnesses collective information for LN detection. Two datasets (90 patients with 389 mediastinal LNs and 86 patients with 595 abdominal LNs) are used for validation. Cross-validation demonstrates 78.0% sensitivity at 6 false positives per volume (FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10 FP/vol.) for the mediastinal and abdominal datasets, respectively. Our results compare favorably to previous state-of-the-art methods. Comment: This article will be presented at MICCAI (Medical Image Computing and Computer-Assisted Intervention) 2014.
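    A minimal sketch of the slice-wise scoring and aggregation idea, assuming scikit-image HOG features and a pre-trained scikit-learn linear SVM as stand-ins; the HOG cell sizes and the mean/max pooling here are illustrative, and the paper's sparse linear fusion is not reproduced.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def score_2d_views(views, clf: LinearSVC) -> np.ndarray:
    """Score each sampled 2D view (a 45x45 patch around a 3D LN candidate)
    with HOG features and a pre-trained linear classifier."""
    feats = [hog(v, orientations=9, pixels_per_cell=(5, 5),
                 cells_per_block=(2, 2)) for v in views]
    return clf.decision_function(np.asarray(feats))

def aggregate_views(view_scores: np.ndarray, mode: str = "mean") -> float:
    """Pool per-view 2D scores into a single 3D candidate score
    (simple pooling only; the paper's sparse linear fusion is not shown)."""
    if mode == "mean":
        return float(np.mean(view_scores))
    if mode == "max":
        return float(np.max(view_scores))
    raise ValueError(f"unknown pooling mode: {mode}")
```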

    DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation

    Automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits previous segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart, or kidneys. In this paper, we present a probabilistic bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans, using multi-level deep convolutional networks (ConvNets). We propose and evaluate several variations of deep ConvNets in the context of hierarchical, coarse-to-fine classification on image patches and regions, i.e. superpixels. We first present a dense labeling of local image patches via P-ConvNet and nearest-neighbor fusion. Then we describe a regional ConvNet (R1-ConvNet) that samples a set of bounding boxes around each image superpixel at different scales of context in a "zoom-out" fashion. Our ConvNets learn to assign class probabilities for each superpixel region of being pancreas. Last, we study a stacked R2-ConvNet leveraging the joint space of CT intensities and the P-ConvNet dense probability maps. Both 3D Gaussian smoothing and 2D conditional random fields are exploited as structured predictions for post-processing. We evaluate on CT images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity Coefficient of 83.6 ± 6.3% in training and 71.8 ± 10.7% in testing. Comment: To be presented at MICCAI 2015 - 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany.
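    As a small illustration of the post-processing step and the reported metric, the sketch below applies 3D Gaussian smoothing to a voxel-wise probability map and computes a Dice Similarity Coefficient; the sigma and threshold values are assumptions, and the paper's 2D conditional random field step is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_and_threshold(prob_map: np.ndarray,
                         sigma: float = 3.0,
                         threshold: float = 0.5) -> np.ndarray:
    """3D Gaussian smoothing of a pancreas probability map, then thresholding.
    sigma (in voxels) and the 0.5 threshold are illustrative choices."""
    smoothed = gaussian_filter(prob_map.astype(np.float32), sigma=sigma)
    return smoothed >= threshold

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks (the metric
    reported above as 83.6% in training and 71.8% in testing)."""
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0
```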