
    DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation

    Automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability, which has kept previous segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart, or kidneys. In this paper, we present a probabilistic bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans, using multi-level deep convolutional networks (ConvNets). We propose and evaluate several variations of deep ConvNets in the context of hierarchical, coarse-to-fine classification on image patches and regions, i.e., superpixels. We first present a dense labeling of local image patches via P-ConvNet and nearest-neighbor fusion. Then we describe a regional ConvNet (R1-ConvNet) that samples a set of bounding boxes around each image superpixel at different scales of context in a "zoom-out" fashion. Our ConvNets learn to assign each superpixel region a class probability of being pancreas. Last, we study a stacked R2-ConvNet leveraging the joint space of CT intensities and the P-ConvNet dense probability maps. Both 3D Gaussian smoothing and 2D conditional random fields are exploited as structured predictions for post-processing. We evaluate on CT images of 82 patients in 4-fold cross-validation and achieve a Dice Similarity Coefficient of 83.6±6.3% in training and 71.8±10.7% in testing. Comment: To be presented at MICCAI 2015 - 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany
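    To make the post-processing and scoring steps concrete, here is a minimal Python sketch (not the authors' implementation) of 3D Gaussian smoothing applied to a per-voxel pancreas probability map, followed by the Dice Similarity Coefficient used for evaluation; `prob_map`, `sigma`, and `threshold` are hypothetical stand-ins.

```python
# Minimal sketch, not the paper's code: smooth a ConvNet probability
# volume in 3D, binarize it, and score against ground truth with the
# Dice Similarity Coefficient. All inputs here are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_filter

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def segment_pancreas(prob_map: np.ndarray, sigma: float = 1.0,
                     threshold: float = 0.5) -> np.ndarray:
    """Apply 3D Gaussian smoothing to a probability volume, then binarize."""
    smoothed = gaussian_filter(prob_map, sigma=sigma)
    return smoothed > threshold

# Example usage with random stand-in volumes:
rng = np.random.default_rng(0)
prob_map = rng.random((64, 64, 64))        # per-voxel pancreas probabilities
truth = rng.random((64, 64, 64)) > 0.5     # stand-in ground-truth mask
pred = segment_pancreas(prob_map)
print(f"DSC: {dice_coefficient(pred, truth):.3f}")
```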

    The Potential Dangers of Artificial Intelligence for Radiology and Radiologists

    With the advent of artificial intelligence (AI) across many fields and subspecialties, there are considerable expectations for transformative impact. However, there are also concerns regarding the potential abuse of AI. Many scientists have worried about the dangers of AI leading to "biased" conclusions, in part because of the enthusiasm of an inventor or overenthusiasm among the general public. Here, though, we consider scenarios in which people may deliberately introduce errors into the data sets being analyzed, resulting in incorrect conclusions and leading to potential problems with patient care and outcomes.

    Anatomy-specific classification of medical images using deep convolutional nets

    Automated classification of human anatomy is an important prerequisite for many computer-aided diagnosis systems. The spatial complexity and variability of anatomy throughout the human body make classification difficult. "Deep learning" methods such as convolutional networks (ConvNets) outperform other state-of-the-art methods in image classification tasks. In this work, we present a method for organ- or body-part-specific anatomical classification of medical images acquired using computed tomography (CT) with ConvNets. We train a ConvNet on 4,298 separate axial 2D key-images to learn 5 anatomical classes. Key-images were mined from a hospital PACS archive using a set of 1,675 patients. We show that a data augmentation approach can help to enrich the data set and improve classification performance. Using ConvNets and data augmentation, we achieve an anatomy-specific classification error of 5.9% and an average area-under-the-curve (AUC) value of 0.998 in testing. We demonstrate that deep learning can be used to train very reliable and accurate classifiers that could initialize further computer-aided diagnosis. Comment: Presented at: 2015 IEEE International Symposium on Biomedical Imaging, April 16-19, 2015, New York Marriott at Brooklyn Bridge, NY, USA
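    The data augmentation step can be illustrated with a short sketch. The snippet below is an assumption about what such augmentation might look like for 2D axial key-images (random in-plane rotations and translations), not the paper's actual pipeline; `augment`, `max_angle`, and `max_shift` are hypothetical names.

```python
# Minimal sketch, assuming rotation/translation augmentation; the paper
# only states that augmentation enriches the data set, not which
# transforms it uses.
import numpy as np
from scipy.ndimage import rotate, shift

def augment(image: np.ndarray, rng: np.random.Generator,
            max_angle: float = 10.0, max_shift: int = 5) -> np.ndarray:
    """Return a randomly rotated and translated copy of a 2D image."""
    angle = rng.uniform(-max_angle, max_angle)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    rotated = rotate(image, angle, reshape=False, mode="nearest")
    return shift(rotated, (dy, dx), mode="nearest")

# Example: expand one stand-in key-image into several augmented variants.
rng = np.random.default_rng(42)
key_image = rng.random((256, 256))   # stand-in axial CT slice
augmented = [augment(key_image, rng) for _ in range(4)]
```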

    Interleaved text/image Deep Mining on a large-scale radiology database

    Despite tremendous progress in computer vision, effective learning on very large-scale (>100K patients) medical image databases has been vastly hindered. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system. Instead of using full 3D medical volumes, we focus on a collection of representative ~216K 2D key images/slices (selected by clinicians for diagnostic reference) with text-driven scalar and vector labels. Our system interleaves between unsupervised learning (e.g., latent Dirichlet allocation, recurrent neural net language models) on document- and sentence-level texts to generate semantic labels and supervised learning via deep convolutional neural networks (CNNs) to map from images to label spaces. Disease-related key words can be predicted for radiology images in a retrieval manner. We have demonstrated promising quantitative and qualitative results. The large-scale datasets of extracted key images and their categorization, embedded vector labels and sentence descriptions can be harnessed to alleviate the deep learning "data-hungry" obstacle in the medical domain.
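    The interleaving idea, deriving unsupervised topic labels from report text that then supervise an image classifier, can be sketched briefly. The snippet below is only an illustrative assumption about the text side, using scikit-learn's latent Dirichlet allocation rather than the authors' system; `reports` and the two-topic setup are hypothetical, and the resulting `labels` would stand in for the CNN's training targets.

```python
# Minimal sketch, assuming LDA topic assignments serve as image labels;
# the reports and topic count below are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "mild degenerative changes of the lumbar spine",
    "no acute intracranial hemorrhage or mass effect",
    "stable pulmonary nodule in the right upper lobe",
]

counts = CountVectorizer(stop_words="english").fit_transform(reports)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)   # per-report topic mixtures
labels = doc_topics.argmax(axis=1)       # dominant topic = semantic label
print(labels)                            # labels for the paired key-images
```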