
    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation

    We aim to segment small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region of the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach in which the prediction from the first (coarse) stage indicates a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm treats the two stages individually, lacks a globally optimized energy function, and is limited in its ability to incorporate multi-stage visual cues. The missing contextual information leads to unsatisfying convergence over iterations, and the fine stage sometimes produces even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration into spatial weights and applies these weights to the current iteration. This brings two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout the iterations to improve segmentation accuracy. Experiments on the NIH pancreas segmentation dataset demonstrate state-of-the-art accuracy, outperforming the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.
    Comment: Accepted to CVPR 2018 (10 pages, 6 figures)
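As a rough illustration of the key idea above, the minimal PyTorch sketch below turns the previous iteration's probability map into spatial weights and re-weights the input before the next segmentation pass. The 3x3 learned transform, the sigmoid gating, the single-channel input, and the stand-in `seg_net` are all assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SaliencyTransformation(nn.Module):
    """Sketch of a saliency transformation module: previous probabilities
    become spatial weights applied to the current iteration's input."""
    def __init__(self):
        super().__init__()
        # Learned map from probabilities to saliency (3x3 kernel is an assumption).
        self.transform = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, image, prev_prob):
        weights = torch.sigmoid(self.transform(prev_prob))  # weights in (0, 1)
        return image * weights  # re-weighted input for the current iteration

def recurrent_segment(seg_net, saliency, image, num_iters=3):
    prob = torch.ones_like(image[:, :1])  # uniform saliency before iteration 1
    for _ in range(num_iters):
        prob = torch.sigmoid(seg_net(saliency(image, prob)))  # refine each pass
    return prob

# Toy usage with a one-layer stand-in for the segmentation network:
seg_net = nn.Conv2d(1, 1, kernel_size=3, padding=1)
mask = recurrent_segment(seg_net, SaliencyTransformation(), torch.randn(1, 1, 64, 64))
```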

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
    Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017

    Deep Learning Framework for Spleen Volume Estimation from 2D Cross-sectional Views

    Abnormal spleen enlargement (splenomegaly) is regarded as a clinical indicator for a range of conditions, including liver disease, cancer and blood diseases. While spleen length measured from ultrasound images is a commonly used surrogate for spleen size, spleen volume remains the gold standard metric for assessing splenomegaly and the severity of related clinical conditions. Computed tomography is the main imaging modality for measuring spleen volume, but it is less accessible in areas where there is a high prevalence of splenomegaly (e.g., the Global South). Our objective was to enable automated spleen volume measurement from 2D cross-sectional segmentations, which can be obtained from ultrasound imaging. In this study, we describe a variational autoencoder-based framework to measure spleen volume from single- or dual-view 2D spleen segmentations. We propose and evaluate three volume estimation methods within this framework. We also demonstrate how 95% confidence intervals of volume estimates can be produced to make our method more clinically useful. Our best model achieved mean relative volume accuracies of 86.62% and 92.58% for single- and dual-view segmentations, respectively, surpassing the performance of the clinical standard approach of linear regression using manual measurements and a comparative deep-learning-based 2D-3D reconstruction approach. The proposed spleen volume estimation framework can be integrated into standard clinical workflows which currently use 2D ultrasound images to measure spleen length. To the best of our knowledge, this is the first work to achieve direct 3D spleen volume estimation from 2D spleen segmentations.
    Comment: 22 pages, 7 figures
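For concreteness, the short sketch below computes the two quantities the abstract reports: a 95% interval over repeated volume predictions (e.g., one decoded volume per VAE latent sample; the percentile scheme is our assumption) and a relative volume accuracy (one common definition; the paper's exact formula is not given in this abstract).

```python
import numpy as np

def volume_ci(volume_samples, level=0.95):
    # Percentile interval over repeated volume predictions, e.g. one decoded
    # volume per VAE latent sample (the sampling scheme is an assumption).
    alpha = (1.0 - level) / 2.0
    lo, hi = np.percentile(volume_samples, [100 * alpha, 100 * (1 - alpha)])
    return float(np.mean(volume_samples)), (float(lo), float(hi))

def relative_volume_accuracy(v_pred, v_true):
    # One common definition of relative volume accuracy, in percent.
    return 100.0 * (1.0 - abs(v_pred - v_true) / v_true)

# Hypothetical predictions (in mL) from five latent samples:
mean_vol, (lo, hi) = volume_ci([310.0, 295.0, 305.0, 322.0, 300.0])
print(relative_volume_accuracy(mean_vol, v_true=315.0))  # ~97.3
```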

    Computer aided diagnosis system for breast cancer using deep learning.

    The recent rise of big data technology and the toolkits built around electronic systems have opened new prospects for Artificial Intelligence (AI). With the continuous use of data-centric systems and machines in our lives, such as social media, surveys, emails, and reports, data has become the center of attention for scientists, motivating them to build decision-making and operational support systems across multiple domains. With recent breakthroughs in artificial intelligence, machine learning and deep learning models have achieved remarkable advances in computer vision, e-commerce, cybersecurity, and healthcare. In particular, numerous applications provide efficient solutions that assist radiologists and doctors with medical image analysis, which remains the essential visual evidence used to construct the final observation and diagnosis. Medical research in cancerology and oncology has recently been blended with knowledge from computer engineering and data science. In this context, automated assistance, commonly known as a Computer-Aided Diagnosis (CAD) system, has become a popular area of research and development in recent decades. CAD systems have been developed using multidisciplinary knowledge and expertise, and they are used to analyze patient information to assist clinicians and practitioners in their decision-making. Detecting and investigating abnormal tumors to treat and prevent cancer remains a crucial daily task for radiologists and oncologists. A CAD system can therefore provide decision support for many applications in cancer patient care, such as lesion detection, characterization, cancer staging, tumor assessment, recurrence, and prognosis prediction.
    Breast cancer is one of the most common cancers in women across the world and a leading cause of mortality among women, with incidence increasing drastically every year. Early detection and diagnosis of abnormalities in screened breasts is acknowledged as the best way to assess the risk of developing breast cancer and thus reduce the rising mortality rate. Accordingly, this dissertation proposes a new state-of-the-art CAD system for breast cancer diagnosis based on deep learning technology and cutting-edge computer vision techniques. Mammography screening is recognized as the most effective tool for detecting breast lesions early and reducing the mortality rate. It helps reveal abnormalities in the breast such as mass lesions, architectural distortion, and microcalcifications. With the number of patients screened each day continuously increasing, a second-reading tool or assistance system could streamline the process of breast cancer diagnosis. Mammograms can be obtained with different modalities, such as X-ray film scanners and Full-Field Digital Mammography (FFDM) systems. The quality of the mammograms and the characteristics of the breast (e.g., density, size) and/or the tumors (e.g., location, size, shape) can affect the final diagnosis, so radiologists may miss lesions and consequently produce false detections and diagnoses. This work was therefore motivated to improve the reading of mammograms and increase the accuracy of these challenging tasks.
The efforts presented in this work consist of the design and implementation of new neural network models for a fully integrated CAD system dedicated to breast cancer diagnosis. In the first step, the approach automatically detects and identifies breast lesions in entire mammograms using a model-fusion methodology. The second step focuses only on mass lesions: the system segments the detected bounding boxes of the mass lesions to mask out their background. A new neural network architecture for mass segmentation is proposed, integrated with a new data enhancement and augmentation technique. Finally, a third stage uses a stacked ensemble of neural networks to classify and diagnose the pathology (i.e., malignant or benign), the Breast Imaging Reporting and Data System (BI-RADS) assessment score (i.e., from 2 to 6), and/or the shape (i.e., round, oval, lobulated, or irregular) of the segmented breast lesions. Another contribution applies the first stage of the CAD system to a retrospective analysis and comparison of the model on prior mammograms from a private dataset, joining the learning of the detection and classification model with image-to-image mapping between prior and current screening views. Each step of the CAD system was evaluated on public and private datasets, and the results were compared fairly against benchmark mammography datasets. The integrated framework was also tested for deployment and demonstration. The detection and identification of breast masses reached an overall accuracy of 97%. The segmentation of breast masses, evaluated together with the previous stage, achieved an overall performance of 92%. Finally, the classification and diagnosis step that defines the outcome of the CAD system reached an overall pathology classification accuracy of 96%, a BI-RADS categorization accuracy of 93%, and a shape classification accuracy of 90%. The results in this dissertation indicate that the proposed integrated framework, using all of the automated steps, can surpass current deep learning approaches. A limitation of the proposed work is the long training time of the different methods, due to the heavy computation of the developed neural networks and their huge number of trainable parameters. Future work could combine different mammography datasets, reduce the long training times of the deep learning models, and upgrade the CAD system with annotated datasets covering additional breast lesion types such as calcification and architectural distortion. In summary, the proposed framework first detects and identifies suspicious breast lesions in X-ray mammograms; it then focuses on mass lesions and segments the detected ROIs to remove the tumor background and highlight the contours, texture, and shape of the lesions; finally, it predicts the diagnostic decision, classifying the pathology of the lesions and investigating other characteristics such as the grading assessment and shape type. The dissertation presents a CAD system to assist doctors and experts in identifying the risk of breast cancer.
Overall, the proposed CAD method combines advances in image processing, deep learning, and image-to-image translation for a biomedical application.
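To make the third (classification) stage concrete, here is a minimal sketch of stacked-ensemble prediction, reduced to a weighted combination of member models' class probabilities. The member outputs, meta-weights, and two-class setup are hypothetical; the abstract does not specify the dissertation's actual meta-learner.

```python
import numpy as np

def stacked_ensemble_predict(member_probs, meta_weights):
    """Combine member classifiers' class probabilities with a learned
    meta-model, simplified here to a weighted vote."""
    stacked = np.stack(member_probs, axis=0)  # shape: (n_members, n_classes)
    combined = meta_weights @ stacked         # weighted combination per class
    return int(np.argmax(combined))           # index of the predicted class

# Hypothetical usage with three members over {benign, malignant}:
probs = [np.array([0.3, 0.7]), np.array([0.4, 0.6]), np.array([0.2, 0.8])]
weights = np.array([0.5, 0.2, 0.3])  # stand-in for learned meta-weights
print(stacked_ensemble_predict(probs, weights))  # -> 1 (malignant)
```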

    Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks

    Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the GI tract (esophagus, stomach, duodenum) and surrounding organs (liver, spleen, left kidney, gallbladder). We directly compared the segmentation accuracy of the proposed method to existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 vs. 0.71, 0.74 and 0.74 for the pancreas, 0.90 vs. 0.85, 0.87 and 0.83 for the stomach, and 0.76 vs. 0.68, 0.69 and 0.66 for the esophagus. We conclude that deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
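Since the comparison above is reported in Dice scores, a minimal sketch of that overlap metric for binary masks is given below (the standard definition; the empty-mask convention is our own choice).

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks,
    the overlap metric quoted above (e.g., 0.78 for the pancreas)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```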