5,837 research outputs found

    Deep Learning in Cardiology

    Full text link
    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply in medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
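The "layers that transform the data non-linearly" idea in this abstract can be illustrated with a minimal sketch; the layer sizes, random weights, and ReLU non-linearity below are illustrative choices, not taken from the paper:

```python
import numpy as np

def relu(x):
    # Elementwise non-linearity; without it, a stack of layers
    # collapses into a single linear map.
    return np.maximum(0.0, x)

def forward(x, weights):
    """Pass an input through a stack of non-linear layers.

    Each layer re-represents the previous layer's output, which is
    the hierarchical representation learning the abstract describes.
    """
    h = x
    for W in weights:
        h = relu(W @ h)
    return h

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 8)),  # 8 inputs -> 4 hidden units
          rng.standard_normal((2, 4))]  # 4 hidden -> 2 outputs
x = rng.standard_normal(8)
out = forward(x, layers)
print(out.shape)  # (2,)
```

In a trained network the weight matrices would be fit to data rather than drawn at random.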

    A Survey on Deep Learning in Medical Image Analysis

    Full text link
    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017

    A Contrast- and Luminance-Driven Multiscale Network Model of Brightness Perception

    Full text link
    A neural network model of brightness perception is developed to account for a wide variety of data, including the classical phenomenon of Mach bands, low- and high-contrast missing fundamental, luminance staircases, and non-linear contrast effects associated with sinusoidal waveforms. The model builds upon previous work on filling-in models that produce brightness profiles through the interaction of boundary and feature signals. Boundary computations that are sensitive to luminance steps and to continuous luminance gradients are presented. A new interpretation of feature signals through the explicit representation of contrast-driven and luminance-driven information is provided and directly addresses the issue of brightness "anchoring." Computer simulations illustrate the model's competencies. Air Force Office of Scientific Research (F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE-A303-21-93); Office of Naval Research (N00014-91-J-4100); German BMFT grant (413-5839-01 1N 101 C/1); CNPq and NUTES/UFRJ, Brazil
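The filling-in mechanism this model builds on can be sketched at its steady state: feature signals spread laterally until blocked by boundary signals, so each bounded region settles to a uniform brightness. A minimal 1-D illustration (the interface and the region-mean steady state are simplifying assumptions, not the paper's implementation):

```python
import numpy as np

def fill_in(features, boundaries):
    """Steady state of 1-D filling-in: within each region delimited
    by boundary signals, the feature signal spreads to its region
    mean, producing a piecewise-uniform brightness profile."""
    out = np.empty_like(features, dtype=float)
    start = 0
    # A boundary at index i blocks diffusion between pixels i-1 and i.
    cuts = list(np.flatnonzero(boundaries)) + [len(features)]
    for c in cuts:
        if c > start:
            out[start:c] = features[start:c].mean()
        start = c
    return out

# A boundary at index 2 keeps the two luminance levels separate.
print(fill_in(np.array([1., 1., 5., 5.]), np.array([0, 0, 1, 0])))
```

Without the boundary, the same features would diffuse to a single average brightness across the whole array.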

    An Interpretable Deep Hierarchical Semantic Convolutional Neural Network for Lung Nodule Malignancy Classification

    Full text link
    While deep learning methods are increasingly being applied to tasks such as computer-aided diagnosis, these models are difficult to interpret, do not incorporate prior domain knowledge, and are often considered a "black box." The lack of model interpretability hinders them from being fully understood by target users such as radiologists. In this paper, we present a novel interpretable deep hierarchical semantic convolutional neural network (HSCNN) to predict whether a given pulmonary nodule observed on a computed tomography (CT) scan is malignant. Our network provides two levels of output: 1) low-level radiologist semantic features, and 2) a high-level malignancy prediction score. The low-level semantic outputs quantify the diagnostic features used by radiologists and serve to explain how the model interprets the images in an expert-driven manner. The information from these low-level tasks, along with the representations learned by the convolutional layers, is then combined and used to infer the high-level task of predicting nodule malignancy. This unified architecture is trained by optimizing a global loss function including both low- and high-level tasks, thereby learning all the parameters within a joint framework. Our experimental results using the Lung Image Database Consortium (LIDC) show that the proposed method not only produces interpretable lung cancer predictions but also achieves significantly better results compared to common 3D CNN approaches.
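A global loss over low- and high-level tasks of the kind this abstract describes might be sketched as a weighted sum of per-task losses; the use of binary cross-entropy and the `weight` parameter below are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def bce(p, y, eps=1e-7):
    # Binary cross-entropy for one predicted probability p and label y.
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def global_loss(semantic_preds, semantic_labels,
                malignancy_pred, malignancy_label, weight=1.0):
    """Joint objective: the sum of low-level semantic-feature losses
    plus a weighted high-level malignancy loss. Minimizing this single
    scalar trains all parameters in one framework."""
    low = sum(bce(p, y) for p, y in zip(semantic_preds, semantic_labels))
    high = bce(malignancy_pred, malignancy_label)
    return low + weight * high
```

Because the low-level semantic terms share gradients with the malignancy term, the learned representations are pushed to support both the explanation and the final prediction.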

    Deep Learning Methods for Industry and Healthcare

    Get PDF
    The abstract is in the attachment.

    Segmentation Of Intracranial Structures From Noncontrast Ct Images With Deep Learning

    Get PDF
    Presented in this work is an investigation of the application of artificially intelligent algorithms, namely deep learning, to generate segmentations for use in functional avoidance radiotherapy treatment planning. Specific applications of deep learning for functional avoidance include generating hippocampus segmentations from computed tomography (CT) images and generating synthetic pulmonary perfusion images from four-dimensional CT (4DCT). A single-institution dataset of 390 patients treated with Gamma Knife stereotactic radiosurgery was created. From these patients, the hippocampus was manually segmented on the high-resolution MR image and used for the development of the data processing methodology and for model testing. An attention-gated 3D residual network performed best, with 80.2% of contours meeting the clinical trial acceptability criteria. After determining the highest-performing model architecture, the model was tested on data from the RTOG-0933 Phase II multi-institutional clinical trial for hippocampal avoidance whole brain radiotherapy. From the RTOG-0933 data, an institutional observer (IO) generated contours to compare the deep learning style with the style of the physicians participating in the Phase II trial. The deep learning model's performance was evaluated through contour comparison and radiotherapy treatment planning. Results showed that the deep learning contours generated plans comparable to the IO style but differed significantly from the Phase II contours, indicating that further investigation is required before this technology can be applied clinically. Additionally, motivated by the observed deviation in contouring styles among the trial's participating physicians, the utility of deep learning as a first-pass quality assurance measure was investigated. To simulate a central review, the IO contours were compared to the treating physician contours in an attempt to identify unacceptable deviations. The deep learning model achieved an AUC of 0.80 for the left and 0.79 for the right hippocampus, indicating the potential of deep learning as a first-pass quality assurance tool.
    The methods developed for the hippocampal segmentation task were then translated to the generation of synthetic pulmonary perfusion imaging for use in functional lung avoidance radiotherapy. A clinical dataset of 58 pre- and post-radiotherapy SPECT perfusion studies (32 patients) with contemporaneous 4DCT studies was collected. Fifty studies were used to train a 3D residual network, with five-fold validation used to select the highest-performing model instances (N=5). These instances were tested on a five-patient (8-study) hold-out test set. From the predictions, 50th-percentile contours of well-perfused lung were generated and compared to contours from the clinical SPECT perfusion images. On the test set the Spearman correlation coefficient was strong (0.70, IQR: 0.61-0.76) and the functional avoidance contours agreed well (Dice of 0.803, IQR: 0.750-0.810; average surface distance of 5.92 mm, IQR: 5.68-7.55 mm). This study indicates the potential of deep learning for generating synthetic pulmonary perfusion images, but an expanded dataset is required for additional model testing.
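The Dice coefficient used above to compare contours is twice the intersection of two binary masks divided by the sum of their sizes; a minimal sketch (the example masks are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 1.0 means perfect
    agreement, 0.0 means no overlap."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / denom if denom else 1.0

# Masks sharing one of two foreground voxels each: 2*1 / (2+2) = 0.5
print(dice(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0])))  # 0.5
```

In practice the same function is applied to flattened 3D segmentation volumes rather than 1-D toy arrays.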

    AUTOMATED MIDLINE SHIFT DETECTION ON BRAIN CT IMAGES FOR COMPUTER-AIDED CLINICAL DECISION SUPPORT

    Get PDF
    Midline shift (MLS), the displacement of the brain's midline from its normal symmetric position due to illness or injury, is an important index for clinicians to assess the severity of traumatic brain injury (TBI). In this dissertation, an automated computer-aided midline shift estimation system is proposed. First, a CT slice selection algorithm (SSA) is designed to automatically select a subset of appropriate CT slices from a large number of raw images for MLS detection. Next, ideal midline detection is implemented based on skull bone anatomical features and global rotation assumptions. For the actual midline detection, a window selection algorithm (WSA) is first applied to confine the region of interest; the variational level set method is then used to segment the image and extract the ventricle contours. With a ventricle identification algorithm (VIA), the position of the actual midline is detected based on the identified right and left lateral ventricle contours. Finally, the brain midline shift is calculated from the positions of the detected ideal and actual midlines. One important application of midline shift in clinical decision making is estimating intracranial pressure (ICP). ICP monitoring is a standard procedure in the care of severe TBI patients. An automated ICP level prediction model based on machine learning is proposed in this work. Multiple features, including midline shift, intracranial air cavities, ventricle size, texture patterns, and blood amount, are used in ICP level prediction. Finally, the results are evaluated to assess the effectiveness of the proposed method in ICP level prediction.
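The final step of the pipeline above reduces to measuring the displacement between the detected ideal and actual midlines. A hedged sketch (the function name, per-slice inputs, and the max-over-slices convention are assumptions for illustration, not the dissertation's exact formulation):

```python
def midline_shift_mm(ideal_xs, actual_xs, pixel_spacing_mm):
    """Compute midline shift in millimetres from per-slice midline
    positions (in pixels).

    Per-slice shift is |actual - ideal| scaled by the in-plane pixel
    spacing; the reported MLS here is the maximum across the slices
    selected for measurement.
    """
    shifts = [abs(a - i) * pixel_spacing_mm
              for i, a in zip(ideal_xs, actual_xs)]
    return max(shifts)

# Two slices: shifts of 2.0 mm and 1.0 mm -> MLS = 2.0 mm
print(midline_shift_mm([256, 256], [260, 258], 0.5))  # 2.0
```

The ideal positions come from the skull-symmetry detection and the actual positions from the ventricle-based detection described in the abstract.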