38 research outputs found

    Robust deep labeling of radiological emphysema subtypes using squeeze and excitation convolutional neural networks: The MESA Lung and SPIROMICS Studies

    Full text link
    Pulmonary emphysema, the progressive, irreversible loss of lung tissue, is conventionally categorized into three subtypes identifiable on pathology and on lung computed tomography (CT) images. Recent work has led to the unsupervised learning of ten spatially-informed lung texture patterns (sLTPs) on lung CT, representing distinct patterns of emphysematous lung parenchyma based on both textural appearance and spatial location within the lung, which aggregate into six robust and reproducible CT emphysema subtypes (CTES). Existing methods for sLTP segmentation, however, are slow and highly sensitive to changes in CT acquisition protocol. In this work, we present a robust 3-D squeeze-and-excitation CNN for supervised classification of sLTPs and CTES on lung CT. Our results demonstrate that this model achieves accurate and reproducible sLTP segmentation on lung CT scans, across two independent cohorts and independently of scanner manufacturer and model.
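    The squeeze-and-excitation mechanism named in the title can be sketched compactly. The following NumPy sketch (with hypothetical, untrained weights; the paper's actual network is a full 3-D CNN) shows the three steps: squeeze (global average pooling over the spatial volume), excitation (a bottleneck gating network), and per-channel rescaling of the feature map.

```python
import numpy as np

def se_block(feature_map, weights1, weights2):
    """Squeeze-and-excitation: reweight channels by global context.

    feature_map: (C, D, H, W) activation volume (3-D case, as in the paper).
    weights1: (C, C // r) squeeze FC weights (hypothetical, untrained).
    weights2: (C // r, C) excitation FC weights.
    """
    c = feature_map.shape[0]
    # Squeeze: global average pool over the spatial dimensions -> (C,)
    z = feature_map.reshape(c, -1).mean(axis=1)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid -> per-channel gates
    s = np.maximum(z @ weights1, 0.0)
    gates = 1.0 / (1.0 + np.exp(-(s @ weights2)))
    # Scale: each channel is multiplied by its gate in (0, 1)
    return feature_map * gates[:, None, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4, 4))   # 8 channels, 4x4x4 volume
w1 = rng.standard_normal((8, 2))        # reduction ratio r = 4
w2 = rng.standard_normal((2, 8))
y = se_block(x, w1, w2)                 # same shape, channels rescaled
```

    Because each gate lies in (0, 1), the block can only attenuate channels, which is what lets the network learn to suppress channels that respond to scanner-specific appearance.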

    Optimal input configuration of dynamic contrast enhanced MRI in convolutional neural networks for liver segmentation

    No full text
    Most MRI liver segmentation methods use a structural 3-D scan as input, such as a T1- or T2-weighted scan. Segmentation performance may be improved by utilizing both structural and functional information, as contained in dynamic contrast-enhanced (DCE) MR series. Dynamic information can be incorporated into a segmentation method based on convolutional neural networks (CNNs) in a number of ways. In this study, the performance of three input configurations of DCE-MR images is compared for a liver segmentation task: I) one phase image of the DCE-MR series as input image; II) the separate phases of the DCE-MR series as separate input images; and III) the separate phases of the DCE-MR series as channels of one input image. The three input configurations are fed into a dilated fully convolutional network and into a small U-net. The CNNs were trained using 19 annotated DCE-MR series and tested on another 19 annotated DCE-MR series. The performance of the three input configurations for both networks is evaluated against manual annotations. The results show that both networks perform better when the separate phases of the DCE-MR series are used as channels of one input image than with one phase as input image or with the separate phases as separate input images. No significant difference between the performances of the two network architectures was found for the separate phases as channels of one input image.
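    The three configurations differ only in how the phase images are arranged into network inputs. A minimal NumPy sketch of the tensor shapes (phase count and image size are hypothetical, not taken from the paper):

```python
import numpy as np

# A hypothetical DCE-MR series: 6 contrast phases of a 32x32 slice.
phases = [np.random.rand(32, 32) for _ in range(6)]

# I) one phase of the series as a single-channel input image
config_i = phases[0][np.newaxis]             # shape (1, 32, 32)

# II) the separate phases as independent single-channel input images
config_ii = [p[np.newaxis] for p in phases]  # 6 inputs of shape (1, 32, 32)

# III) the separate phases stacked as channels of one input image
config_iii = np.stack(phases)                # shape (6, 32, 32)
```

    Only configuration III lets the first convolutional layer mix information across phases at every pixel, which is consistent with it performing best in the study.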

    Automatic detection of the region of interest in corneal endothelium images using dense convolutional neural networks

    No full text
    In images of the corneal endothelium (CE) acquired by specular microscopy, endothelial cells are commonly only visible in a part of the image due to varying contrast, mainly caused by challenging imaging conditions as a result of a strongly curved endothelium. In order to estimate the morphometric parameters of the corneal endothelium, the analyses need to be restricted to trustworthy regions - the region of interest (ROI) - where individual cells are discernible. We developed an automatic method to find the ROI using Dense U-nets, densely connected networks of convolutional layers. We tested the method on a heterogeneous dataset of 140 images, which contains a large number of blurred, noisy, and/or out-of-focus images, where the selection of the ROI for automatic biomarker extraction is vital. By using edge images as input, which can be estimated after retraining the same network, the Dense U-net detected the trustworthy areas with an accuracy of 98.94% and an area under the ROC curve (AUC) of 0.998, without being affected by the class imbalance (9:1 in our dataset). After applying the estimated ROI to the edge images, the mean absolute percentage error (MAPE) in the estimated endothelial parameters was 0.80% for ECD, 3.60% for CV, and 2.55% for HEX.
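    The MAPE figures quoted above compare biomarker values estimated inside the detected ROI against reference values. A minimal sketch of the metric (the example values are hypothetical, not from the paper):

```python
import numpy as np

def mape(estimated, reference):
    """Mean absolute percentage error between estimated and reference
    biomarker values (e.g. per-image cell density), in percent."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(estimated - reference) / np.abs(reference))

# Hypothetical per-image estimates vs. manual reference values:
err = mape([2520.0, 2475.0], [2500.0, 2500.0])  # approx. 0.9 (percent)
```

    Because the error is normalized per image, MAPE is comparable across biomarkers with very different scales, which is why a single table can report it for all three parameters.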

    Tissue segmentation in volumetric laser endomicroscopy data using FusionNet and a domain-specific loss function

    No full text
    Volumetric Laser Endomicroscopy (VLE) is a promising balloon-based imaging technique for detecting early neoplasia in Barrett's Esophagus. Computer-aided detection (CAD) techniques in particular show great promise, as medical doctors cannot reliably find disease patterns in the noisy VLE signal. However, an essential pre-processing step for the CAD system is tissue segmentation. At present, tissue is segmented manually, which does not scale to the entire VLE scan consisting of 1,200 frames of 4,096 × 2,048 pixels. Furthermore, the current CAD methods cannot use the VLE scans to their full potential, as only a small segment of the esophagus is selected for further processing, while an automated segmentation system results in significantly more available data. This paper explores the possibility of automatically segmenting relevant tissue in VLE scans using FusionNet and a domain-specific loss function. The contribution of this work is threefold. First, we propose a tissue segmentation algorithm for VLE scans. Second, we introduce a weighted ground truth that exploits the signal-to-noise ratio characteristics of the VLE data. Third, we compare our algorithm's segmentations against those of two additional VLE experts. The results show that our algorithm's annotations are indistinguishable from the expert annotations, and the algorithm can therefore be used as a preprocessing step for further classification of the tissue.
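    The idea of a weighted ground truth can be sketched as a per-pixel weighted loss: pixels with low signal-to-noise ratio contribute less to training. The following NumPy sketch is a hypothetical stand-in for the paper's domain-specific loss, shown here as weighted binary cross-entropy.

```python
import numpy as np

def weighted_pixel_loss(prob_fg, labels, weights):
    """Per-pixel weighted binary cross-entropy.

    prob_fg: predicted tissue probability per pixel; labels: 0/1 ground
    truth; weights: per-pixel weights, e.g. derived from the local
    signal-to-noise ratio so that noisy deep regions count less.
    """
    eps = 1e-7
    ce = -(labels * np.log(prob_fg + eps)
           + (1.0 - labels) * np.log(1.0 - prob_fg + eps))
    return np.sum(weights * ce) / np.sum(weights)

# Down-weighting a noisy pixel reduces its influence on the loss:
p = np.array([0.9, 0.2])        # confident correct pixel, noisy wrong pixel
y = np.array([1.0, 1.0])
uniform = weighted_pixel_loss(p, y, np.array([1.0, 1.0]))
snr_weighted = weighted_pixel_loss(p, y, np.array([1.0, 0.2]))
```

    With SNR-based weights, the poorly-predicted but noisy pixel no longer dominates the loss, so the network is not penalized for regions where even experts cannot see the tissue boundary.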

    and quantitative evaluation of brain MRI data with multi-phase three-dimensional implicit deformable

    No full text
    Semi-automatic segmentation of JIA-induced inflammation in MRI images of ankle joints

    No full text
    The autoimmune disease Juvenile Idiopathic Arthritis (JIA) affects children under 16 years of age and leads to inflamed synovial membranes in affected joints. In clinical practice, characteristics of these inflamed membranes are used to stage the disease progression and to predict erosive bone damage. Manual outlining of inflammatory regions in each slice of an MRI dataset is still the gold standard for detection and quantification; however, this process is very tedious and time-consuming. In addition, inter- and intra-observer variability is a known problem with human annotators. We have developed the first method to detect inflamed regions in and around major joints in the human ankle. First, we use an adapted coupled shape model framework to segment the ankle bones in an MRI dataset. Based on these segmentations, joints are defined as locations where two bones are particularly close to each other. A number of potential inflammation candidates are generated using multi-level thresholding. Since it is known that inflamed synovial membranes occur in the proximity of joints, we filter out structures with similar intensities, such as vessels and tendon sheaths, using not only a vesselness filter, but also their distance to the joints and their size. The method has been evaluated on a set of 10 manually annotated clinical MRI datasets and achieved the following results: precision 0.6785 ± 0.1584, recall 0.5388 ± 0.1213, Dice 0.5696 ± 0.0976.
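    The candidate-filtering step described above reduces to simple rules over per-candidate features. A minimal sketch (the feature names and thresholds are hypothetical, not the paper's values):

```python
def filter_candidates(candidates, max_joint_distance=10.0,
                      min_size=5, max_size=500):
    """Keep inflammation candidates that lie near a joint and have a
    plausible size, discarding vessel- or tendon-like structures.

    candidates: list of dicts with 'distance_to_joint' (mm) and
    'size' (voxels); thresholds are illustrative only.
    """
    return [c for c in candidates
            if c['distance_to_joint'] <= max_joint_distance
            and min_size <= c['size'] <= max_size]

found = filter_candidates([
    {'distance_to_joint': 3.0, 'size': 120},   # near a joint, kept
    {'distance_to_joint': 25.0, 'size': 80},   # too far from any joint
    {'distance_to_joint': 4.0, 'size': 2000},  # too large, e.g. tendon sheath
])
```

    In the paper the same idea is combined with a vesselness filter, so tubular structures are rejected even when they pass the distance and size tests.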

    Progressively growing convolutional networks for end-to-end deformable image registration

    No full text
    Deformable image registration is often a slow process when using conventional methods. To speed up deformable registration, there is growing interest in using convolutional neural networks. They are comparatively fast and can be trained to estimate full-resolution deformation fields directly from pairs of images. Because deep learning-based registration methods often require rigid or affine pre-registration of the images, they do not perform true end-to-end image registration. To address this, we propose a progressive training method for end-to-end image registration with convolutional networks. The network is first trained to find large deformations at a low resolution, using a smaller part of the full architecture. The network is then gradually expanded during training by adding higher-resolution layers that allow it to learn more fine-grained deformations from higher-resolution data. By starting at a lower resolution, the network learns larger deformations more quickly at the start of training, making pre-registration redundant. We apply this method to pulmonary CT data and use it to register inhalation to exhalation images. We train the network using the CREATIS pulmonary CT data set and apply the trained network to register the DIRLAB pulmonary CT data set. By computing the target registration error at corresponding landmarks, we show that the error for end-to-end registration is significantly reduced by using progressive training, while retaining sub-second registration times.
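    The coarse-to-fine idea can be sketched as follows: a displacement field estimated on a low-resolution grid is upsampled (and rescaled to the finer grid's units), and the newly added high-resolution layers contribute a residual correction. This NumPy sketch illustrates only the field composition, not the network itself; all shapes are hypothetical.

```python
import numpy as np

def upsample_field(field, factor=2):
    """Nearest-neighbour upsampling of a 2-D displacement field of shape
    (components, H, W); displacement values are scaled by the same
    factor so they stay in units of the finer grid's voxels."""
    up = np.repeat(np.repeat(field, factor, axis=1), factor, axis=2)
    return up * factor

# Coarse stage: large deformations on a low-resolution 4x4 grid.
coarse = np.ones((2, 4, 4)) * 0.5        # 0.5-voxel shift at low resolution
# Expanded stage: upsample, then let new high-resolution layers
# predict a fine-grained residual on the 8x8 grid.
fine_residual = np.zeros((2, 8, 8))
fine_residual[0, 0, 0] = 0.25            # small local correction
full_field = upsample_field(coarse) + fine_residual
```

    Because the coarse stage already accounts for the bulk of the motion, the high-resolution layers only need to learn small residuals, which is what makes a separate affine pre-registration unnecessary.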