8 research outputs found

    Myocardial Infarction Quantification From Late Gadolinium Enhancement MRI Using Top-hat Transforms and Neural Networks

    Significance: Late gadolinium enhanced magnetic resonance imaging (LGE-MRI) is the gold-standard technique for myocardial viability assessment. Although the technique accurately reflects the damaged tissue, there is no clinical standard for quantifying myocardial infarction (MI), so most algorithms remain expert dependent. Objectives and Methods: In this work a new automatic method for MI quantification from LGE-MRI is proposed. The segmentation approach is devised to accurately detect not only hyper-enhanced lesions but also microvascular-obstructed areas. Moreover, it includes a myocardial disease detection step that extends the algorithm to scans of healthy subjects. The method follows a cascade approach: first, diseased slices are identified by a convolutional neural network (CNN); second, a fast coarse scar segmentation is obtained by means of morphological operations; third, the segmentation is refined by a boundary-voxel reclassification strategy using an ensemble of CNNs. For validation, reproducibility assessment, and comparison against other methods, the approach was tested on a large, multi-field, expert-annotated LGE-MRI database including healthy and diseased cases. Results and Conclusion: In an exhaustive comparison against nine reference algorithms, the proposed method achieved state-of-the-art segmentation performance and was the only method whose volumetric scar quantification agreed with the expert delineations. Moreover, it reproduced the intra- and inter-observer variability ranges. It is concluded that the method could suitably be transferred to clinical scenarios. Comment: Submitted to IEE
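    The coarse step of the cascade relies on top-hat morphology to highlight hyper-enhanced tissue before CNN-based refinement. The following is a minimal Python sketch of that idea only, not the authors' implementation: it assumes a pre-segmented myocardium mask, and the disk radius and Otsu threshold are illustrative assumptions rather than the paper's parameters.

        # Illustrative coarse scar segmentation via a white top-hat transform.
        # Assumptions (not from the paper): a precomputed myocardium mask,
        # a disk structuring element of radius 5, and Otsu thresholding.
        import numpy as np
        from skimage.morphology import white_tophat, disk
        from skimage.filters import threshold_otsu

        def coarse_scar_mask(lge_slice: np.ndarray, myo_mask: np.ndarray,
                             radius: int = 5) -> np.ndarray:
            """Rough binary scar mask restricted to the myocardium."""
            # White top-hat = image minus its morphological opening: it keeps
            # bright structures smaller than the structuring element
            # (candidate hyper-enhanced regions).
            enhanced = white_tophat(lge_slice, disk(radius))
            inside = enhanced[myo_mask > 0]
            if inside.size == 0:
                return np.zeros_like(myo_mask, dtype=bool)
            thr = threshold_otsu(inside)
            return (enhanced > thr) & (myo_mask > 0)

    In the paper's cascade, such a coarse mask is then refined by reclassifying boundary voxels with an ensemble of CNNs.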

    Convolution layer with nonlinear kernel of square of subtraction for dark-direction-free recognition of images

    A nonlinear convolution kernel with a bias is proposed for convolutional neural networks. Instead of a dot product, the negative square of the difference between the input pixel values and the kernel coefficients is accumulated over each window to form the feature map in the convolution layer. The operation is nonlinear with respect to both the input pixels and the kernel weight coefficients. Max pooling may follow the feature map, and the results are finally fully connected to the output nodes of the network. When gradient descent is used to train the coefficients and biases, the gradient of the squared-subtraction term appears in the overall gradient with respect to each kernel coefficient. The new subtraction kernel is applied to two image sets and shows better performance than the standard linear convolution kernel. Each coefficient of the nonlinear subtraction kernel carries an image-level interpretation beyond its purely numerical value. The subtraction kernel performs equally well on a black-and-white image set and its intensity-reversed version, or on a gray-scale image set and its reversed version. This property matters when patterns mix light and dark colors, or blend with the background color, and both polarities are equally important.
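    As an illustration of the operation described above (not the authors' code), here is a minimal PyTorch sketch of a squared-subtraction convolution layer; the class name, weight initialization, and stride/padding choices are assumptions, and autograd supplies the squared-subtraction gradient mentioned in the abstract.

        # Each output value is a bias minus the sum of squared differences
        # between an input patch and the kernel (instead of a dot product).
        import torch
        import torch.nn as nn

        class SquareSubtractionConv2d(nn.Module):
            def __init__(self, in_channels: int, out_channels: int, kernel_size: int):
                super().__init__()
                self.k = kernel_size
                self.weight = nn.Parameter(
                    torch.randn(out_channels, in_channels * kernel_size ** 2) * 0.1)
                self.bias = nn.Parameter(torch.zeros(out_channels))
                self.unfold = nn.Unfold(kernel_size)  # stride 1, no padding

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                n, _, h, w = x.shape
                patches = self.unfold(x)                        # (N, C*k*k, L)
                diff = patches.unsqueeze(1) - self.weight[None, :, :, None]
                out = self.bias[None, :, None] - (diff ** 2).sum(dim=2)
                return out.view(n, -1, h - self.k + 1, w - self.k + 1)

    Because the response depends only on the squared distance between patch and kernel, inverting image intensities yields the same response with a correspondingly inverted kernel, which matches the dark-direction-free behavior described in the abstract.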

    Quantitative Analysis of Patch-Based Fully Convolutional Neural Networks for Tissue Segmentation on Brain Magnetic Resonance Imaging

    No full text

    Automated brain segmentation methods for clinical quality MRI and CT images

    Alzheimer’s disease (AD) is a progressive neurodegenerative disorder associated with brain tissue loss. Accurate estimation of this loss is critical for the diagnosis and prognosis of AD and for tracking its progression. Structural magnetic resonance imaging (sMRI) and X-ray computed tomography (CT) are widely used imaging modalities that help map brain tissue distributions in vivo. As manual image segmentation is tedious and time-consuming, automated segmentation methods are increasingly applied to head MRI and head CT images to estimate brain tissue volumes. However, existing automated methods can only be applied to images with high spatial resolution, and their accuracy on heterogeneous, low-quality clinical images has not been tested. Further, automated brain tissue segmentation methods for CT are not available, although CT is acquired more widely than MRI in the clinical setting. For these reasons, large clinical imaging archives are unusable for research studies. In this work, we identify and develop automated tissue segmentation and brain volumetry methods that can be applied to clinical quality MRI and CT images. In the first project, we surveyed current MRI methods and validated their accuracy when applied to clinical quality images. We then developed CTSeg, a tissue segmentation method for CT images, by adopting the MRI technique that exhibited the highest reliability. CTSeg is an atlas-based statistical modeling method that relies on hand-curated features and cannot be applied to images of subjects from different disease and age groups. Advanced deep learning-based segmentation methods use hierarchical representations and learn complex features in a data-driven manner. In our final project, we developed a fully automated deep learning segmentation method that uses contextual information to segment clinical quality head CT images. Applying this method to an AD dataset revealed larger differences between the brain volumes of AD and control subjects. This dissertation demonstrates the potential of applying automated methods to large clinical imaging archives to answer research questions in a variety of studies.
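    The final project is described only at a high level; as a purely illustrative sketch (not the dissertation's CTSeg or its actual deep learning architecture), a fully convolutional network can inject spatial context through dilated convolutions when assigning a tissue class to every voxel of a CT slice. The channel counts and the number of tissue classes below are assumptions.

        # Toy fully convolutional segmenter for 2-D head CT slices.
        # Dilated convolutions enlarge the receptive field, one simple way
        # to use contextual information for per-voxel tissue labeling.
        import torch
        import torch.nn as nn

        class ContextualCTSegNet(nn.Module):
            def __init__(self, num_classes: int = 4):  # e.g. background/CSF/GM/WM (assumed)
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 32, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
                )
                self.classifier = nn.Conv2d(32, num_classes, 1)  # per-voxel class scores

            def forward(self, ct_slice: torch.Tensor) -> torch.Tensor:
                # ct_slice: (N, 1, H, W) intensities; returns (N, num_classes, H, W) logits
                return self.classifier(self.features(ct_slice))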

    Deep Learning in Computer-Assisted Maxillofacial Surgery
