
    Robust Registration of Calcium Images by Learned Contrast Synthesis

    Multi-modal image registration is a challenging task that is vital for fusing complementary signals for subsequent analyses. Despite much research into cost functions addressing this challenge, there exist cases in which these are ineffective. In this work, we show (1) that this is true for the registration of in-vivo Drosophila brain volumes visualizing genetically encoded calcium indicators to an nc82 atlas, and (2) that machine-learning-based contrast synthesis can yield improvements. More specifically, the number of subjects for which the registration outright failed was greatly reduced (from 40% to 15%) by using a synthesized image.
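
    The synthesis idea above — learn a mapping from calcium-indicator contrast to atlas-like contrast so that registration becomes effectively mono-modal — can be sketched with a simple least-squares intensity model. The actual learned model is not specified in the abstract; the polynomial features and toy data below are illustrative assumptions only.

```python
import numpy as np

def fit_synthesis(features, target):
    # Least-squares fit of a feature-to-contrast mapping; a linear model
    # stands in for the learned synthesis model (assumption).
    w, *_ = np.linalg.lstsq(features, target, rcond=None)
    return w

# Toy example: per-voxel features are [intensity, intensity**2, 1].
rng = np.random.default_rng(0)
calcium = rng.random(500)                                   # flattened calcium volume
features = np.stack([calcium, calcium**2, np.ones_like(calcium)], axis=1)
atlas_like = 2.0 * calcium - 0.5 * calcium**2 + 0.1         # target (nc82-like) contrast

w = fit_synthesis(features, atlas_like)
synthesized = features @ w                                  # image used for registration
print(bool(np.max(np.abs(synthesized - atlas_like)) < 1e-8))  # -> True
```

    Registering the synthesized image to the atlas can then use a mono-modal cost such as normalized cross-correlation.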

    Brain Lesion Segmentation through Image Synthesis and Outlier Detection

    Cerebral small vessel disease (SVD) can manifest in a number of ways. Many of these result in hyperintense regions visible on T2-weighted magnetic resonance (MR) images. The automatic segmentation of these lesions has been the focus of many studies. However, previous methods tended to be limited to certain types of pathology, as a consequence of either restricting the search to the white matter or training on an individual pathology. Here we present an unsupervised abnormality detection method which is able to detect abnormally hyperintense regions on FLAIR regardless of the underlying pathology or location. The method uses a combination of image synthesis, Gaussian mixture models, and one-class support vector machines, and needs to be trained only on healthy tissue. We evaluate our method by comparing segmentation results from 127 subjects with SVD against three established methods and report significantly superior performance across a number of metrics.
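
    The two-stage pipeline described above — a Gaussian mixture fit to healthy intensities, followed by a one-class SVM that draws the normal/abnormal boundary — can be sketched as follows. The 1-D intensity features and toy data are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# "Healthy" FLAIR-like intensities: training uses healthy tissue only.
healthy = rng.normal(loc=100.0, scale=10.0, size=(2000, 1))

# Stage 1: a Gaussian mixture scores each voxel's likelihood under the
# healthy-intensity model (features here are 1-D; an assumption).
gmm = GaussianMixture(n_components=2, random_state=0).fit(healthy)
train_feat = gmm.score_samples(healthy).reshape(-1, 1)

# Stage 2: a one-class SVM trained on the healthy likelihood features
# learns the boundary of "normal".
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(train_feat)

# Test voxels: healthy plus an abnormally hyperintense cluster (lesion-like).
test = np.concatenate([rng.normal(100.0, 10.0, (100, 1)),
                       rng.normal(200.0, 5.0, (20, 1))])
pred = ocsvm.predict(gmm.score_samples(test).reshape(-1, 1))  # -1 = outlier
print(int((pred[-20:] == -1).sum()))  # count of flagged hyperintense voxels
```

    The hyperintense voxels receive very low likelihood under the healthy model and fall outside the one-class boundary, regardless of which pathology produced them.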

    Automated brain segmentation methods for clinical quality MRI and CT images

    Alzheimer’s disease (AD) is a progressive neurodegenerative disorder associated with brain tissue loss. Accurate estimation of this loss is critical for the diagnosis, prognosis, and tracking of the progression of AD. Structural magnetic resonance imaging (sMRI) and X-ray computed tomography (CT) are widely used imaging modalities that help map brain tissue distributions in vivo. As manual image segmentation is tedious and time-consuming, automated segmentation methods are increasingly applied to head MRI and head CT images to estimate brain tissue volumes. However, existing automated methods can be applied only to images that have high spatial resolution, and their accuracy on heterogeneous low-quality clinical images has not been tested. Further, automated brain tissue segmentation methods for CT are not available, although CT is more widely acquired than MRI in the clinical setting. For these reasons, large clinical imaging archives are unusable for research studies. In this work, we identify and develop automated tissue segmentation and brain volumetry methods that can be applied to clinical quality MRI and CT images. In the first project, we surveyed current MRI methods and validated their accuracy when applied to clinical quality images. We then developed CTSeg, a tissue segmentation method for CT images, by adopting the MRI technique that exhibited the highest reliability. CTSeg is an atlas-based statistical modeling method that relies on hand-curated features and cannot be applied to images of subjects from different disease and age groups. Advanced deep-learning-based segmentation methods use hierarchical representations and learn complex features in a data-driven manner. In our final project, we developed a fully automated deep-learning segmentation method that uses contextual information to segment clinical quality head CT images. The application of this method to an AD dataset revealed larger differences between the brain volumes of AD and control subjects. This dissertation demonstrates the potential of applying automated methods to large clinical imaging archives to answer research questions in a variety of studies.

    Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database

    Radiologists in their daily work routinely find and annotate significant abnormalities on a large number of radiology images. Such abnormalities, or lesions, have been collected over the years and stored in hospitals' picture archiving and communication systems. However, they are basically unsorted and lack semantic annotations such as type and location. In this paper, we aim to organize and explore them by learning a deep feature representation for each lesion. A large-scale and comprehensive dataset, DeepLesion, is introduced for this task. DeepLesion contains bounding boxes and size measurements of over 32K lesions. To model their similarity relationship, we leverage multiple supervision information, including types, self-supervised location coordinates, and sizes. These require little manual annotation effort but describe useful attributes of the lesions. A triplet network is then utilized to learn lesion embeddings, with a sequential sampling strategy to depict their hierarchical similarity structure. Experiments show promising qualitative and quantitative results on lesion retrieval, clustering, and classification. The learned embeddings can be further employed to build a lesion graph for various clinically useful applications. We propose algorithms for intra-patient lesion matching and missing annotation mining. Experimental results validate their effectiveness. Comment: Accepted by CVPR 2018; DeepLesion URL added.
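
    The triplet objective used to learn the lesion embeddings can be sketched as below. The embeddings here are toy vectors; in the paper they are produced by the network under its sequential sampling strategy, which is not reproduced.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Triplet margin loss: pull similar lesions (anchor, positive)
    # together, push dissimilar ones (anchor, negative) at least
    # `margin` apart in embedding space.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings (illustrative values only).
a = np.array([0.0, 0.0])   # anchor lesion
p = np.array([0.1, 0.0])   # same type/location: nearby
n = np.array([3.0, 4.0])   # different type: far away

print(triplet_loss(a, p, n))  # well-separated triplet -> 0.0
```

    During training, triplets with nonzero loss drive gradient updates; retrieval and clustering then operate on distances in the learned embedding space.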

    MR-based attenuation correction and scatter correction in neurological PET/MR imaging with 18F-FDG

    The aim was to investigate the effects of MR-based attenuation correction (MRAC) and scatter correction on positron emission tomography (PET) image quantification in neurological PET/MR with 18F-FDG. A multi-center phantom study was conducted to investigate the effect of MRAC between PET/MR and PET/CT systems (I). An MRAC method to derive bone from T1-weighted MR images was developed (II, III). Finally, scatter correction accuracy with MRAC was investigated (IV). The results show that the quantitative accuracy in PET is well comparable between PET/MR and PET/CT systems when an attenuation correction method resembling CT-based attenuation correction (CTAC) is implemented. This allows achieving a PET bias within standard uptake value (SUV) quantification repeatability (< 10 % error), and within the repeatability of PET in most systems and brain regions (< 5 % error). In addition, an MRAC considering soft tissue, air and bone can be derived using T1-weighted images alone. The improved version of the MRAC method achieves a quantitative accuracy feasible for advanced applications (< 5 % error). MRAC has a minor effect on scatter correction accuracy (< 3 % error), even when using MRAC without bone. In conclusion, MRAC can be considered the largest contributing factor to PET quantification bias in 18F-FDG neurological PET/MR. This finding is not explicitly limited to 18F-FDG imaging. Once an MRAC method that performs close to CTAC is implemented, there is no reason why a PET/MR system would perform differently from a PET/CT system. Such an MRAC method has been developed and is freely available (http://bit.ly/2fx6Jjz). Scatter correction can be considered a non-issue in neurological PET/MR imaging when using 18F-FDG.
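
    A minimal sketch of the core MRAC step — turning a tissue segmentation (here assumed to come from the T1-weighted image) into a 511 keV attenuation map — is shown below. The three-class labeling and the bone coefficient are assumptions using values typical of the MRAC literature, not the thesis's exact numbers.

```python
import numpy as np

# Typical linear attenuation coefficients at 511 keV (cm^-1); the bone
# value in particular varies across the literature (assumption).
MU_511KEV = {"air": 0.0, "soft": 0.096, "bone": 0.151}

def mu_map(seg):
    # Map integer class labels (0 = air, 1 = soft tissue, 2 = bone)
    # to linear attenuation coefficients via a lookup table.
    lut = np.array([MU_511KEV["air"], MU_511KEV["soft"], MU_511KEV["bone"]])
    return lut[seg]

# Toy 2x3 segmentation slice.
seg = np.array([[0, 1, 2],
                [1, 1, 0]])
print(mu_map(seg))
```

    Omitting the bone class (mapping label 2 to soft tissue) reproduces the "MRAC without bone" case whose effect on scatter correction the thesis quantifies.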

    Unsupervised Lesion Detection via Image Restoration with a Normative Prior

    Unsupervised lesion detection is a challenging problem that requires accurately estimating normative distributions of healthy anatomy and detecting lesions as outliers without training examples. Recently, this problem has received increased attention from the research community following advances in unsupervised learning with deep learning. Such advances allow the estimation of high-dimensional distributions, such as normative distributions, with higher accuracy than previous methods. The main approach of the recently proposed methods is to learn a latent-variable model parameterized with networks to approximate the normative distribution using example images showing healthy anatomy, perform prior-projection, i.e. reconstruct the image with lesions using the latent-variable model, and determine lesions based on the differences between the reconstructed and original images. While promising, the prior-projection step often leads to a large number of false positives. In this work, we approach unsupervised lesion detection as an image restoration problem and propose a probabilistic model that uses a network-based prior as the normative distribution and detects lesions pixel-wise using MAP estimation. The probabilistic model punishes large deviations between restored and original images, reducing false positives in pixel-wise detections. Experiments with gliomas and stroke lesions in brain MRI using publicly available datasets show that the proposed approach outperforms state-of-the-art unsupervised methods by a substantial margin, +0.13 (AUC), for both glioma and stroke detection. Extensive model analysis confirms the effectiveness of MAP-based image restoration. Comment: Extended version of 'Unsupervised Lesion Detection via Image Restoration with a Normative Prior' (MIDL 2019).
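
    The MAP restoration idea — a data term that punishes large deviations from the observed image plus a normative prior, with lesions scored by the pixel-wise residual — can be sketched with a trivially Gaussian prior standing in for the paper's network-based prior (an assumption; the real prior is high-dimensional and learned).

```python
import numpy as np

def map_restore(y, prior_mean, lam=1.0, steps=200, lr=0.1):
    # MAP restoration sketch: minimize
    #     lam * ||x - y||^2 + ||x - prior_mean||^2
    # i.e. a Gaussian data term that punishes large deviations from the
    # observed image y, plus a Gaussian normative prior (a stand-in for
    # the network-based prior). Gradient descent mirrors the iterative
    # restoration; here the problem also has a closed-form solution.
    x = y.copy()
    for _ in range(steps):
        grad = 2 * lam * (x - y) + 2 * (x - prior_mean)
        x -= lr * grad
    return x

y = np.array([1.0, 1.0, 5.0, 1.0])   # observed image, "lesion" at index 2
prior = np.ones(4)                    # normative (healthy) expectation
x_hat = map_restore(y, prior, lam=1.0)
residual = np.abs(y - x_hat)          # pixel-wise lesion score
print(residual.argmax())              # -> 2, the lesion pixel
```

    The data term keeps the restoration close to the observation in healthy regions, so only genuinely abnormal pixels produce large residuals, which is how the approach reduces false positives relative to prior-projection.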

    Converting Neuroimaging Big Data to information: Statistical Frameworks for interpretation of Image Driven Biomarkers and Image Driven Disease Subtyping

    Large scale clinical trials and population-based research studies collect huge amounts of neuroimaging data. Machine learning classifiers can potentially use these data to train models that diagnose brain-related diseases from individual brain scans. In this dissertation we address two distinct challenges that beset a wider adoption of these tools for diagnostic purposes. The first challenge is the lack of a statistical inference machinery for highlighting brain regions that contribute significantly to the classifier decisions. We address this challenge by developing an analytic framework for interpreting support vector machine (SVM) models used for neuroimaging-based diagnosis of psychiatric disease. To do this we first note that permutation testing using SVM model components provides a reliable inference mechanism for model interpretation. We then derive our analysis framework by showing that, under certain assumptions, the permutation-based null distributions associated with SVM model components can be approximated analytically using the data themselves. Inference based on these analytic null distributions is validated on real and simulated data. p-values computed from our analysis can accurately identify anatomical features that differentiate the groups used for classifier training. Since the majority of clinical and research communities are trained in understanding statistical p-values rather than machine learning techniques like the SVM, we hope that this work will lead to a better understanding of SVM classifiers and motivate a wider adoption of SVM models for image-based diagnosis of psychiatric disease. A second deficiency of learning-based neuroimaging diagnostics is that they implicitly assume that 'a single homogeneous pattern of brain changes drives population-wide phenotypic differences'.
    In reality it is more likely that multiple patterns of brain deficits drive the complexities observed in the clinical presentation of most diseases. Understanding this heterogeneity may allow us to build better classifiers for identifying such diseases from individual brain scans. However, analytic tools to explore this heterogeneity are missing. With this in view, we present in this dissertation a framework for exploring disease heterogeneity using population neuroimaging data. The approach first computes difference images by comparing matched cases and controls and then clusters these differences. The cluster centers define a set of deficit patterns that differentiates the two groups. By allowing for more than one pattern of difference between two populations, our framework makes a radical departure from traditional tools used for neuroimaging group analyses. We hope that this leads to a better understanding of the processes that lead to disease and, ultimately, to improved image-based disease classifiers.
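
    The heterogeneity framework described above — compute matched case-control difference images, then cluster them so that the cluster centers define deficit patterns — can be sketched as follows. The synthetic two-pattern data, the feature dimension, and the fixed number of clusters are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Matched case-control difference images (flattened), drawn from two
# distinct deficit patterns (synthetic stand-ins for real data).
pattern_a = np.array([1.0, 0.0, 0.0, 1.0])   # e.g. one atrophy pattern
pattern_b = np.array([0.0, 1.0, 1.0, 0.0])   # e.g. a second, distinct pattern
diffs = np.concatenate([
    pattern_a + 0.05 * rng.standard_normal((30, 4)),
    pattern_b + 0.05 * rng.standard_normal((30, 4)),
])

# Cluster the differences; the cluster centers are the deficit patterns
# that differentiate cases from controls (k is fixed a priori here; the
# dissertation's model-selection details are not reproduced).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(diffs)
labels = km.labels_
print(len(set(labels[:30])), len(set(labels[30:])))  # each pattern -> one cluster
```

    Allowing k > 1 is precisely the departure from traditional group analyses, which implicitly assume a single pattern of difference between the populations.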