    Unsupervised Detection of Lesions in Brain MRI using constrained adversarial auto-encoders

    Lesion detection in brain Magnetic Resonance Images (MRI) remains a challenging task. State-of-the-art approaches are mostly based on supervised learning and make use of large annotated datasets. Human beings, on the other hand, even non-experts, can detect most abnormal lesions after seeing a handful of healthy brain images. Replicating this capability of using prior information on the appearance of healthy brain structure to detect lesions can help computers achieve human-level abnormality detection, specifically reducing the need for numerous labeled examples and improving generalization to previously unseen lesions. To this end, we study detection of lesion regions in an unsupervised manner by learning the data distribution of brain MRI of healthy subjects using auto-encoder based methods. We hypothesize that one of the main limitations of the current models is the lack of consistency in the latent representation. We propose a simple yet effective constraint that helps map an image bearing a lesion close to its corresponding healthy image in the latent space. We use the Human Connectome Project dataset to learn the distribution of healthy-appearing brain MRI and report improved detection, in terms of AUC, of lesions in the BRATS challenge dataset.
    Comment: 9 pages, 5 figures, accepted at MIDL 2018
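    The latent consistency constraint described above can be illustrated with a small sketch in PyTorch. The encoder/decoder architecture, the assumed availability of paired healthy and pseudo-lesion training images, and the constraint weight are illustrative assumptions, not the authors' adversarial auto-encoder formulation.

        import torch
        import torch.nn as nn

        class AutoEncoder(nn.Module):
            """Small convolutional auto-encoder for 2D brain MRI slices (assumes 256x256 inputs)."""
            def __init__(self, latent_dim=128):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
                    nn.Flatten(),
                    nn.Linear(64 * 64 * 64, latent_dim),
                )
                self.decoder = nn.Sequential(
                    nn.Linear(latent_dim, 64 * 64 * 64), nn.ReLU(),
                    nn.Unflatten(1, (64, 64, 64)),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                )

            def forward(self, x):
                z = self.encoder(x)
                return self.decoder(z), z

        def training_loss(model, healthy, pseudo_lesion, weight=1.0):
            """Reconstruction loss on healthy images plus a latent-consistency term that
            pulls the encoding of a lesion-bearing image toward that of its healthy
            counterpart (weight is an illustrative hyper-parameter)."""
            recon, z_healthy = model(healthy)
            _, z_lesion = model(pseudo_lesion)
            recon_loss = nn.functional.mse_loss(recon, healthy)
            consistency = nn.functional.mse_loss(z_lesion, z_healthy)
            return recon_loss + weight * consistency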

    Temporal Interpolation via Motion Field Prediction

    Navigated 2D multi-slice dynamic Magnetic Resonance (MR) imaging enables high-contrast 4D MR imaging during free breathing and provides in-vivo observations for treatment planning and guidance. Navigator slices are vital for retrospective stacking of 2D data slices in this method. However, they also prolong the acquisition sessions. Temporal interpolation of navigator slices can be used to reduce the number of navigator acquisitions without degrading specificity in stacking. In this work, we propose a convolutional neural network (CNN) based method for temporal interpolation via motion field prediction. The proposed formulation incorporates the prior knowledge that a motion field underlies the changes in image intensities over time. Previous approaches that interpolate directly in the intensity space are prone to produce blurry images or even remove structures from the images. Our method avoids such problems and faithfully preserves the information in the image. Further, an important advantage of our formulation is that it provides an unsupervised estimation of bi-directional motion fields. We show that these motion fields can be used to halve the number of registrations required during 4D reconstruction, thus substantially reducing the reconstruction time.
    Comment: Submitted to 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands
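    The idea of interpolating via a predicted motion field rather than in intensity space can be sketched as follows, in PyTorch. The motion network interface, the midpoint warping convention, and the flow sign/scaling are assumptions made for illustration; they are not the authors' exact architecture or training scheme.

        import torch
        import torch.nn.functional as F

        def warp(image, flow):
            """Warp `image` (N,1,H,W) with a dense displacement field `flow` (N,2,H,W),
            given in pixels, using bilinear sampling (backward warping)."""
            n, _, h, w = image.shape
            ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
            base = torch.stack((xs, ys), dim=0).float().to(image.device)   # (2,H,W) pixel grid
            coords = base.unsqueeze(0) + flow                               # sampling locations
            grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0                     # normalise to [-1,1]
            grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
            grid = torch.stack((grid_x, grid_y), dim=-1)                    # (N,H,W,2)
            return F.grid_sample(image, grid, align_corners=True)

        def interpolate_midpoint(frame_a, frame_b, motion_net):
            """Predict one bi-directional motion field between two navigator slices and
            synthesise the intermediate frame by warping both endpoints halfway
            (one common convention; the sign/scaling here is an assumption)."""
            flow_ab = motion_net(torch.cat((frame_a, frame_b), dim=1))      # (N,2,H,W)
            mid_from_a = warp(frame_a, 0.5 * flow_ab)
            mid_from_b = warp(frame_b, -0.5 * flow_ab)
            return 0.5 * (mid_from_a + mid_from_b)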

    Unsupervised Lesion Detection via Image Restoration with a Normative Prior

    Unsupervised lesion detection is a challenging problem that requires accurately estimating normative distributions of healthy anatomy and detecting lesions as outliers without training examples. Recently, this problem has received increased attention from the research community following advances in unsupervised learning with deep learning. Such advances allow the estimation of high-dimensional distributions, such as normative distributions, with higher accuracy than previous methods. The main approach of the recently proposed methods is to learn a latent-variable model parameterized with networks to approximate the normative distribution using example images showing healthy anatomy, perform prior-projection, i.e. reconstruct the image with lesions using the latent-variable model, and determine lesions based on the differences between the reconstructed and original images. While promising, the prior-projection step often leads to a large number of false positives. In this work, we approach unsupervised lesion detection as an image restoration problem and propose a probabilistic model that uses a network-based prior as the normative distribution and detects lesions pixel-wise using MAP estimation. The probabilistic model penalizes large deviations between the restored and original images, reducing false positives in pixel-wise detections. Experiments with gliomas and stroke lesions in brain MRI using publicly available datasets show that the proposed approach outperforms state-of-the-art unsupervised methods by a substantial margin, +0.13 (AUC), for both glioma and stroke detection. Extensive model analysis confirms the effectiveness of MAP-based image restoration.
    Comment: Extended version of 'Unsupervised Lesion Detection via Image Restoration with a Normative Prior' (MIDL 2019)
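    The MAP restoration step can be sketched as gradient-based optimisation of a data term plus a normative prior, starting from the observed image. In the sketch below (PyTorch), the quadratic data term, its weight, the optimiser, and the step count are illustrative assumptions; log_prior stands in for any differentiable network-based prior, such as a VAE evidence lower bound.

        import torch

        def restore_map(y, log_prior, data_weight=1.0, steps=200, lr=1e-2):
            """MAP restoration sketch: minimise data_weight * ||x - y||^2 - log_prior(x)
            by gradient descent, starting from the observed image y.
            `log_prior` is any differentiable normative prior returning a scalar."""
            x = y.clone().requires_grad_(True)
            opt = torch.optim.Adam([x], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                data_term = data_weight * torch.sum((x - y) ** 2)   # keeps x close to the observation
                loss = data_term - log_prior(x)                     # negative (unnormalised) log posterior
                loss.backward()
                opt.step()
            residual = torch.abs(x.detach() - y)                    # pixel-wise anomaly map
            return x.detach(), residual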

    A Non-Rigid Map Fusion-Based RGB-Depth SLAM Method for Endoscopic Capsule Robots

    In the field of gastrointestinal (GI) tract endoscopy, ingestible wireless capsule endoscopy is considered a minimally invasive, novel diagnostic technology for inspecting the entire GI tract and diagnosing various diseases and pathologies. Since the development of this technology, medical device companies and many research groups have made significant progress toward turning such passive capsule endoscopes into robotic, active capsule endoscopes that achieve almost all functions of current active flexible endoscopes. However, the use of robotic capsule endoscopy still faces some challenges. One such challenge is the precise localization of these active devices in the 3D world, which is essential for a precise three-dimensional (3D) mapping of the inner organ. A reliable 3D map of the explored inner organ could assist doctors in making more intuitive and correct diagnoses. In this paper, we propose, to our knowledge for the first time in the literature, a visual simultaneous localization and mapping (SLAM) method specifically developed for endoscopic capsule robots. The proposed RGB-Depth SLAM method is capable of capturing comprehensive, dense, globally consistent surfel-based maps of the inner organs explored by an endoscopic capsule robot in real time. This is achieved by using dense frame-to-model camera tracking and windowed surfel-based fusion, coupled with frequent model refinement through non-rigid surface deformations.
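    The structure of such a frame-to-model, surfel-based pipeline can be outlined as below (Python/NumPy). The surfel container and the track_frame_to_model and refine_non_rigid callables are hypothetical stand-ins used only to show the loop structure; they are not the authors' implementation.

        import numpy as np

        class SurfelMap:
            """Toy surfel container: one row per surfel (x, y, z, nx, ny, nz, radius, confidence)."""
            def __init__(self):
                self.surfels = np.empty((0, 8))

            def fuse(self, points, normals, pose):
                """Transform a registered point cloud into world coordinates and append new
                surfels; a real system would also merge and update existing surfels."""
                world = points @ pose[:3, :3].T + pose[:3, 3]
                rows = np.hstack([world, normals,
                                  np.full((len(points), 1), 0.01),   # placeholder radius
                                  np.ones((len(points), 1))])        # placeholder confidence
                self.surfels = np.vstack([self.surfels, rows])

        def slam_step(surfel_map, points, normals, prev_pose, track_frame_to_model, refine_non_rigid):
            """One frame of a frame-to-model loop: register the live frame against the
            current model, fuse it into the surfel map, then refine the map non-rigidly."""
            pose = track_frame_to_model(points, normals, surfel_map.surfels, prev_pose)
            surfel_map.fuse(points, normals, pose)
            refine_non_rigid(surfel_map)
            return pose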