Efficient Registration of Pathological Images: A Joint PCA/Image-Reconstruction Approach
Registration involving one or more images containing pathologies is
challenging, as standard image similarity measures and spatial transforms
cannot account for common changes due to pathologies. Low-rank/Sparse (LRS)
decomposition removes pathologies prior to registration; however, LRS is
memory-demanding and slow, which limits its use on larger data sets.
Additionally, LRS blurs normal tissue regions, which may degrade registration
performance. This paper proposes an efficient alternative to LRS: (1) normal
tissue appearance is captured by principal component analysis (PCA) and (2)
blurring is avoided by an integrated model for pathology removal and image
reconstruction. Results on synthetic and BRATS 2015 data demonstrate its
utility.
Comment: Accepted as a conference paper for ISBI 201
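As a rough illustration of component (1), the sketch below fits a PCA basis to healthy images and projects a pathological image onto it, so that the lesion, being absent from the training distribution, is suppressed in the reconstruction. The shapes, component count, and synthetic data are illustrative assumptions; the paper's integrated pathology-removal/image-reconstruction model is not reproduced here.

```python
# Minimal PCA sketch (assumptions: pre-aligned images, toy data, 10 components).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_images, h, w = 50, 64, 64
healthy = rng.normal(size=(n_images, h * w))    # hypothetical healthy images

pca = PCA(n_components=10)                      # normal-appearance subspace
pca.fit(healthy)

img = healthy[0].copy()
img[1000:1200] += 5.0                           # synthetic bright "lesion"

# Projection onto the normal subspace suppresses appearance PCA never saw
# (the lesion), yielding a quasi-normal image usable for registration.
quasi_normal = pca.inverse_transform(pca.transform(img[None, :]))[0]
residual = np.abs(img - quasi_normal).reshape(h, w)  # large where pathology is
```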
Attentive Symmetric Autoencoder for Brain MRI Segmentation
Self-supervised learning methods based on image patch reconstruction have
witnessed great success in training auto-encoders, whose pre-trained weights
can be transferred to fine-tune other downstream tasks of image understanding.
However, existing methods seldom study the varying importance of reconstructed
patches and the symmetry of anatomical structures when they are applied to 3D
medical images. In this paper we propose a novel Attentive Symmetric
Auto-encoder (ASA) based on Vision Transformer (ViT) for 3D brain MRI
segmentation tasks. We conjecture that forcing the auto-encoder to recover
informative image regions yields more discriminative representations than
recovering smooth image patches. We therefore adopt a gradient-based metric to
estimate the importance of each image patch. In the pre-training stage, the
proposed auto-encoder pays more attention to reconstructing the informative
patches according to the gradient metric. Moreover, we resort to the prior of
brain structures and develop a Symmetric Position Encoding (SPE) method to
better exploit the correlations between long-range but spatially symmetric
regions to obtain effective features. Experimental results show that our
proposed attentive symmetric auto-encoder outperforms the state-of-the-art
self-supervised learning methods and medical image segmentation models on three
brain MRI segmentation benchmarks.
Comment: MICCAI 2022, code: https://github.com/lhaof/AS
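A minimal sketch of the gradient-weighted reconstruction idea, assuming a simple finite-difference gradient magnitude as the patch-importance metric and a 2D input for brevity; ASA's actual metric, attention mechanism, Symmetric Position Encoding, and 3D ViT backbone are not reproduced here.

```python
import torch
import torch.nn.functional as F

def patch_importance(img, patch=16):
    """Mean gradient magnitude per non-overlapping patch of a (B, 1, H, W) image."""
    gx = img[..., :, 1:] - img[..., :, :-1]          # horizontal finite differences
    gy = img[..., 1:, :] - img[..., :-1, :]          # vertical finite differences
    grad = F.pad(gx.abs(), (0, 1)) + F.pad(gy.abs(), (0, 0, 0, 1))
    imp = F.avg_pool2d(grad, patch).flatten(1)       # (B, num_patches)
    return imp / imp.sum(dim=1, keepdim=True)        # normalized patch weights

def weighted_recon_loss(pred, target, weights):
    """Per-patch MSE re-weighted so informative patches dominate the loss."""
    per_patch = ((pred - target) ** 2).mean(dim=-1)  # (B, num_patches)
    return (weights * per_patch).sum(dim=1).mean()

img = torch.rand(2, 1, 64, 64)        # toy 2D stand-in for a 3D MRI volume
w = patch_importance(img)             # (2, 16) weights for 16x16 patches
pred = torch.rand(2, 16, 256)         # hypothetical decoder patch outputs
true = torch.rand(2, 16, 256)         # ground-truth flattened patches
loss = weighted_recon_loss(pred, true, w)
```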
Unsupervised Anomaly Localization with Structural Feature-Autoencoders
Unsupervised Anomaly Detection has become a popular method to detect
pathologies in medical images as it does not require supervision or labels for
training. Most commonly, the anomaly detection model generates a "normal"
version of an input image, and the pixel-wise difference of the two is
used to localize anomalies. However, large residuals often occur due to
imperfect reconstruction of the complex anatomical structures present in most
medical images. This method also fails to detect anomalies that are not
characterized by large intensity differences from the surrounding tissue. We
propose to tackle this problem using a feature-mapping function that transforms
the input intensity images into a space with multiple channels where anomalies
can be detected along different discriminative feature maps extracted from the
original image. We then train an Autoencoder model in this space using
a structural similarity loss that considers not only differences in intensity
but also differences in contrast and structure. Our method significantly
increases performance on two brain MRI data sets. Code and experiments are
available at https://github.com/FeliMe/feature-autoencoder
Comment: 10 pages, 5 figures, one table; accepted to the MICCAI 2021 BrainLes Workshop
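A minimal sketch of an SSIM-style loss that penalizes differences in local mean (intensity), variance (contrast), and covariance (structure) over multi-channel feature maps; the uniform window, constants, and the hypothetical `feature_mapper` are assumptions rather than the paper's exact formulation.

```python
# Simplified SSIM loss (assumptions: uniform 7x7 windows, standard c1/c2).
import torch
import torch.nn.functional as F

def ssim_loss(x, y, win=7, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - mean local SSIM between (B, C, H, W) tensors."""
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim.mean()   # small only if intensity, contrast AND structure match

feats = torch.rand(2, 8, 64, 64)   # stand-in for feature_mapper(input image)
recon = torch.rand_like(feats)     # stand-in for the autoencoder's output
loss = ssim_loss(recon, feats)
```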
Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; therefore, an automated and accurate brain tumor segmentation tool is in great demand. In this paper, we introduce a novel method to integrate location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across different brain parcellation regions and that a locality-sensitive segmentation is likely to obtain better accuracy. Toward this, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. This mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method: the first level reduces the uncertainty of the same type of models with different seed initializations, and the second level leverages the advantages of different types of neural network models. The proposed location-information fusion method improves the segmentation performance of state-of-the-art networks, including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance than the state-of-the-art networks on BraTS 2017 and rivals them on BraTS 2018. Detailed results are provided on the public multimodal Brain Tumor Segmentation (BraTS) benchmarks.
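A minimal sketch of the location-information fusion, assuming the parcellation atlas has already been warped into subject space: the labels are one-hot encoded and stacked with the MR channels so a patch-based network receives explicit location cues. The array shapes, toy atlas, and function name are illustrative assumptions.

```python
import numpy as np

def build_input(mr_modalities, atlas_labels, n_regions):
    """mr_modalities: (M, D, H, W) float array; atlas_labels: (D, H, W) int array."""
    onehot = np.eye(n_regions, dtype=np.float32)[atlas_labels]  # (D, H, W, R)
    onehot = np.moveaxis(onehot, -1, 0)                         # (R, D, H, W)
    return np.concatenate([mr_modalities, onehot], axis=0)      # (M+R, D, H, W)

mr = np.random.rand(4, 32, 32, 32).astype(np.float32)  # e.g. T1, T1c, T2, FLAIR
atlas = np.random.randint(0, 5, size=(32, 32, 32))     # toy 5-region parcellation
x = build_input(mr, atlas, n_regions=5)   # channel stack for 3D U-Net / DeepMedic
```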
Advanced machine learning methods for oncological image analysis
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. The applications of medical imaging have therefore become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non- or minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on the one hand and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow.
This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis.
The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve their accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-and-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy.
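A minimal sketch of the autoinpainting principle behind Study II, with a trivial local-median "inpainter" standing in for the thesis's learned tumor-free synthesis model; the mask size, stride, and threshold are arbitrary assumptions.

```python
import numpy as np

def naive_inpaint(img, mask):
    """Toy inpainter: fill the masked region with the median of unmasked pixels."""
    out = img.copy()
    out[mask] = np.median(img[~mask])
    return out

def anomaly_map(img, box=16, stride=16):
    """Slide a box mask over the image; score = |image - repainted image|."""
    score = np.zeros_like(img)
    for i in range(0, img.shape[0] - box + 1, stride):
        for j in range(0, img.shape[1] - box + 1, stride):
            mask = np.zeros(img.shape, dtype=bool)
            mask[i:i + box, j:j + box] = True
            repaired = naive_inpaint(img, mask)   # a learned model in the thesis
            score[mask] = np.abs(img - repaired)[mask]
    return score

img = np.random.rand(64, 64)
img[20:30, 20:30] += 2.0                  # synthetic "tumor"
segmentation = anomaly_map(img) > 0.5     # residual thresholding (assumed value)
```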
Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep-feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features with radiomic features for boosting classification power.
Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the last treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-and-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses.
In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and they contribute to the essential procedures of cancer diagnosis and prognosis.
Learning Cross-Modality Representations from Multi-Modal Images
Machine learning algorithms can have difficulties adapting to data from different sources, for example from different imaging modalities. We present and analyze three techniques for unsupervised cross-modality feature learning, using a shared autoencoder-like convolutional network that learns a common representation from multi-modal data. We investigate a form of feature normalization, a learning objective that minimizes cross-modality differences, and modality dropout, in which the network is trained with varying subsets of modalities. We measure the same-modality and cross-modality classification accuracies and explore whether the models learn modality-specific or shared features. This paper presents experiments on two public datasets, with knee images from two MRI modalities, provided by the Osteoarthritis Initiative, and brain tumor segmentation on four MRI modalities from the BRATS challenge. All three approaches improved the cross-modality classification accuracy, with modality dropout and per-feature normalization giving the largest improvement. We observed that the networks tend to learn a combination of cross-modality and modality-specific features. Overall, a combination of all three methods produced the most cross-modality features and the highest cross-modality classification accuracy, while maintaining most of the same-modality accuracy.
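A minimal sketch of the modality-dropout idea, assuming a per-sample Bernoulli mask over whole modality channels with at least one modality always kept; the subset distribution and any rescaling used in the paper are assumptions.

```python
import torch

def modality_dropout(x, p_drop=0.5):
    """x: (B, M, H, W), one channel per modality; zeroes whole modalities."""
    B, M = x.shape[:2]
    keep = torch.rand(B, M, device=x.device) > p_drop   # per-sample keep mask
    empty = ~keep.any(dim=1)                            # samples that lost everything
    keep[empty, torch.randint(M, (int(empty.sum()),))] = True   # keep one modality
    return x * keep[:, :, None, None].float()

x = torch.rand(8, 4, 64, 64)         # e.g. the four BRATS MRI modalities
x_train = modality_dropout(x)        # input to the shared autoencoder
```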
Modality Cycles with Masked Conditional Diffusion for Unsupervised Anomaly Segmentation in MRI
Unsupervised anomaly segmentation aims to detect patterns that are distinct
from any patterns processed during training, commonly called abnormal or
out-of-distribution patterns, without providing any associated manual
segmentations. Since anomalies during deployment can lead to model failure,
detecting the anomaly can enhance the reliability of models, which is valuable
in high-risk domains like medical imaging. This paper introduces Masked
Modality Cycles with Conditional Diffusion (MMCCD), a method that enables
segmentation of anomalies across diverse patterns in multimodal MRI. The method
is based on two fundamental ideas. First, we propose the use of cyclic modality
translation as a mechanism for enabling abnormality detection.
Image-translation models learn tissue-specific modality mappings, which are
characteristic of tissue physiology. Thus, these learned mappings fail to
translate tissues or image patterns that have never been encountered during
training, and the error enables their segmentation. Furthermore, we combine
image translation with a masked conditional diffusion model, which attempts to
'imagine' what tissue exists under a masked area, further exposing unknown
patterns as the generative model fails to recreate them. We evaluate our method
on a proxy task by training on healthy-looking slices of BraTS2021
multi-modality MRIs and testing on slices with tumors. We show that our method
compares favorably to previous unsupervised approaches based on image
reconstruction and denoising with autoencoders and diffusion models.
Comment: Accepted in the Multiscale Multimodal Medical Imaging workshop at MICCAI 202
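A minimal sketch of the cyclic-translation intuition, with toy "translators" whose clamped input range stands in for a learned mapping's failure on never-seen patterns; the paper's masked conditional diffusion model and multimodal setup are not reproduced here.

```python
import numpy as np

def t1_to_t2(x):
    # Toy stand-in for a learned T1->T2 mapping fit on normal tissue: valid only
    # on [0, 1]; out-of-range (unseen) intensities are clamped, mimicking the
    # model's failure to translate patterns it never encountered in training.
    return 1.0 - np.clip(x, 0.0, 1.0)

def t2_to_t1(x):
    return 1.0 - np.clip(x, 0.0, 1.0)      # toy inverse mapping (assumption)

t1 = np.random.rand(64, 64)
t1[25:35, 25:35] = 3.0                     # intensity pattern unseen in training

cycled = t2_to_t1(t1_to_t2(t1))            # normal tissue survives the cycle
anomaly_score = np.abs(t1 - cycled)        # large only where the cycle failed
segmentation = anomaly_score > 0.5         # assumed threshold
```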