Classifying magnetic resonance image modalities with convolutional neural networks
Magnetic Resonance (MR) imaging allows the acquisition of images with
different contrast properties depending on the acquisition protocol and the
magnetic properties of tissues. Many MR brain image processing techniques, such
as tissue segmentation, require multiple MR contrasts as inputs, and each
contrast is treated differently. Thus it is advantageous to automate the
identification of image contrasts for various purposes, such as facilitating
image processing pipelines, and managing and maintaining large databases via
content-based image retrieval (CBIR). Most automated CBIR techniques focus on a
two-step process: extracting features from data and classifying the image based
on these features. We present a novel 3D deep convolutional neural network
(CNN)-based method for MR image contrast classification. The proposed CNN
automatically identifies the MR contrast of an input brain image volume.
Specifically, we explored three classification problems: (1) identify
T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery
(FLAIR) contrasts, (2) identify pre- vs. post-contrast T1, and (3) identify
pre- vs. post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites
and multiple scanners were used. To evaluate each task, the proposed model was
trained on 2137 images and tested on the remaining 1281 images. Results showed
that image volumes were correctly classified with 97.57% accuracy. Comment: GitHub: https://github.com/sremedios/phine
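The classifier described above can be sketched as a small 3D CNN that maps a single-channel brain volume to contrast logits. This is a minimal illustrative sketch in PyTorch, assuming a toy architecture; the layer sizes and class names are assumptions, not the paper's actual network.

```python
# Hypothetical sketch of a 3D CNN for MR contrast classification.
# Architecture is illustrative only; the paper's network may differ.
import torch
import torch.nn as nn

class ContrastClassifier3D(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # single-channel MR volume in
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling copes with varying volume sizes
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)   # (batch, 16)
        return self.classifier(h)         # (batch, n_classes) logits

model = ContrastClassifier3D(n_classes=3)     # e.g. T1-w, T2-w, FLAIR
volume = torch.randn(1, 1, 32, 32, 32)        # one toy single-channel volume
logits = model(volume)
print(logits.shape)                           # torch.Size([1, 3])
```

The global average pooling before the linear head is one common way to let a 3D classifier accept volumes of differing spatial dimensions, which matters for multi-site, multi-scanner data like that described above.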
Which Contrast Does Matter? Towards a Deep Understanding of MR Contrast using Collaborative GAN
Thanks to the recent success of generative adversarial network (GAN) for
image synthesis, there are many exciting GAN approaches that successfully
synthesize MR image contrast from other images with different contrasts. These
approaches are potentially important for image imputation problems, where a
complete set of data is often difficult to obtain and image synthesis is one of
the key solutions to the missing-data problem. Unfortunately, the lack of
scalability of existing GAN-based image translation approaches poses a
fundamental challenge to understanding the nature of the MR contrast
imputation problem: which contrast does matter? Here, we present a systematic
approach using Collaborative Generative Adversarial Networks (CollaGAN), which
enables learning of the joint image manifold of multiple MR contrasts to
investigate which contrasts are essential. Our experimental results showed that
the exogenous contrast from contrast agents is not replaceable, but endogenous
contrasts such as T1 and T2 can be synthesized from the other contrasts.
These findings may give important guidance to acquisition protocol design
for MR in real clinical environments. Comment: 32 pages, 6 figures
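The CollaGAN idea above can be sketched as a single generator that imputes one missing contrast from the remaining ones, stacked as input channels alongside a mask indicating which contrast to synthesize. This is a minimal illustrative sketch in PyTorch; the architecture, mask encoding, and names are assumptions, not the paper's implementation (which also involves a discriminator and cycle-style losses).

```python
# Hypothetical sketch of the CollaGAN generator idea: impute one contrast
# from the others. Architecture and mask encoding are illustrative only.
import torch
import torch.nn as nn

N_CONTRASTS = 4  # e.g. T1, T2, FLAIR, contrast-enhanced T1

class CollaGenerator(nn.Module):
    """Maps the available contrasts (target zeroed out) plus a one-hot
    target mask to the imputed target contrast."""
    def __init__(self):
        super().__init__()
        # input channels: N contrast slices + N-channel target mask
        self.net = nn.Sequential(
            nn.Conv2d(2 * N_CONTRASTS, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, contrasts, target_idx):
        b, _, h, w = contrasts.shape
        x = contrasts.clone()
        x[:, target_idx] = 0.0                  # drop the contrast to be imputed
        mask = torch.zeros(b, N_CONTRASTS, h, w)
        mask[:, target_idx] = 1.0               # tell G which contrast to synthesize
        return self.net(torch.cat([x, mask], dim=1))

g = CollaGenerator()
slices = torch.randn(2, N_CONTRASTS, 64, 64)    # batch of multi-contrast slices
fake_t2 = g(slices, target_idx=1)               # impute the second contrast
print(fake_t2.shape)                            # torch.Size([2, 1, 64, 64])
```

Because the target index is an input rather than a fixed output head, one such generator covers all many-to-one imputation directions, which is the scalability property the abstract contrasts with earlier pairwise GAN translation approaches.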