69 research outputs found
Learning Myelin Content in Multiple Sclerosis from Multimodal MRI through Adversarial Training
Multiple sclerosis (MS) is a demyelinating disease of the central nervous
system (CNS). A reliable measure of the tissue myelin content is therefore
essential for the understanding of the physiopathology of MS, tracking
progression and assessing treatment efficacy. Positron emission tomography
(PET) with [11C]PIB has been proposed as a promising
biomarker for measuring myelin content changes in-vivo in MS. However, PET
imaging is expensive and invasive due to the injection of a radioactive tracer.
On the contrary, magnetic resonance imaging (MRI) is a non-invasive, widely
available technique, but existing MRI sequences do not provide, to date, a
reliable, specific, or direct marker of either demyelination or remyelination.
In this work, we therefore propose Sketcher-Refiner Generative Adversarial
Networks (GANs) with specifically designed adversarial loss functions to
predict the PET-derived myelin content map from a combination of MRI
modalities. The prediction problem is solved by a sketch-refinement process in
which the sketcher generates the preliminary anatomical and physiological
information and the refiner refines and generates images reflecting the tissue
myelin content in the human brain. We evaluated the ability of our method to
predict myelin content at both global and voxel-wise levels. The evaluation
results show that the demyelination in lesion regions and myelin content in
normal-appearing white matter (NAWM) can be well predicted by our method. The
method has the potential to become a useful tool for clinical management of
patients with MS.
Comment: Accepted by MICCAI201
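The sketch-refinement process described above can be caricatured as a two-stage composition: a first network drafts a coarse map, a second network refines it. The following is a purely illustrative NumPy sketch under that assumption; `sketcher` and `refiner` are toy stand-ins, not the authors' GANs.

```python
import numpy as np

def sketcher(mri_stack):
    """Hypothetical first stage: collapse the MRI modalities into a
    coarse anatomical draft (here, a simple channel average)."""
    return mri_stack.mean(axis=0)

def refiner(sketch, mri_stack):
    """Hypothetical second stage: refine the draft using the original
    modalities (a fixed blend standing in for a learned refiner)."""
    detail = mri_stack.max(axis=0) - mri_stack.min(axis=0)
    return 0.8 * sketch + 0.2 * detail

# Three toy MRI "modalities" of a 4x4 slice.
rng = np.random.default_rng(0)
modalities = rng.random((3, 4, 4))

coarse = sketcher(modalities)                    # preliminary anatomical map
predicted_myelin = refiner(coarse, modalities)   # refined myelin-content map
print(predicted_myelin.shape)  # (4, 4)
```

The point of the two-stage split is that the refiner only has to correct a plausible draft rather than synthesize the target map from scratch.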
Applications of Deep Learning Techniques for Automated Multiple Sclerosis Detection Using Magnetic Resonance Imaging: A Review
Multiple Sclerosis (MS) is a brain disease that causes visual, sensory, and motor problems and has a detrimental effect on the functioning of the nervous system. In order to diagnose MS, multiple screening methods have been proposed so far; among them, magnetic resonance imaging (MRI) has received considerable attention among physicians. MRI modalities provide physicians with fundamental information about the structure and function of the brain, which is crucial for the rapid diagnosis of MS lesions. Diagnosing MS using MRI is time-consuming, tedious, and prone to manual errors. Research on the implementation of computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) to diagnose MS involves conventional machine learning and deep learning (DL) methods. In conventional machine learning, feature extraction, feature selection, and classification steps are carried out by trial and error; in DL, by contrast, these steps are performed by deep layers whose weights are automatically learned. In this paper, a complete review of automated MS diagnosis methods performed using DL techniques with MRI neuroimaging modalities is provided. Initially, the steps involved in various CADS proposed using MRI modalities and DL techniques for MS diagnosis are investigated. The important preprocessing techniques employed in various works are analyzed. Most of the published papers on MS diagnosis using MRI modalities and DL are presented. The most significant challenges and future directions of automated MS diagnosis using MRI modalities and DL techniques are also discussed.
Interpretable and reliable artificial intelligence systems for brain diseases
In artificial intelligence for medicine, more interpretable and reliable systems are needed. Here, we report on recent advances toward these aims in the field of brain diseases.
ResViT: Residual vision transformers for multi-modal medical image synthesis
Multi-modal imaging is a key healthcare technology that is often
underutilized due to costs associated with multiple separate scans. This
limitation yields the need for synthesis of unacquired modalities from the
subset of available modalities. In recent years, generative adversarial network
(GAN) models with superior depiction of structural details have been
established as state-of-the-art in numerous medical image synthesis tasks. GANs
are characteristically based on convolutional neural network (CNN) backbones
that perform local processing with compact filters. This inductive bias in turn
compromises learning of contextual features. Here, we propose a novel
generative adversarial approach for medical image synthesis, ResViT, to combine
local precision of convolution operators with contextual sensitivity of vision
transformers. ResViT employs a central bottleneck comprising novel aggregated
residual transformer (ART) blocks that synergistically combine convolutional
and transformer modules. Comprehensive demonstrations are performed for
synthesizing missing sequences in multi-contrast MRI, and CT images from MRI.
Our results indicate the superiority of ResViT over competing methods in terms
of qualitative observations and quantitative metrics.
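The core idea of an ART-style block, combining a local convolutional path with a global context path on a residual connection, can be sketched in a few lines. This is a NumPy toy under stated assumptions, not the ResViT implementation: the "transformer" path is reduced to attending to the sequence mean, and all function names are hypothetical.

```python
import numpy as np

def local_path(x, kernel):
    """Local processing: a small 1-D convolution applied per feature row,
    standing in for a compact convolutional filter."""
    return np.array([np.convolve(row, kernel, mode="same") for row in x])

def global_path(x):
    """Global context: every position receives the sequence-wide mean,
    a crude stand-in for a transformer's long-range interactions."""
    return np.broadcast_to(x.mean(axis=1, keepdims=True), x.shape)

def art_block(x, kernel):
    """Hypothetical aggregated residual block: sum the local and global
    paths onto a residual (identity) connection."""
    return x + local_path(x, kernel) + global_path(x)

x = np.arange(12, dtype=float).reshape(3, 4)  # 3 feature rows, length 4
y = art_block(x, kernel=np.array([0.25, 0.5, 0.25]))
print(y.shape)  # (3, 4)
```

The design choice being illustrated is aggregation: neither path replaces the other, so local precision and global context both survive into the residual sum.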
Harnessing spatial homogeneity of neuroimaging data: patch individual filter layers for CNNs
Neuroimaging data, e.g. obtained from magnetic resonance imaging (MRI), is
comparably homogeneous due to (1) the uniform structure of the brain and (2)
additional efforts to spatially normalize the data to a standard template using
linear and non-linear transformations. Convolutional neural networks (CNNs), in
contrast, have been specifically designed for highly heterogeneous data, such
as natural images, by sliding convolutional filters over different positions in
an image. Here, we suggest a new CNN architecture that combines the idea of
hierarchical abstraction in neural networks with a prior on the spatial
homogeneity of neuroimaging data: Whereas early layers are trained globally
using standard convolutional layers, we introduce for higher, more abstract
layers patch individual filters (PIF). By learning filters in individual image
regions (patches) without sharing weights, PIF layers can learn abstract
features faster and with fewer samples. We thoroughly evaluated PIF layers for
three different tasks and data sets, namely sex classification on UK Biobank
data, Alzheimer's disease detection on ADNI data and multiple sclerosis
detection on private hospital data. We demonstrate that CNNs using PIF layers
result in higher accuracies, especially in low sample size settings, and need
fewer training epochs for convergence. To the best of our knowledge, this is
the first study to introduce a prior on brain MRI for CNN learning.
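The defining property of a PIF layer, filters learned per image region without weight sharing, can be shown in a minimal NumPy illustration. This is not the authors' implementation: here each patch is simply multiplied element-wise by its own filter, standing in for learned patch-individual weights.

```python
import numpy as np

def pif_layer(feature_map, patch_filters):
    """Apply an individual (unshared) filter to each image patch:
    patch_filters[i][j] weights patch (i, j) element-wise."""
    ph, pw = patch_filters[0][0].shape
    out = np.empty_like(feature_map)
    for i, row in enumerate(patch_filters):
        for j, filt in enumerate(row):
            patch = feature_map[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
            out[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = patch * filt
    return out

rng = np.random.default_rng(1)
fmap = rng.random((4, 4))
# A 2x2 grid of patches, each with its own 2x2 filter (no weight sharing).
filters = [[rng.random((2, 2)) for _ in range(2)] for _ in range(2)]
out = pif_layer(fmap, filters)
print(out.shape)  # (4, 4)
```

Contrast with a standard convolution, where one shared filter slides over all positions: here the top-left patch and the bottom-right patch are transformed by entirely independent weights, which is exactly the spatial-homogeneity prior being exploited.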
Automatic Axon and Myelin Segmentation of Microscopy Images and Morphometrics Extraction
In the nervous system, the transmission of electrical signals is ensured by the axons of the
white matter. A large portion of these axons, also known as nerve fibers, is surrounded by a
myelin sheath. The main role of the myelin sheath is to increase the transmission speed along the
axons, which is crucial for long distance communication. In demyelinating diseases such as
multiple sclerosis, the myelin sheath of the central nervous system is attacked by cells of the
immune system. Myelin degeneration caused by such disorders can manifest itself in different
ways at the microstructural level: loss of myelin content, decrease in the number of myelinated
axons, or even axonal damage.
High resolution microscopy of myelinated tissues can provide in-depth microstructural
information about the tissue under study. Segmentation of the axon and myelin content of a
microscopy image is a necessary step in order to extract quantitative morphological information
from the tissue. Being able to extract morphometrics from the tissue would benefit several
applications: document nerve morphometry across species or tissues, get a better understanding
of the origins of demyelinating diseases, and validate novel magnetic resonance imaging
biomarkers sensitive to myelin content.
The main objective of this research project is to design, implement and validate an
automatic axon and myelin segmentation framework for microscopy images and use it to extract
relevant morphological metrics. Several segmentation approaches exist in the literature for
similar applications, but most of them are not fully automatic, are designed to work on a specific
microscopy modality and/or are not made available to the research community. Two
segmentation frameworks were developed as part of this project: AxonSeg and AxonDeepSeg.
The AxonSeg package (https://github.com/neuropoly/axonseg) uses a segmentation
approach based on standard image processing. The segmentation pipeline includes an
extended-minima transform, a discriminant analysis model based on shape and intensity
features, an edge
detection algorithm, and a double active contours step. The segmentation output is used to
compute morphological metrics. Validation of the framework was performed on optical, electron and CARS microscopy.
The AxonDeepSeg package (https://github.com/neuropoly/axondeepseg) uses a
segmentation approach based on convolutional neural networks. A fully convolutional network
architecture was designed for the semantic 3-class segmentation of myelinated axons. A scanning
electron microscopy (SEM) model trained on rat spinal cord samples and a transmission electron
microscopy (TEM) model trained on mouse corpus callosum samples are presented. Both models
achieved high pixel-wise accuracy on test datasets (85% on rat SEM, 81% on human SEM, 95%
on mouse TEM and 84% on macaque TEM). We show that AxonDeepSeg models are robust to
noise, blurring and intensity changes. AxonDeepSeg was used to segment a full rat spinal cord
slice, and morphological metrics extracted from white matter tracts correlated well with the
literature. The AxonDeepSeg framework presented a higher segmentation accuracy when
compared to AxonSeg. Both AxonSeg and AxonDeepSeg are open source (MIT license) and thus
freely available for use by the research community.
Future iterations are planned to improve and extend this work. Training of new models for
other microscopy modalities, training on larger datasets to improve generalization and
robustness, and exploration of novel deep learning architectures are some of the short-term
objectives. Moreover, the current segmentation models have only been tested on healthy tissues.
Another important short-term objective would be to assess the performance of these models on
demyelinated samples.
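The pixel-wise accuracy reported for the SEM and TEM models is a simple ratio of correctly classified pixels. A minimal sketch with toy masks (the class labels here are illustrative, not taken from the AxonDeepSeg codebase):

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == truth).mean())

# Toy 3-class masks: 0 = background, 1 = myelin, 2 = axon.
truth = np.array([[0, 1, 1, 2],
                  [0, 1, 2, 2],
                  [0, 0, 1, 2],
                  [0, 1, 1, 2]])
pred = truth.copy()
pred[0, 0] = 1                       # one mislabeled pixel out of 16
print(pixel_accuracy(pred, truth))   # 0.9375
```

Note that pixel-wise accuracy can look flattering when one class (typically background) dominates, which is why segmentation work often reports Dice or IoU alongside it.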
Is attention all you need in medical image analysis? A review
Medical imaging is a key component in clinical diagnosis, treatment planning
and clinical trial design, accounting for almost 90% of all healthcare data.
CNNs achieved performance gains in medical image analysis (MIA) over the last
years. CNNs can efficiently model local pixel interactions and be trained on
small-scale MI data. The main disadvantage of typical CNN models is that they
ignore global pixel relationships within images, which limits their
generalisation ability to understand out-of-distribution data with different
'global' information. The recent progress of Artificial Intelligence gave rise
to Transformers, which can learn global relationships from data. However, full
Transformer models need to be trained on large-scale data and involve
tremendous computational complexity. Attention and Transformer compartments
(Transf/Attention) which can well maintain properties for modelling global
relationships, have been proposed as lighter alternatives of full Transformers.
Recently, there has been an increasing trend to cross-pollinate complementary
local-global properties of CNN and Transf/Attention architectures, which has led
to a new era of hybrid models. The past years have witnessed substantial growth
in hybrid CNN-Transf/Attention models across diverse MIA problems. In this
systematic review, we survey existing hybrid CNN-Transf/Attention models,
review and unravel key architectural designs, analyse breakthroughs, and
evaluate current and future opportunities as well as challenges. We also
introduce a comprehensive analysis framework on generalisation opportunities
for scientific and clinical impact, based on which new data-driven domain
generalisation and adaptation methods can be stimulated.
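The contrast the review draws, convolutions model local pixel interactions while attention models global relationships, is visible in the attention computation itself: every position is a weighted mixture of all positions. A minimal NumPy sketch of scaled dot-product self-attention (identity query/key/value projections assumed for brevity; not any specific model's code):

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention with identity projections:
    each position aggregates information from ALL positions, unlike a
    convolution's fixed local window."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ x                               # convex mix of all tokens

tokens = np.array([[1.0, 0.0],   # 3 "pixel tokens" with 2 features each
                   [0.0, 1.0],
                   [1.0, 1.0]])
out = self_attention(tokens)
print(out.shape)  # (3, 2)
```

The quadratic `x @ x.T` term is also where the "tremendous computational complexity" of full Transformers comes from: cost grows with the square of the number of positions, which motivates the lighter hybrid designs surveyed here.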
Automatic detection of pathological regions in medical images
Medical images are an essential tool in the daily clinical routine for the detection, diagnosis, and monitoring of diseases. Different imaging modalities such as magnetic resonance (MR) or X-ray imaging are used to visualize the manifestations of various diseases, providing physicians with valuable information. However, analyzing every single image by human experts is a tedious and laborious task. Deep learning methods have shown great potential to support this process, but many images are needed to train reliable neural networks. Besides the accuracy of the final method, the interpretability of the results is crucial for a deep learning method to be established. A fundamental problem in the medical field is the availability of sufficiently large datasets due to the variability of different imaging techniques and their configurations.
The aim of this thesis is the development of deep learning methods for the automatic identification of anomalous regions in medical images. Each method is tailored to the amount and type of available data. In the first step, we present a fully supervised segmentation method based on denoising diffusion models. This requires a large dataset with pixel-wise manual annotations of the pathological regions. Due to the implicit ensemble characteristic, our method provides uncertainty maps to allow interpretability of the model’s decisions. Manual pixel-wise annotations face the problems that they are prone to human bias, hard to obtain, and often even unavailable. Weakly supervised methods avoid these issues by only relying on image-level annotations. We present two different approaches based on generative models to generate pixel-wise anomaly maps using only image-level annotations, i.e., a generative adversarial network and a denoising diffusion model. Both perform image-to-image translation between a set of healthy and a set of diseased subjects. Pixel-wise anomaly maps can be obtained by computing the difference between the original image of the diseased subject and the synthetic image of its healthy representation. In an extension of the diffusion-based anomaly detection method, we present a flexible framework to solve various image-to-image translation tasks. With this method, we managed to change the size of tumors in MR images, and we were able to add realistic pathologies to images of healthy subjects.
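The weakly supervised scheme described above, translate a diseased image to a pseudo-healthy version and subtract, reduces to a pixel-wise difference once the translation exists. A toy NumPy sketch under that assumption (the "lesion" and the perfect healthy translation are fabricated for illustration; the generative model itself is not shown):

```python
import numpy as np

def anomaly_map(diseased, pseudo_healthy):
    """Pixel-wise anomaly map: absolute difference between the original
    image and its synthetic healthy translation."""
    return np.abs(diseased - pseudo_healthy)

healthy = np.zeros((4, 4))
diseased = healthy.copy()
diseased[1:3, 1:3] = 1.0   # a toy "lesion" of four bright pixels
# Suppose a generative model translated `diseased` back to `healthy`.
amap = anomaly_map(diseased, healthy)
print(int(amap.sum()))  # 4 highlighted lesion pixels
```

Only image-level labels (healthy vs. diseased) are needed to train the translator, yet the subtraction yields a pixel-wise map, which is what makes the approach weakly supervised.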
Finally, we focus on a problem frequently occurring when working with MR images: if not enough data from one MR scanner are available, data from other scanners need to be considered. This multi-scanner setting introduces a bias between the datasets of different scanners, limiting the performance of deep learning models. We present a regularization strategy on the model's latent space to overcome the problems raised by this multi-site setting.