AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks
Segmentation of axon and myelin from microscopy images of the nervous system
provides useful quantitative information about the tissue microstructure, such
as axon density and myelin thickness. This could be used for instance to
document cell morphometry across species, or to validate novel non-invasive
quantitative magnetic resonance imaging techniques. Most currently-available
segmentation algorithms are based on standard image processing and usually
require multiple processing steps and/or parameter tuning by the user to adapt
to different modalities. Moreover, only few methods are publicly available. We
introduce AxonDeepSeg, an open-source software that performs axon and myelin
segmentation of microscopic images using deep learning. AxonDeepSeg features:
(i) a convolutional neural network architecture; (ii) an easy training
procedure to generate new models based on manually-labelled data and (iii) two
ready-to-use models trained from scanning electron microscopy (SEM) and
transmission electron microscopy (TEM). Results show high pixel-wise accuracy
across various species: 85% on rat SEM, 81% on human SEM, 95% on mice TEM and
84% on macaque TEM. Segmentation of a full rat spinal cord slice is computed
and morphological metrics are extracted and compared against the literature.
AxonDeepSeg is freely available at https://github.com/neuropoly/axondeepseg
Comment: 14 pages, 7 figures
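The pixel-wise accuracy reported above is simply the fraction of pixels whose predicted class matches the manual label. A minimal pure-Python sketch of that metric (the function name and toy labels are ours, not taken from the AxonDeepSeg codebase):

```python
def pixelwise_accuracy(pred, truth):
    """Fraction of pixels whose predicted class equals the manual label.

    pred and truth are 2D lists of integer class labels
    (e.g. 0 = background, 1 = myelin, 2 = axon).
    """
    flat_pred = [c for row in pred for c in row]
    flat_truth = [c for row in truth for c in row]
    correct = sum(p == t for p, t in zip(flat_pred, flat_truth))
    return correct / len(flat_truth)

# Toy 4x4 label maps with one mislabelled pixel out of 16:
truth = [[0, 0, 1, 1],
         [0, 1, 2, 1],
         [0, 1, 2, 1],
         [0, 0, 1, 1]]
pred = [row[:] for row in truth]
pred[0][0] = 1
print(pixelwise_accuracy(pred, truth))  # 0.9375
```

Note that pixel-wise accuracy is dominated by the majority class (often background), which is why overlap scores are usually reported alongside it.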
Deep-learning based segmentation of challenging myelin sheaths
The segmentation of axons and myelin in electron
microscopy images allows neurologists to highlight the density of
axons and the thickness of the myelin surrounding them. These
properties are of great interest for preventing and anticipating
white matter diseases. This task is generally performed manually,
which is a long and tedious process.
We present an update of the methods used to compute this
segmentation via machine learning. Our model is based on
the U-Net architecture. Our main contribution consists in
using transfer learning in the encoder part of the U-Net, as
well as test-time augmentation when segmenting. We use
SE-ResNet50 backbone weights that were pre-trained on the
ImageNet 2012 dataset.
We used a dataset of 23 images with the corresponding
segmentation masks, which was also challenging due to its
extremely small size. The results show very encouraging
performance compared to the state of the art, with an average
precision of 92% on the test images. It is also important to
note that the available samples were taken from the corpus
callosum of elderly mice. This represented an additional
difficulty compared to related works, whose samples were
taken from the spinal cord or the optic nerve of healthy
individuals, with better contours and less debris.
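Test-time augmentation can be sketched as averaging the model's soft predictions over a set of invertible transforms, undoing each transform on the output before averaging. A minimal pure-Python illustration with flips only (the function names and the transform set are our assumptions, not the paper's exact implementation):

```python
def flip_h(img):
    """Mirror an image (2D list) left-right."""
    return [row[::-1] for row in img]

def flip_v(img):
    """Mirror an image (2D list) top-bottom."""
    return img[::-1]

def tta_predict(model, img):
    """Average soft predictions over identity and flip augmentations,
    inverting each flip on the output before averaging."""
    variants = [(lambda x: x, lambda y: y),  # (forward, inverse) pairs
                (flip_h, flip_h),
                (flip_v, flip_v)]
    preds = [inv(model(fwd(img))) for fwd, inv in variants]
    rows, cols = len(img), len(img[0])
    return [[sum(p[i][j] for p in preds) / len(preds) for j in range(cols)]
            for i in range(rows)]
```

With a perfectly flip-equivariant model the average equals the single-pass prediction; in practice a trained network is only approximately equivariant, and the averaging smooths out orientation-dependent errors.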
Automatic Axon and Myelin Segmentation of Microscopy Images and Morphometrics Extraction
In the nervous system, the transmission of electrical signals is ensured by the axons of the
white matter. A large portion of these axons, also known as nerve fibers, is surrounded by a
myelin sheath. The main role of the myelin sheath is to increase the transmission speed along the
axons, which is crucial for long distance communication. In demyelinating diseases such as
multiple sclerosis, the myelin sheath of the central nervous system is attacked by cells of the
immune system. Myelin degeneration caused by such disorders can manifest itself in different
ways at the microstructural level: loss of myelin content, decrease in the number of myelinated
axons, or even axonal damage.
High resolution microscopy of myelinated tissues can provide in-depth microstructural
information about the tissue under study. Segmentation of the axon and myelin content of a
microscopy image is a necessary step in order to extract quantitative morphological information
from the tissue. Being able to extract morphometrics from the tissue would benefit several
applications: document nerve morphometry across species or tissues, get a better understanding
of the origins of demyelinating diseases, and validate novel magnetic resonance imaging
biomarkers sensitive to myelin content.
The main objective of this research project is to design, implement and validate an
automatic axon and myelin segmentation framework for microscopy images and use it to extract
relevant morphological metrics. Several segmentation approaches exist in the literature for
similar applications, but most of them are not fully automatic, are designed to work on a specific
microscopy modality and/or are not made available to the research community. Two
segmentation frameworks were developed as part of this project: AxonSeg and AxonDeepSeg.
The AxonSeg package (https://github.com/neuropoly/axonseg) uses a segmentation
approach based on standard image processing. The segmentation pipeline includes an extended-minima
transform, a discriminant analysis model based on shape and intensity features, an edge
detection algorithm, and a double active contours step. The segmentation output is used to
compute morphological metrics. Validation of the framework was performed on optical, electron and CARS microscopy.
The AxonDeepSeg package (https://github.com/neuropoly/axondeepseg) uses a
segmentation approach based on convolutional neural networks. A fully convolutional network
architecture was designed for the semantic 3-class segmentation of myelinated axons. A scanning
electron microscopy (SEM) model trained on rat spinal cord samples and a transmission electron
microscopy (TEM) model trained on mouse corpus callosum samples are presented. Both models
presented high pixel-wise accuracy on test datasets (85% on rat SEM, 81% on human SEM, 95%
on mice TEM and 84% on macaque TEM). We show that AxonDeepSeg models are robust to
noise, blurring and intensity changes. AxonDeepSeg was used to segment a full rat spinal cord
slice, and morphological metrics extracted from white matter tracts correlated well with the
literature. The AxonDeepSeg framework presented a higher segmentation accuracy when
compared to AxonSeg. Both AxonSeg and AxonDeepSeg are open source (MIT license) and thus
freely available for use by the research community.
Future iterations are planned to improve and extend this work. Training of new models for
other microscopy modalities, training on larger datasets to improve generalization and
robustness, and exploration of novel deep learning architectures are some of the short-term
objectives. Moreover, the current segmentation models have only been tested on healthy tissues.
Another important short-term objective would be to assess the performance of these models on
demyelinated samples.
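Morphometrics such as equivalent axon diameter and myelin thickness can be derived from the segmented axon and fiber areas. A minimal sketch under an assumed pixel size, with function names of our own choosing (not the AxonSeg or AxonDeepSeg API):

```python
import math

def equivalent_diameter(area_px, px_size_um):
    """Diameter (in micrometres) of the circle with the same area as a
    segmented region; area_px is the region's pixel count."""
    area_um2 = area_px * px_size_um ** 2
    return 2.0 * math.sqrt(area_um2 / math.pi)

def myelin_thickness(axon_area_px, fiber_area_px, px_size_um):
    """Approximate myelin thickness as half the difference between the
    equivalent diameters of the whole fiber and of the axon alone."""
    d_fiber = equivalent_diameter(fiber_area_px, px_size_um)
    d_axon = equivalent_diameter(axon_area_px, px_size_um)
    return (d_fiber - d_axon) / 2.0

# A circular axon of radius 10 px inside a fiber of radius 15 px,
# at an assumed pixel size of 0.1 um:
axon_px = math.pi * 10 ** 2
fiber_px = math.pi * 15 ** 2
print(round(myelin_thickness(axon_px, fiber_px, 0.1), 3))  # 0.5
```

The equivalent-diameter convention assumes roughly circular cross-sections; elongated or obliquely cut fibers need shape-aware measures instead.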
Towards a representative reference for MRI-based human axon radius assessment using light microscopy
Non-invasive assessment of axon radii via MRI bears great potential for clinical and neuroscience research as it is a main determinant of the neuronal conduction velocity. However, there is a lack of representative histological reference data at the scale of the cross-section of MRI voxels for validating the MRI-visible, effective radius (reff). Because the current gold standard stems from neuroanatomical studies designed to estimate the bulk-determined arithmetic mean radius (rarith) on small ensembles of axons, it is unsuited to estimate the tail-weighted reff. We propose CNN-based segmentation on high-resolution, large-scale light microscopy (lsLM) data to generate a representative reference for reff. In a human corpus callosum, we assessed estimation accuracy and bias of rarith and reff. Furthermore, we investigated whether mapping anatomy-related variation of rarith and reff is confounded by low-frequency variation of the image intensity, e.g., due to staining heterogeneity. Finally, we analyzed the error due to outstandingly large axons in reff. Compared to rarith, reff was estimated with higher accuracy (maximum normalized-root-mean-square-error of reff: 8.5 %; rarith: 19.5 %) and lower bias (maximum absolute normalized-mean-bias-error of reff: 4.8 %; rarith: 13.4 %). While rarith was confounded by variation of the image intensity, variation of reff seemed anatomy-related. The largest axons contributed between 0.8 % and 2.9 % to reff. In conclusion, the proposed method is a step towards representatively estimating reff at MRI voxel resolution. Further investigations are required to assess generalization to other brains and brain areas with different axon radii distributions
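For context, the MRI-visible effective radius is commonly defined in the axon-diameter MRI literature as reff = (⟨r^6⟩/⟨r^2⟩)^(1/4), which is why it is tail-weighted: the largest axons dominate the sixth moment. A small sketch contrasting it with the arithmetic mean (the definition is assumed from that literature, not quoted from this paper):

```python
def arithmetic_mean_radius(radii):
    """Bulk-determined arithmetic mean radius (rarith)."""
    return sum(radii) / len(radii)

def effective_radius(radii):
    """Tail-weighted, MRI-visible effective radius, using the definition
    common in axon-diameter MRI work: (<r^6> / <r^2>) ** (1/4)."""
    n = len(radii)
    m6 = sum(r ** 6 for r in radii) / n
    m2 = sum(r ** 2 for r in radii) / n
    return (m6 / m2) ** 0.25

# One outstandingly large axon among a hundred dominates reff:
radii = [0.5] * 99 + [3.0]
print(arithmetic_mean_radius(radii))  # 0.525
print(round(effective_radius(radii), 3))
```

This gap between rarith and reff on skewed distributions is exactly why a reference built for the arithmetic mean is unsuited for validating reff.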
Automated pipeline for nerve fiber selection and g-ratio calculation in optical microscopy: exploring staining protocol variations
G-ratio is crucial for understanding the nervous system's health and function as it measures the relative myelin thickness around an axon. However, manual measurement is biased and variable, emphasizing the need for an automated and standardized technique. Although deep learning holds promise, current implementations lack clinical relevance and generalizability. This study aimed to develop an automated pipeline for selecting nerve fibers and calculating relevant g-ratio using quality parameters in optical microscopy. Histological sections from the sciatic nerves of 16 female mice were prepared and stained with either p-phenylenediamine (PPD) or toluidine blue (TB). A custom U-Net model was trained on a mix of both types of staining to segment the sections based on 7,694 manually delineated nerve fibers. Post-processing excluded non-relevant nerves. Axon diameter, myelin thickness, and g-ratio were computed from the segmentation results, and their reliability was assessed using the intraclass correlation coefficient (ICC). Validation was performed on adjacent cuts of the same nerve. Then, morphometrical analyses of both staining techniques were performed. High agreement with the ground truth was shown by the model, with Dice scores of 0.86 (axon) and 0.80 (myelin) and pixel-wise accuracy of 0.98 (axon) and 0.94 (myelin). Good inter-device reliability was observed with ICC at 0.87 (g-ratio) and 0.83 (myelin thickness), and an excellent ICC of 0.99 for axon diameter. Although axon diameter significantly differed from the ground truth (p = 0.006), g-ratio (p = 0.098) and myelin thickness (p = 0.877) showed no significant differences. No statistical differences in morphological parameters (g-ratio, myelin thickness, and axon diameter) were found in adjacent cuts of the same nerve (ANOVA p-values: 0.34, 0.34, and 0.39, respectively).
Comparing all animals, staining techniques yielded significant differences in mean g-ratio (PPD: 0.48 ± 0.04, TB: 0.50 ± 0.04), myelin thickness (PPD: 0.83 ± 0.28 μm, TB: 0.60 ± 0.20 μm), and axon diameter (PPD: 1.80 ± 0.63 μm, TB: 1.78 ± 0.63 μm). The proposed pipeline automatically selects relevant nerve fibers for g-ratio calculation in optical microscopy. This provides a reliable measurement method and serves as a potential pre-selection approach for large datasets in the context of healthy tissue. It remains to be demonstrated whether this method is applicable to measuring g-ratio in neurological disorders by comparing healthy and pathological tissue. Additionally, our findings emphasize the need for careful interpretation of inter-staining morphological parameters.
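The g-ratio itself is the ratio of the inner (axon) diameter to the outer (fiber) diameter; given an axon diameter and a myelin thickness it follows directly (a generic sketch, not the study's pipeline code):

```python
def g_ratio(axon_diameter, myelin_thickness):
    """g-ratio: inner (axon) diameter over outer (fiber) diameter,
    where the fiber adds one myelin thickness on each side."""
    fiber_diameter = axon_diameter + 2.0 * myelin_thickness
    return axon_diameter / fiber_diameter

# Example with round numbers (not the study's per-fiber data):
print(round(g_ratio(2.0, 0.5), 3))  # 0.667
```

Note that a cohort's mean g-ratio is a mean of per-fiber ratios, so it cannot be reproduced by plugging the mean diameter and mean thickness into this formula: a mean of ratios differs from a ratio of means.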
Segmentation in large-scale cellular electron microscopy with deep learning: A literature survey
Electron microscopy (EM) enables high-resolution imaging of tissues and cells based on 2D and 3D imaging techniques. Due to the laborious and time-consuming nature of manual segmentation of large-scale EM datasets, automated segmentation approaches are crucial. This review focuses on the progress of deep learning-based segmentation techniques in large-scale cellular EM throughout the last six years, during which significant progress has been made in both semantic and instance segmentation. A detailed account is given for the key datasets that contributed to the proliferation of deep learning in 2D and 3D EM segmentation. The review covers supervised, unsupervised, and self-supervised learning methods and examines how these algorithms were adapted to the task of segmenting cellular and sub-cellular structures in EM images. The special challenges posed by such images, like heterogeneity and spatial complexity, and the network architectures that overcame some of them are described. Moreover, an overview of the evaluation measures used to benchmark EM datasets in various segmentation tasks is provided. Finally, an outlook of current trends and future prospects of EM segmentation is given, especially with large-scale models and unlabeled images to learn generic features across EM datasets
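Among the evaluation measures used to benchmark such segmentation tasks, overlap scores like the Dice coefficient and intersection-over-union (Jaccard index) are the most common for binary masks; a pure-Python sketch:

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def iou(pred, truth):
    """Intersection over union (Jaccard index) for binary masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    union = sum(max(p, t) for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred = [1, 1, 0, 0]
truth = [1, 0, 1, 0]
print(dice(pred, truth), iou(pred, truth))  # 0.5 0.3333333333333333
```

Unlike plain pixel accuracy, both scores ignore true-negative background pixels, which makes them far more informative on sparse structures.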
AxoNet: A Deep Learning-based Tool to Count Retinal Ganglion Cell Axons
In this work, we develop a robust, extensible tool to automatically and accurately count retinal ganglion cell axons in optic nerve (ON) tissue images from various animal models of glaucoma. We adapted deep learning to regress pixelwise axon count density estimates, which were then integrated over the image area to determine axon counts. The tool, termed AxoNet, was trained and evaluated using a dataset containing images of ON regions randomly selected from whole cross sections of both control and damaged rat ONs and manually annotated for axon count and location. This rat-trained network was then applied to a separate dataset of non-human primate (NHP) ON images. AxoNet was compared to two existing automated axon counting tools, AxonMaster and AxonJ, using both datasets. AxoNet outperformed the existing tools on both the rat and NHP ON datasets as judged by mean absolute error, R2 values when regressing automated vs. manual counts, and Bland-Altman analysis. AxoNet does not rely on hand-crafted image features for axon recognition and is robust to variations in the extent of ON tissue damage, image quality, and species of mammal. Therefore, AxoNet is not species-specific and can be extended to quantify additional ON characteristics in glaucoma and potentially other neurodegenerative diseases.
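The density-regression idea can be sketched as follows: the network outputs a per-pixel count density whose integral over the image recovers the object count (pure-Python illustration; the names and toy map are ours, not the AxoNet code):

```python
def count_from_density(density_map):
    """Integrate a per-pixel count density (2D list) to an object count.
    Each annotated object contributes unit mass to the map, so summing
    the map recovers the number of objects."""
    return sum(sum(row) for row in density_map)

# Two axons, each spread as unit mass over a few pixels:
dmap = [[0.0, 0.5, 0.5],
        [0.0, 0.25, 0.25],
        [0.0, 0.25, 0.25]]
print(count_from_density(dmap))  # 2.0
```

Counting by integrating a density avoids explicit instance detection, which is why it stays robust when axons overlap or tissue damage blurs individual boundaries.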
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved
the performance of classification and (anomaly) detection in computer vision problems, applying
these methods in medical imaging pipelines remains a very challenging task. One of the main
reasons for this is the amount of variability that is encountered and encapsulated in human
anatomy and subsequently reflected in medical images. This fundamental factor impacts most
stages of modern medical imaging processing pipelines.
Variability of human anatomy makes it virtually impossible to build large datasets for each disease
with labels and annotation for fully supervised machine learning. An efficient way to cope with this is
to try and learn only from normal samples. Such data is much easier to collect. A case study of such
an automatic anomaly detection system based on normative learning is presented in this work. We
present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative
models trained using only normal/healthy subjects.
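The normative-learning idea can be sketched as thresholding a reconstruction error: a generative model trained only on normal anatomy reconstructs normal inputs well, so a high error flags a candidate anomaly. An illustrative pure-Python sketch (names and threshold are our assumptions, not the thesis code):

```python
def anomaly_score(image, reconstruction):
    """Mean squared reconstruction error between an input and the
    generative model's reconstruction (both flat lists of floats)."""
    n = len(image)
    return sum((a - b) ** 2 for a, b in zip(image, reconstruction)) / n

def is_anomalous(image, reconstruction, threshold):
    """Flag a scan whose reconstruction error exceeds a threshold
    chosen on held-out normal data."""
    return anomaly_score(image, reconstruction) > threshold

normal = [0.2, 0.4, 0.4, 0.2]
print(is_anomalous(normal, [0.2, 0.4, 0.4, 0.2], threshold=0.05))  # False
print(is_anomalous(normal, [0.9, 0.9, 0.1, 0.1], threshold=0.05))  # True
```

Because only normal samples are needed for training, the threshold is the sole place where any notion of "abnormal" enters the system.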
However, despite significant improvements in automatic abnormality detection systems, clinical
routine continues to rely exclusively on the contribution of overburdened medical experts to
diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging
processing pipeline entails uncertainty, which is mainly correlated with inter-observer
variability. From the perspective of building an automated medical imaging system, it is still an
open question to what extent this kind of variability and the resulting uncertainty are
introduced during the training of a model and how they affect the final performance of the task.
Consequently, it is very important to explore the effect of inter-observer variability both on
the reliable estimation of a model's uncertainty and on the model's performance in a specific
machine learning task. A thorough investigation of this issue is presented in this work by
leveraging automated estimates of machine learning model uncertainty, inter-observer variability
and segmentation task performance on lung CT scan images.
Finally, an overview of existing anomaly detection methods in medical imaging is presented.
This state-of-the-art survey includes both conventional pattern recognition methods and deep
learning-based methods. It is one of the first literature surveys attempted in this specific
research area.