Machine Learning for Optical Scanning Probe Nanoscopy
The ability to perform nanometer-scale optical imaging and spectroscopy is
key to deciphering the low-energy effects in quantum materials, as well as
vibrational fingerprints in planetary and extraterrestrial particles, catalytic
substances, and aqueous biological samples. The scattering-type scanning
near-field optical microscopy (s-SNOM) technique has recently spread to many
research fields and enabled notable discoveries. In this brief perspective, we
show that the s-SNOM, together with scanning probe research in general, can
benefit in many ways from artificial intelligence (AI) and machine learning
(ML) algorithms. We show that, with the help of AI- and ML-enhanced data
acquisition and analysis, scanning probe optical nanoscopy is poised to become
more efficient, accurate, and intelligent.
Reducing time to discovery: materials and molecular modeling, imaging, informatics, and integration
This work was supported by the KAIST-funded Global Singularity Research Program for 2019 and 2020. J.C.A. acknowledges support from the National Science Foundation under Grant TRIPODS+X:RES-1839234 and the Nano/Human Interfaces Presidential Initiative. S.V.K.'s effort was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division, and was performed at Oak Ridge National Laboratory's Center for Nanophase Materials Sciences (CNMS), a U.S. Department of Energy, Office of Science User Facility.

Multiscale and multimodal imaging of material structures and properties provides solid ground on which materials theory and design can flourish. Recently, KAIST announced 10 flagship research fields, which include KAIST Materials Revolution: Materials and Molecular Modeling, Imaging, Informatics and Integration (M3I3). The M3I3 initiative aims to reduce the time for the discovery, design, and development of materials by elucidating multiscale processing-structure-property relationships and the materials hierarchy, which are to be quantified and understood through a combination of machine learning and scientific insight. In this review, we begin by introducing recent progress in related initiatives around the globe, such as the Materials Genome Initiative (U.S.), Materials Informatics (U.S.), the Materials Project (U.S.), the Open Quantum Materials Database (U.S.), the Materials Research by Information Integration Initiative (Japan), Novel Materials Discovery (E.U.), the NOMAD repository (E.U.), the Materials Scientific Data Sharing Network (China), Vom Material zur Innovation (Germany), and Creative Materials Discovery (Korea), and discuss the role of multiscale materials and molecular imaging combined with machine learning in realizing the vision of M3I3.
Specifically, microscopies using photons, electrons, and physical probes will be revisited with a focus on the multiscale structural hierarchy, as well as structure-property relationships. Additionally, data mining from the literature combined with machine learning will be shown to be more efficient than the classical approach in finding future directions toward materials structures with improved properties. Examples of materials for applications in energy and information will be reviewed and discussed. A case study on the development of Ni-Co-Mn cathode materials illustrates M3I3's approach to creating libraries of multiscale structure-property-processing relationships. We end with an outlook on future developments in the field of M3I3.
Fast fluorescence lifetime imaging and sensing via deep learning
Error on title page: the year of award is 2023.

Fluorescence lifetime imaging microscopy (FLIM) has become a valuable tool in diverse disciplines. This thesis presents deep learning (DL) approaches to addressing two major challenges in FLIM: slow and complex data analysis and the high photon budget required to precisely quantify fluorescence lifetimes. DL's ability to extract high-dimensional features from data has revolutionized optical and biomedical imaging analysis. This thesis contributes several novel DL FLIM algorithms that significantly expand FLIM's scope.
Firstly, a hardware-friendly pixel-wise DL algorithm is proposed for fast FLIM data analysis. The algorithm has a simple architecture yet can effectively resolve multi-exponential decay models. Its calculation speed and accuracy significantly outperform those of conventional methods.
Secondly, a DL algorithm is proposed to improve FLIM image spatial resolution, obtaining high-resolution (HR) fluorescence lifetime images from low-resolution (LR) images. A computational framework is developed to generate large-scale semi-synthetic FLIM datasets to address the challenge of the lack of sufficient high-quality FLIM datasets. This algorithm offers a practical approach to obtaining HR FLIM images quickly for FLIM systems.
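The LR/HR training pairs described above can be illustrated with a toy downsampling step. This is a plain 2x2 average pool for demonstration only, not the thesis's semi-synthetic data-generation framework:

```python
def downsample2x(img):
    """Create a low-resolution image by 2x2 average pooling."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# Hypothetical 4x4 "lifetime image"; the paired (LR, HR) images would
# then serve as training samples for a super-resolution network.
hr = [[1, 1, 3, 3],
      [1, 1, 3, 3],
      [5, 5, 7, 7],
      [5, 5, 7, 7]]
lr = downsample2x(hr)   # 2x2 image: [[1.0, 3.0], [5.0, 7.0]]
```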
Thirdly, a DL algorithm, named Few-Photon Fluorescence Lifetime Imaging (FPFLI), is developed to analyze FLIM images with only a few photons per pixel. FPFLI uses spatial correlation and intensity information to robustly estimate fluorescence lifetime images, pushing the photon budget to a record-low level of only a few photons per pixel.
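A minimal simulation shows why the photon budget matters. For a mono-exponential decay, the maximum-likelihood lifetime estimate is simply the mean photon arrival time; the thesis's DL estimators are far more sophisticated, but the same noise scaling applies:

```python
import random

def estimate_lifetime(arrival_times):
    """Maximum-likelihood lifetime for a mono-exponential decay:
    the sample mean of the photon arrival times."""
    return sum(arrival_times) / len(arrival_times)

random.seed(0)
true_tau = 2.0  # ns, hypothetical fluorophore lifetime

# With thousands of photons per pixel the estimate is tight...
many = [random.expovariate(1.0 / true_tau) for _ in range(20000)]
# ...with only a handful of photons it scatters widely, which is why
# few-photon FLIM needs extra information (e.g. spatial correlations).
few = [random.expovariate(1.0 / true_tau) for _ in range(5)]

print(estimate_lifetime(many))  # close to 2.0
print(estimate_lifetime(few))   # noisy
```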
Finally, a time-resolved flow cytometry (TRFC) system is developed by integrating an advanced CMOS single-photon avalanche diode (SPAD) array and a DL processor. The SPAD array, using a parallel light detection scheme, shows an excellent photon-counting throughput. A quantized convolutional neural network (QCNN) algorithm is designed and implemented on a field-programmable gate array as an embedded processor. The processor resolves fluorescence lifetimes against disturbing noise, showing unparalleled high accuracy, fast analysis speed, and low power consumption.
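The quantized arithmetic at the heart of a QCNN can be sketched in a few lines. This is a generic uniform symmetric quantization example, not the thesis's FPGA implementation:

```python
def quantize(weights, bits=8):
    """Uniform symmetric quantization: map floats to signed integers."""
    qmax = 2 ** (bits - 1) - 1          # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def int_dot(q_weights, q_inputs, w_scale, x_scale):
    """Integer multiply-accumulate, rescaled back to float at the end,
    mimicking how an embedded processor would evaluate one neuron."""
    acc = sum(qw * qx for qw, qx in zip(q_weights, q_inputs))
    return acc * w_scale * x_scale

w = [0.5, -1.0, 0.25]
x = [1.0, 2.0, 4.0]
qw, sw = quantize(w)
qx, sx = quantize(x)
approx = int_dot(qw, qx, sw, sx)                 # close to the float result
exact = sum(a * b for a, b in zip(w, x))         # -0.5
```

Keeping the multiply-accumulate loop in integer arithmetic and deferring the rescale to the end is what makes such networks cheap in FPGA logic.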
Review: Deep learning in electron microscopy
Deep learning is transforming most areas of science and technology, including electron microscopy. This review offers a practical perspective aimed at developers with limited familiarity with the field. For context, we review popular applications of deep learning in electron microscopy. Next, we discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.
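As a concrete reminder of the core operation underlying the convolutional architectures such reviews cover, a "valid" 2D convolution can be written directly (a plain Python sketch, not production code):

```python
def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: no padding, stride 1 — the
    building block of convolutional neural networks."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(ow)]
            for y in range(oh)]

# A 3x3 box-blur kernel applied to a hypothetical 4x4 micrograph patch.
patch = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
blur = [[1 / 9.0] * 3 for _ in range(3)]
out = conv2d(patch, blur)   # 2x2 output, each value close to 4.0
```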
Modeling and Analysis of Subcellular Protein Localization in Hyper-Dimensional Fluorescent Microscopy Images Using Deep Learning Methods
Hyper-dimensional images are informative and increasingly common in biomedical research. However, machine learning methods for studying and processing hyper-dimensional images remain underdeveloped. Most methods model the mapping functions between input and output by focusing on spatial relationships alone, neglecting temporal and causal relationships. In many cases, the spatial, temporal, and causal relationships are correlated and form a relationship complex; modeling only the spatial relationship can therefore yield an inaccurate mapping function and undesired output. Despite its importance, modeling the relationship complex faces multiple challenges, including model complexity and data availability. The objective of this dissertation is to comprehensively study the modeling of mapping functions for spatial-temporal and spatial-temporal-causal relationships in hyper-dimensional data with deep learning approaches. The modeling methods are expected to accurately capture these complex relationships at the class level and the object level, so that new image processing tools can be developed to study the relationships between targets in hyper-dimensional data. In this dissertation, four cases of relationship complex are studied: class-level spatial-temporal-causal and spatial-temporal relationship modeling, and object-level spatial-temporal-causal and spatial-temporal relationship modeling. The modeling is achieved with deep learning networks that implicitly represent the mapping functions in their weight matrices. For the spatial-temporal relationship, because cause-factor information is unavailable, discriminative modeling that relies only on available information is studied. For class-level and object-level spatial-temporal-causal relationships, generative modeling is studied, with a new deep learning network and three new tools proposed.
For spatial-temporal relationship modeling, a state-of-the-art segmentation network was found to be the best performer among 18 networks. Based on accurate segmentation, we study object-level temporal dynamics and interactions through dynamics tracking. The multi-object portion tracking (MOPT) method enables object tracking at the subcellular level and identifies object events, including birth, death, splitting, and fusion. The tracking results are 2.96% higher in consistent-tracking accuracy and 35.48% higher in event-identification accuracy than existing state-of-the-art tracking methods. For spatial-temporal-causal relationship modeling, the proposed four-dimensional reslicing generative adversarial network (4DR-GAN) captures the complex relationships between the input and the target proteins. Experimental results on four groups of proteins demonstrate the efficacy of 4DR-GAN compared with the widely used Pix2Pix network. On protein localization prediction (PLP), the localization predicted by 4DR-GAN is more accurate in subcellular localization, temporal consistency, and dynamics. Building on efficient PLP, the digital activation (DA) and digital inactivation (DI) tools allow precise spatial and temporal control of global and local localization manipulation. They allow researchers to study protein functions and causal relationships by observing the digital manipulation and the PLP output response.
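A toy version of overlap-based event detection in the spirit of multi-object tracking (not the dissertation's actual MOPT algorithm) can be sketched with pixel-set intersection-over-union:

```python
def iou(a, b):
    """Intersection-over-union of two pixel sets."""
    return len(a & b) / len(a | b)

def match_events(prev_objs, curr_objs, thresh=0.1):
    """Toy event detector: an object overlapping two successors has
    split; an object overlapping two predecessors is a fusion; no
    overlap at all indicates death or birth."""
    events = []
    for i, p in enumerate(prev_objs):
        succ = [j for j, c in enumerate(curr_objs) if iou(p, c) > thresh]
        if len(succ) == 0:
            events.append(("death", i))
        elif len(succ) > 1:
            events.append(("split", i))
    for j, c in enumerate(curr_objs):
        pred = [i for i, p in enumerate(prev_objs) if iou(p, c) > thresh]
        if len(pred) == 0:
            events.append(("birth", j))
        elif len(pred) > 1:
            events.append(("fusion", j))
    return events

# Hypothetical masks as sets of (row, col) pixels: object 0 splits in two.
prev_objs = [{(0, 0), (0, 1), (0, 2), (0, 3)}]
curr_objs = [{(0, 0), (0, 1)}, {(0, 2), (0, 3)}]
print(match_events(prev_objs, curr_objs))  # [('split', 0)]
```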
Artificial Intelligence in Materials Science: Applications of Machine Learning to Extraction of Physically Meaningful Information from Atomic Resolution Microscopy Imaging
Materials science is the cornerstone of the technological development of the modern world, which has been largely shaped by advances in the fabrication of semiconductor materials and devices. However, Moore's Law is expected to end by 2025 as traditional transistor scaling reaches its limits. Moreover, the classical approach has proven unable to keep up with the needs of materials manufacturing, requiring more than 20 years to move a material from discovery to market. To adapt materials fabrication to the needs of the 21st century, it is necessary to develop methods for much faster processing of experimental data and for connecting the results to theory, with feedback flowing in both directions. Yet state-of-the-art analysis remains selective and manual, prone to human error and unable to handle the large quantities of data generated by modern equipment. Recent advances in scanning transmission electron and scanning tunneling microscopies have allowed imaging and manipulation of materials at the atomic level, and these capabilities require the development of automated, robust, reproducible methods. Artificial intelligence and machine learning have dealt with similar issues in applications such as image and speech recognition, autonomous vehicles, and other projects that are beginning to change the world around us. However, materials science faces significant challenges that prevent direct application of such models without taking physical constraints and domain expertise into account. Atomic-resolution imaging can generate data that lead to a better understanding of materials and their properties through artificial intelligence methods. Machine learning, in particular combinations of deep learning and probabilistic modeling, can learn to recognize physical features in imaging, automating this process and speeding up characterization.
By incorporating knowledge from theory and simulations into such frameworks, it is possible to create the foundation for automated atomic-scale manufacturing.
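One of the simplest feature-recognition steps in atomic-resolution imaging, locating bright atomic columns, can be sketched as a local-maximum search. This is a crude hand-written baseline, not the deep learning and probabilistic methods the abstract discusses:

```python
def local_maxima(img, threshold):
    """Return (row, col) of interior pixels brighter than `threshold`
    and strictly brighter than all 8 neighbours — a crude stand-in for
    locating atomic columns in a micrograph."""
    h, w = len(img), len(img[0])
    peaks = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = img[y][x]
            neighbours = [img[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if (dy, dx) != (0, 0)]
            if v > threshold and all(v > n for n in neighbours):
                peaks.append((y, x))
    return peaks

# Two bright "atoms" on a hypothetical 5x5 intensity grid.
img = [[0, 0, 0, 0, 0],
       [0, 9, 0, 0, 0],
       [0, 0, 0, 8, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
print(local_maxima(img, threshold=5))  # [(1, 1), (2, 3)]
```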
Automatic Axon and Myelin Segmentation of Microscopy Images and Morphometrics Extraction
In the nervous system, the transmission of electrical signals is ensured by the axons of the
white matter. A large portion of these axons, also known as nerve fibers, is surrounded by a
myelin sheath. The main role of the myelin sheath is to increase the transmission speed along the
axons, which is crucial for long distance communication. In demyelinating diseases such as
multiple sclerosis, the myelin sheath of the central nervous system is attacked by cells of the
immune system. Myelin degeneration caused by such disorders can manifest itself in different
ways at the microstructural level: loss of myelin content, decrease in the number of myelinated
axons, or even axonal damage.
High resolution microscopy of myelinated tissues can provide in-depth microstructural
information about the tissue under study. Segmentation of the axon and myelin content of a
microscopy image is a necessary step in order to extract quantitative morphological information
from the tissue. The ability to extract morphometrics would benefit several
applications: documenting nerve morphometry across species and tissues, better understanding
the origins of demyelinating diseases, and validating novel magnetic resonance imaging
biomarkers sensitive to myelin content.
The main objective of this research project is to design, implement and validate an
automatic axon and myelin segmentation framework for microscopy images and use it to extract
relevant morphological metrics. Several segmentation approaches exist in the literature for
similar applications, but most of them are not fully automatic, are designed to work on a specific
microscopy modality and/or are not made available to the research community. Two
segmentation frameworks were developed as part of this project: AxonSeg and AxonDeepSeg.
The AxonSeg package (https://github.com/neuropoly/axonseg) uses a segmentation
approach based on standard image processing. The segmentation pipeline includes an
extended-minima transform, a discriminant analysis model based on shape and intensity features, an edge
detection algorithm, and a double active contours step. The segmentation output is used to
compute morphological metrics. Validation of the framework was performed on optical, electron and CARS microscopy.
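The step from a binary segmentation to per-object morphometrics can be illustrated with a minimal connected-components labeling. This is a generic sketch, not AxonSeg's actual pipeline:

```python
def connected_components(mask):
    """Label 4-connected foreground regions of a binary mask — the
    kind of step that turns a segmentation into per-axon objects."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and mask[cy][cx] and not labels[cy][cx]):
                        labels[cy][cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, current

# Two separate "axons" in a toy binary mask; their pixel counts give a
# first morphometric (cross-sectional area in pixels).
mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 1]]
labels, n = connected_components(mask)   # n == 2
areas = [sum(row.count(k) for row in labels) for k in range(1, n + 1)]
```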
The AxonDeepSeg package (https://github.com/neuropoly/axondeepseg) uses a
segmentation approach based on convolutional neural networks. A fully convolutional network
architecture was designed for the semantic 3-class segmentation of myelinated axons. A scanning
electron microscopy (SEM) model trained on rat spinal cord samples and a transmission electron
microscopy (TEM) model trained on mouse corpus callosum samples are presented. Both models
presented high pixel-wise accuracy on test datasets (85% on rat SEM, 81% on human SEM, 95%
on mouse TEM, and 84% on macaque TEM). We show that AxonDeepSeg models are robust to
noise, blurring and intensity changes. AxonDeepSeg was used to segment a full rat spinal cord
slice, and morphological metrics extracted from white matter tracts correlated well with the
literature. The AxonDeepSeg framework showed higher segmentation accuracy when
literature. The AxonDeepSeg framework presented a higher segmentation accuracy when
compared to AxonSeg. Both AxonSeg and AxonDeepSeg are open source (MIT license) and thus
freely available for use by the research community.
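The pixel-wise accuracy metric behind the reported scores can be computed directly; a minimal sketch with hypothetical 3-class label maps:

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches ground truth."""
    total = correct = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            total += 1
            correct += (p == t)
    return correct / total

# Toy 2x4 label maps (0 = background, 1 = axon, 2 = myelin).
truth = [[0, 1, 1, 2],
         [0, 1, 2, 2]]
pred  = [[0, 1, 1, 2],
         [0, 2, 2, 2]]
print(pixel_accuracy(pred, truth))  # 0.875
```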
Future iterations are planned to improve and extend this work. Training of new models for
other microscopy modalities, training on larger datasets to improve generalization and
robustness, and exploration of novel deep learning architectures are some of the short-term
objectives. Moreover, the current segmentation models have only been tested on healthy tissues.
Another important short-term objective would be to assess the performance of these models on
demyelinated samples.
Developing a User-Friendly and Modular Framework for Deep Learning Methods in 3D Bioimage Segmentation
The emergence of deep learning has breathed new life into image analysis, especially segmentation, a challenging step required to quantify two-dimensional (2D) and three-dimensional (3D) objects. Despite deep learning's promise, these methods are only slowly spreading in the biological field. In this PhD project, the 3D nucleus of the cell is used as the object of interest to understand how its shape variations contribute to the organisation of the genetic material. First, a literature survey showed that very few publicly available methods for 3D nucleus segmentation provide the minimum requirements for reproducibility. These methods were subsequently benchmarked, and only one of them, called nnU-Net, surpassed the best specialized computer vision tool. Based on these observations, a new development philosophy was designed, and from it emerged Biom3d, a novel deep learning framework. Biom3d is a user-friendly tool successfully used by biologists involved in 3D nucleus segmentation, and it provides a new alternative for automatically and accurately computing nuclear shape parameters. Being well optimized, Biom3d also surpasses the performance of cutting-edge methods on a wide variety of biological and medical segmentation problems. Being modular, Biom3d is a sustainable framework compatible with the latest deep learning innovations, such as self-supervised methods. Self-supervision aims to reduce deep learning's heavy dependence on manual annotations by pretraining models on large unannotated datasets to extract information before retraining them on annotated datasets. In this work, a self-supervised approach based on pretraining an entire U-Net model with the Triplet and ArcFace losses was developed, and it demonstrates significant improvements over supervised methods for 3D segmentation.
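The triplet loss used for such pretraining can be stated in a few lines; this is the generic formulation with hypothetical 2-D embeddings, not Biom3d's training code:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: push the anchor-positive squared distance below
    the anchor-negative squared distance by at least `margin`."""
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

# Hypothetical embeddings of three image patches: the positive is close
# to the anchor and the negative is far, so the loss is already zero.
anchor, positive, negative = [0.0, 0.0], [0.1, 0.0], [1.0, 1.0]
loss = triplet_loss(anchor, positive, negative)   # 0.0
```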
The performance, modularity, and interdisciplinary nature of the tools developed during this project will serve as an innovation platform for a wide panel of users, ranging from biologists to future deep learning developers.