5 research outputs found
Estimation of blood oxygenation with learned spectral decoloring for quantitative photoacoustic imaging (LSD-qPAI)
One of the main applications of photoacoustic (PA) imaging is the recovery of
functional tissue properties, such as blood oxygenation (sO2). This is
typically achieved by linear spectral unmixing of relevant chromophores from
multispectral photoacoustic images. Despite the progress that has been made
towards quantitative PA imaging (qPAI), most sO2 estimation methods yield poor
results in realistic settings. In this work, we tackle the challenge by
employing learned spectral decoloring for quantitative photoacoustic imaging
(LSD-qPAI) to obtain quantitative estimates for blood oxygenation. LSD-qPAI
computes sO2 directly from pixel-wise initial pressure spectra Sp0, which are
vectors comprised of the initial pressure at the same spatial location over all
recorded wavelengths. Initial results suggest that LSD-qPAI is able to obtain
accurate sO2 estimates directly from multispectral photoacoustic measurements
in silico and plausible estimates in vivo.
Comment: 5 pages
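The linear spectral unmixing baseline that LSD-qPAI is compared against can be sketched as a pixel-wise least-squares fit of chromophore absorption spectra to the initial pressure spectrum. The absorption values below are illustrative placeholders rather than tabulated data, and wavelength-dependent fluence (the effect that makes this baseline fail in realistic settings) is ignored:

```python
import numpy as np

# Hypothetical molar absorption values (rows: wavelengths, cols: [HbO2, Hb]).
# Real unmixing uses tabulated spectra; these numbers are for illustration only.
A = np.array([
    [2.77, 1.40],   # e.g. 750 nm
    [1.96, 1.79],   # e.g. 800 nm (near the isosbestic point)
    [2.16, 1.15],   # e.g. 850 nm
    [2.92, 0.81],   # e.g. 900 nm
])

def unmix_so2(p0_spectrum, A):
    """Least-squares unmixing of one pixel's initial pressure spectrum
    into [HbO2, Hb] concentrations, then sO2 = HbO2 / (HbO2 + Hb)."""
    c, *_ = np.linalg.lstsq(A, p0_spectrum, rcond=None)
    return c[0] / (c[0] + c[1])

# Simulate a pixel with 70% oxygenation and a constant fluence.
c_true = np.array([0.7, 0.3])
p0 = A @ c_true
print(round(unmix_so2(p0, A), 2))  # → 0.7
```

With constant fluence the fit recovers sO2 exactly; in tissue, the fluence varies with wavelength and depth, which is precisely why the learned approach in this abstract is proposed.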
absO2luteU-Net: Tissue Oxygenation Calculation Using Photoacoustic Imaging and Convolutional Neural Networks
Photoacoustic (PA) imaging uses incident light to generate ultrasound signals within tissues. Using PA imaging to accurately measure hemoglobin concentration and calculate oxygenation (sO2) requires prior tissue knowledge and costly computational methods. However, this thesis shows that machine learning algorithms can accurately and quickly estimate sO2. absO2luteU-Net, a convolutional neural network, was trained on Monte Carlo simulated multispectral PA data and predicted sO2 with higher accuracy compared to simple linear unmixing, suggesting machine learning can solve the fluence estimation problem. This project was funded by the Kaminsky Family Fund and the Neukom Institute.
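The core idea in this abstract, that a model trained on simulated data can learn a fluence-invariant mapping from spectra to sO2, can be illustrated with a much simpler stand-in for the CNN: a plain linear regression on L2-normalized pixel spectra. This is a toy sketch under strong assumptions (hypothetical absorption values, fluence reduced to an unknown scalar per pixel, whereas real fluence is wavelength-dependent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical absorption values for [HbO2, Hb] at four wavelengths.
A = np.array([[2.77, 1.40],
              [1.96, 1.79],
              [2.16, 1.15],
              [2.92, 0.81]])

def simulate(n):
    """Simulated pixel spectra: random sO2 plus a random scalar fluence
    factor per pixel (a simplification of the real, spectral fluence)."""
    s = rng.uniform(0.0, 1.0, n)
    c = np.stack([s, 1.0 - s], axis=1)             # [HbO2, Hb] fractions
    p0 = c @ A.T * rng.uniform(0.5, 2.0, (n, 1))   # unknown fluence scale
    return p0, s

def features(p0):
    """L2-normalize each spectrum so the unknown scale cancels out."""
    x = p0 / np.linalg.norm(p0, axis=1, keepdims=True)
    return np.hstack([x, np.ones((len(x), 1))])    # append a bias term

# "Training": linear regression stands in for the CNN of the abstract.
p0_train, s_train = simulate(500)
w, *_ = np.linalg.lstsq(features(p0_train), s_train, rcond=None)

# Evaluate on fresh simulated pixels with different fluence scales.
p0_test, s_test = simulate(200)
mae = np.mean(np.abs(features(p0_test) @ w - s_test))
print(f"mean absolute sO2 error: {mae:.3f}")
```

The normalization removes the per-pixel scale ambiguity, so even this linear model estimates sO2 well on the toy data; the CNN in the thesis additionally exploits spatial context to cope with realistic, wavelength-dependent fluence.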
Towards accurate quantitative photoacoustic imaging: learning vascular blood oxygen saturation in 3D
Significance: 2D fully convolutional neural networks have been shown to be
capable of producing maps of sO2 from 2D simulated images of simple tissue models.
However, their potential to produce accurate estimates in vivo is uncertain as
they are limited by the 2D nature of the training data when the problem is
inherently 3D, and they have not been tested with realistic images.
Aim: To demonstrate the capability of deep neural networks to process whole
3D images and output 3D maps of vascular sO2 from realistic tissue
models/images.
Approach: Two separate fully convolutional neural networks were trained to
produce 3D maps of vascular blood oxygen saturation and vessel positions from
multiwavelength simulated images of tissue models.
Results: The mean of the absolute difference between the true mean vessel
sO2 and the network output for 40 examples was 4.4% and the standard
deviation was 4.5%.
Conclusions: 3D fully convolutional networks were shown to be capable of
producing accurate sO2 maps using the full extent of spatial information contained
within 3D images generated under conditions mimicking real imaging scenarios.
This work demonstrates that networks can cope with some of the confounding
effects present in real images such as limited-view artefacts, and have the
potential to produce accurate estimates in vivo.
Deep learning for photoacoustic imaging: a survey
Machine learning has developed dramatically and found applications in a wide
range of fields over the past few years. This boom originated in 2009, when a
new model, the deep artificial neural network, began to surpass other
established, mature models on several important benchmarks. It has since been
widely adopted in academia and industry. From image analysis to natural
language processing, deep neural networks have become the state-of-the-art
machine learning models. They hold great potential for medical imaging
technology, medical data analysis, medical diagnosis, and other healthcare
problems, and are being promoted in both pre-clinical and clinical stages. In
this review, we provide an overview
of some new developments and challenges in the application of machine learning
to medical image analysis, with a special focus on deep learning in
photoacoustic imaging. The aim of this review is threefold: (i) introducing
deep learning with some important basics, (ii) reviewing recent works that
apply deep learning in the entire ecological chain of photoacoustic imaging,
from image reconstruction to disease diagnosis, (iii) providing some open
source materials and other resources for researchers interested in applying
deep learning to photoacoustic imaging.
Comment: A review of deep learning for photoacoustic imaging
Deep learning for biomedical photoacoustic imaging: A review
Photoacoustic imaging (PAI) is a promising emerging imaging modality that
enables spatially resolved imaging of optical tissue properties up to several
centimeters deep in tissue, creating the potential for numerous exciting
clinical applications. However, extraction of relevant tissue parameters from
the raw data requires the solving of inverse image reconstruction problems,
which have proven extremely difficult to solve. The application of deep
learning methods has recently exploded in popularity, leading to impressive
successes in the context of medical imaging and also finding first use in the
field of PAI. Deep learning methods possess unique advantages that can
facilitate the clinical translation of PAI, such as extremely fast computation
times and the fact that they can be adapted to any given problem. In this
review, we examine the current state of the art regarding deep learning in PAI
and identify potential directions of research that will help to reach the goal
of clinical applicability.
Comment: 31 pages, 8 figures, 3 tables, 169 references