
    Machine learning enabled multiple illumination quantitative optoacoustic oximetry imaging in humans.

    Optoacoustic (OA) imaging is a promising modality for quantifying blood oxygen saturation (sO2) in various biomedical applications, such as diagnosis, monitoring of organ function, and tumor treatment planning. We present an accurate, practically feasible, real-time-capable method for quantitative sO2 imaging that combines multispectral (MS) and multiple illumination (MI) OA imaging with learned spectral decoloring (LSD). For this purpose, we developed a hybrid real-time MI MS OA imaging setup with ultrasound (US) imaging capability; we trained gradient boosting machines on MI spectrally colored absorbed-energy spectra generated by generic Monte Carlo simulations and used the trained models to estimate sO2 from real OA measurements. We validated MI-LSD in silico and on in vivo image sequences of radial arteries and accompanying veins of five healthy human volunteers, and compared the method's performance with prior LSD work and conventional linear unmixing. MI-LSD provided highly accurate results in silico and consistently plausible results in vivo. This preliminary study shows the potentially high applicability of quantitative OA oximetry imaging using our method.
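
    The training step described in the abstract can be pictured with a minimal sketch: fit a gradient boosting regressor on simulated multi-illumination absorbed-energy spectra to predict sO2. This is not the authors' code; the array shapes, wavelength/illumination counts, hyperparameters, and placeholder random data are assumptions standing in for the Monte Carlo spectra.

    # Illustrative sketch of learned spectral decoloring with gradient boosting,
    # assuming simulated spectra and ground-truth sO2 are already available.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    # Assumed shape: each row concatenates the normalized absorbed-energy spectrum
    # recorded at several wavelengths for each illumination position.
    n_samples, n_wavelengths, n_illuminations = 10000, 16, 2
    rng = np.random.default_rng(0)
    X = rng.random((n_samples, n_wavelengths * n_illuminations))  # placeholder for Monte Carlo spectra
    y = rng.random(n_samples)                                      # placeholder for ground-truth sO2

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = GradientBoostingRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X_train, y_train)

    print("MAE on held-out simulated spectra:", mean_absolute_error(y_test, model.predict(X_test)))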

    Real-time blood oxygenation tomography with multispectral photoacoustics

    Multispectral photoacoustics is an emerging biomedical imaging modality that combines the penetration depth and resolution of high-frequency medical ultrasonography with optical absorption contrast. This enables tomographic imaging of blood oxygen saturation, a functional biomarker with wide applications. Photoacoustic imaging (PAI) is already widely applied for small animal imaging in preclinical research; while PAI is a multiscale modality, its translation to clinical research and interventional use remains challenging. The objective of this thesis was to investigate the usefulness of multispectral PAI as a technique for interventional tomographic imaging of blood oxygenation. This thesis presents open challenges alongside research contributions to address them. These contributions are: (1) the design and implementation of an interventional PAI system; (2) methods for real-time photoacoustic (PA) image processing and quantification of tissue absorption and blood oxygenation; and (3) the application of multispectral PAI to translational neurosurgical research, performing the first high-spatiotemporal-resolution tomography of spreading depolarization and, at the same time, the first interventional PAI of any gyrencephalic (folded) brain. Such interventional imaging in neurology is one of many promising fields of application for PAI.
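
    The kind of blood-oxygenation quantification mentioned in contribution (2) is conventionally approached with linear spectral unmixing. Below is a minimal sketch of that standard idea, not the thesis implementation: the wavelengths, molar extinction values, and array shapes are illustrative assumptions.

    # Minimal sketch of linear spectral unmixing for blood oxygenation: per-pixel
    # PA amplitudes at several wavelengths are modeled as a linear mix of oxy- and
    # deoxyhemoglobin absorption, and sO2 = c_HbO2 / (c_HbO2 + c_Hb).
    import numpy as np

    # Illustrative molar extinction coefficients [cm^-1/M] for [HbO2, Hb] at
    # 750, 800, 850 nm (order-of-magnitude values, not reference data).
    E = np.array([
        [ 518.0, 1405.0],   # 750 nm
        [ 816.0,  761.0],   # 800 nm (near the isosbestic point)
        [1058.0,  691.0],   # 850 nm
    ])

    def unmix_so2(spectra):
        """spectra: (n_pixels, n_wavelengths) PA amplitudes; returns sO2 per pixel."""
        # Least-squares solve E @ [c_HbO2, c_Hb] = spectrum for every pixel.
        conc, *_ = np.linalg.lstsq(E, spectra.T, rcond=None)
        c_hbo2, c_hb = np.clip(conc, 0, None)
        return c_hbo2 / (c_hbo2 + c_hb + 1e-12)

    pixels = np.array([[1.0, 0.9, 1.1], [0.8, 0.85, 0.7]])  # toy multispectral amplitudes
    print(unmix_so2(pixels))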

    Compensating for visibility artefacts in photoacoustic imaging with a deep learning approach providing prediction uncertainties

    Conventional photoacoustic imaging may suffer from the limited view and bandwidth of ultrasound transducers. A deep learning approach is proposed to handle these problems and is demonstrated both in simulations and in experiments on a multi-scale model of a leaf skeleton. We employed an experimental approach to build the training and test sets, using photographs of the samples as ground-truth images. Reconstructions produced by the neural network show greatly improved image quality compared to conventional approaches. In addition, this work aimed at quantifying the reliability of the neural network predictions. To achieve this, the dropout Monte-Carlo procedure is applied to estimate a pixel-wise degree of confidence for each predicted picture. Lastly, we address the possibility of using transfer learning with simulated data in order to drastically limit the size of the experimental dataset.
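
    The dropout Monte-Carlo procedure can be summarized with a short sketch: keep dropout layers active at inference and aggregate repeated forward passes into a per-pixel mean prediction and standard deviation. This is a toy stand-in, not the paper's network; the architecture, dropout rate, and number of passes are assumptions.

    # Sketch of MC dropout for pixel-wise prediction uncertainty (PyTorch).
    import torch
    import torch.nn as nn

    class TinyReconNet(nn.Module):
        """Toy image-to-image network standing in for the reconstruction model."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Dropout2d(p=0.2),            # dropout layer used for MC sampling
                nn.Conv2d(16, 1, 3, padding=1),
            )
        def forward(self, x):
            return self.net(x)

    def mc_dropout_predict(model, x, n_passes=30):
        model.train()  # keeps dropout active; in practice only dropout layers need train mode
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_passes)])
        return samples.mean(dim=0), samples.std(dim=0)  # pixel-wise mean and uncertainty

    model = TinyReconNet()
    measurement = torch.randn(1, 1, 128, 128)  # placeholder input image
    mean_img, std_img = mc_dropout_predict(model, measurement)
    print(mean_img.shape, std_img.shape)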

    Data-driven quantitative photoacoustic tomography

    Spatial information about the 3D distribution of blood oxygen saturation (sO2) in vivo is of clinical interest because it encodes important physiological information about tissue health and pathology. Photoacoustic tomography (PAT) is a biomedical imaging modality that, in principle, can be used to acquire this information. Images are formed by illuminating the sample with a laser pulse; after multiple scattering events, the optical energy is absorbed. The subsequent rise in temperature induces an increase in pressure (the photoacoustic initial pressure p0) that propagates to the sample surface as an acoustic wave. These acoustic waves are detected as pressure time series by sensor arrays and used to reconstruct images of the sample's p0 distribution, which encodes information about the sample's absorption distribution and can be used to estimate sO2. However, an ill-posed nonlinear inverse problem stands in the way of acquiring such estimates in vivo. Current approaches to solving this problem fall short of being widely and successfully applied to in vivo tissues because they rely on simplifying assumptions about the tissue, prior knowledge of its optical properties, or the formulation of a forward model accurately describing image acquisition with a specific imaging system. Here, we investigate the use of data-driven approaches (deep convolutional networks) to solve this problem. Networks only require a dataset of examples to learn a mapping from PAT data to images of the sO2 distribution. We show the results of training a 3D convolutional network to estimate the 3D sO2 distribution within model tissues from 3D multiwavelength simulated images. However, acquiring a realistic training set to enable successful in vivo application is non-trivial, given the challenges associated with estimating ground-truth sO2 distributions and the current limitations of simulating training data. We suggest and test several methods to (1) acquire more realistic training data or (2) improve network performance in the absence of adequate quantities of realistic training data. For (1), we describe how training data may be acquired from an organ perfusion system and outline a possible design; separately, we describe how training data may be generated synthetically using a variant of generative adversarial networks called ambientGANs. For (2), we show how the accuracy of networks trained with limited training data can be improved with self-training, and we demonstrate how the domain gap between training and test sets can be minimised with unsupervised domain adaptation to improve quantification accuracy. Overall, this thesis clarifies the advantages of data-driven approaches and suggests concrete steps towards overcoming the challenges of in vivo application.
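
    The learned mapping from multiwavelength PAT images to sO2 can be pictured with a toy 3D convolutional network. The architecture, channel counts, and number of wavelengths below are assumptions for illustration, not the network used in the thesis.

    # Sketch of a voxel-wise PAT-to-sO2 mapping: a small 3D CNN taking
    # multiwavelength images (wavelengths as input channels) and outputting sO2.
    import torch
    import torch.nn as nn

    class So2Net3D(nn.Module):
        def __init__(self, n_wavelengths=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(n_wavelengths, 32, 3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 1, 1),
                nn.Sigmoid(),  # sO2 is a fraction in [0, 1]
            )
        def forward(self, x):
            return self.net(x)

    net = So2Net3D(n_wavelengths=8)
    volume = torch.randn(1, 8, 32, 32, 32)   # placeholder multiwavelength p0 volume
    so2_map = net(volume)                     # (1, 1, 32, 32, 32) voxel-wise estimate
    print(so2_map.shape)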

    Toward accurate quantitative photoacoustic imaging: learning vascular blood oxygen saturation in three dimensions

    Significance: Two-dimensional (2-D) fully convolutional neural networks have been shown capable of producing maps of sO2 from 2-D simulated images of simple tissue models. However, their potential to produce accurate estimates in vivo is uncertain, as they are limited by the 2-D nature of the training data when the problem is inherently three-dimensional (3-D), and they have not been tested with realistic images. Aim: To demonstrate the capability of deep neural networks to process whole 3-D images and output 3-D maps of vascular sO2 from realistic tissue models/images. Approach: Two separate fully convolutional neural networks were trained to produce 3-D maps of vascular blood oxygen saturation and vessel positions from multiwavelength simulated images of tissue models. Results: The mean of the absolute difference between the true mean vessel sO2 and the network output over 40 examples was 4.4%, with a standard deviation of 4.5%. Conclusions: 3-D fully convolutional networks were shown to be capable of producing accurate sO2 maps using the full extent of the spatial information contained within 3-D images generated under conditions mimicking real imaging scenarios. We demonstrate that the networks can cope with some of the confounding effects present in real images, such as limited-view artifacts, and have the potential to produce accurate estimates in vivo.
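
    A short sketch of how the reported error statistic could be computed, assuming vessel masks and true/predicted sO2 volumes are available per test example; the function name and toy data are illustrative, not the authors' evaluation code.

    # For each test example, average the predicted sO2 over the vessel voxels,
    # take the absolute difference from the true mean vessel sO2, then report the
    # mean and standard deviation of those differences across all examples.
    import numpy as np

    def vessel_so2_error(pred_so2, true_so2, vessel_masks):
        """All arguments: lists of 3D volumes, one entry per test example."""
        diffs = []
        for pred, true, mask in zip(pred_so2, true_so2, vessel_masks):
            diffs.append(abs(pred[mask].mean() - true[mask].mean()))
        diffs = np.array(diffs)
        return diffs.mean(), diffs.std()

    # Toy example with two volumes
    rng = np.random.default_rng(1)
    pred = [rng.random((8, 8, 8)) for _ in range(2)]
    true = [rng.random((8, 8, 8)) for _ in range(2)]
    mask = [rng.random((8, 8, 8)) > 0.7 for _ in range(2)]
    print(vessel_so2_error(pred, true, mask))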

    Photoacoustic Imaging, Feature Extraction, and Machine Learning Implementation for Ovarian and Colorectal Cancer Diagnosis

    Among all cancers of the female reproductive system, ovarian cancer has the highest mortality rate. Pelvic examination, transvaginal ultrasound (TVUS), and blood testing for cancer antigen 125 (CA-125) are the conventional screening tools for ovarian cancer, but they offer very low specificity. Other tools, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), also have limitations in detecting small lesions. In the USA, considering men and women separately, colorectal cancer is the third most common cause of cancer-related death; for men and women combined, it is the second leading cause of cancer deaths. It is estimated that 52,980 deaths due to this cancer will be recorded in 2021. The common screening tools for colorectal cancer diagnosis include colonoscopy, biopsy, endoscopic ultrasound (EUS), optical imaging, pelvic MRI, CT, and PET, all of which have specific limitations. In this dissertation, we first discuss in-vivo ovarian cancer diagnosis using our coregistered photoacoustic tomography and ultrasound (PAT/US) system. The application of this system to ex-vivo colorectal cancer diagnosis is also explored. Finally, we discuss the capability of our photoacoustic microscopy (PAM) system, complemented by machine learning algorithms, in distinguishing cancerous rectums from normal ones. The dissertation starts by discussing our low-cost phantom construction procedure for pre-clinical experiments and quantitative PAT; this phantom has ultrasound and photoacoustic properties similar to those of human tissue, making it a good candidate for photoacoustic imaging experiments. In-vivo ovarian cancer diagnosis using our PAT/US system is then discussed. We demonstrate the extraction of spectral, image, and functional features from our PAT data; these features are then used to distinguish malignant (n=12) from benign (n=27) ovaries, and an AUC of 0.93 is achieved using our developed SVM classifier. We then explain a sliding multi-pixel method to mitigate the effect of noise on the estimation of functional features from PAT data, tested on 13 malignant and 36 benign ovaries. Next, we demonstrate our two-step optimization method for unmixing the optical absorption (μa) of the tissue from the system response (C) and the Grüneisen parameter (Γ) in quantitative PAT (QPAT); using this method, we calculate the absorption coefficient and functional parameters of five blood tubes with sO2 values ranging from 24.9% to 97.6%. We then demonstrate the capability of our PAT/US system in monitoring colorectal cancer treatment as well as in classifying 13 malignant and 17 normal colon samples; using PAT features to distinguish these two types of samples, our classifier achieves an AUC of 0.93. Finally, we demonstrate the capability of our coregistered photoacoustic microscopy and ultrasound (PAM/US) system in distinguishing normal from malignant colorectal tissue, showing that a convolutional neural network (CNN) significantly outperforms the generalized regression model (GLM) in distinguishing these two types of lesions.
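
    The feature-classification step (extracted PAT features fed to an SVM and evaluated by ROC AUC) follows a standard scikit-learn pattern, sketched below. The feature matrix, class sizes, kernel, and cross-validation settings are placeholders, not the dissertation's classifier or data.

    # Generic sketch: classify extracted PAT features (spectral, image, functional)
    # with an SVM and report a cross-validated ROC AUC.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    # Placeholder feature matrix: rows are lesions, columns are extracted features.
    rng = np.random.default_rng(0)
    X = rng.random((39, 10))                       # e.g. 12 malignant + 27 benign cases
    y = np.array([1] * 12 + [0] * 27)              # 1 = malignant, 0 = benign

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    print("cross-validated AUC:", roc_auc_score(y, scores))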

    Vulnerable plaques and patients: state-of-the-art

    Despite advanced understanding of the biology of atherosclerosis, coronary heart disease remains the leading cause of death worldwide. Progress has been challenging, as half of the individuals who suffer sudden cardiac death do not experience premonitory symptoms. Furthermore, it is well recognized that even a plaque that does not cause a haemodynamically significant stenosis can trigger a sudden cardiac event, yet the majority of ruptured or eroded plaques remain clinically silent. In the 30 years since the term 'vulnerable plaque' was introduced, there have been major advances in the understanding of plaque pathogenesis and pathophysiology, shifting from pursuing features of 'vulnerability' in a specific lesion to the more comprehensive goal of identifying patient 'cardiovascular vulnerability'. It has also been recognized that, aside from the thin-capped, lipid-rich plaque associated with plaque rupture, acute coronary syndromes (ACS) are also caused by plaque erosion, which nowadays underlies between 25% and 60% of ACS, by calcified nodules, or by functional coronary alterations. While there have been advances in preventive strategies and in pharmacotherapy, with improved agents to reduce cholesterol, thrombosis, and inflammation, events continue to occur in patients receiving optimal medical treatment. Although at present the positive predictive value of imaging precursors of culprit plaques remains too low for clinical relevance, improving coronary plaque imaging may be instrumental in guiding pharmacotherapy intensity and could facilitate optimal allocation of novel, more aggressive, and costly treatment strategies. Recent technical and diagnostic advances justify the continuation of interdisciplinary research efforts to improve cardiovascular prognosis through both systemic and 'local' diagnostics and therapies. This state-of-the-art document aims to present and critically appraise the latest evidence, developments, and future perspectives in the detection, prevention, and treatment of 'high-risk' plaques occurring in 'vulnerable' patients.