
    Super-Resolution on Rotationally Scanned Photoacoustic Microscopy Images Incorporating Scanning Prior

    Photoacoustic Microscopy (PAM) images, integrating the advantages of optical contrast and acoustic resolution, have been widely used in brain studies. However, there exists a trade-off between scanning speed and image resolution. Compared with traditional raster scanning, rotational scanning provides good opportunities for fast PAM imaging by optimizing the scanning mechanism. Recently, there has been a trend to incorporate deep learning into the scanning process to further increase the scanning speed. Yet most such attempts target raster scanning, while those for rotational scanning are relatively rare. In this study, we propose a novel and well-performing super-resolution framework for rotational scanning-based PAM imaging. To eliminate displacements between adjacent rows caused by subject motion or high-frequency scanning distortion, we introduce a registration module across odd and even rows in the preprocessing stage and incorporate displacement degradation in the training. In addition, gradient-based patch selection is proposed to increase the probability of blood vessel patches being selected for training. A Transformer-based network with a global receptive field is applied for better performance. Experimental results on both synthetic and real datasets demonstrate, both quantitatively and qualitatively, the effectiveness and generalizability of the proposed framework for super-resolution of rotationally scanned PAM images. Code is available at https://github.com/11710615/PAMSR.git
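The abstract does not specify how the gradient-based patch selection is implemented; the sketch below shows one plausible reading of the idea, in which candidate training patches are ranked by mean gradient magnitude so that vessel-rich (high-contrast) patches are preferentially kept. All function names and parameters here are illustrative assumptions, not taken from the PAMSR code.

```python
import numpy as np

def gradient_score(patch):
    """Mean gradient magnitude of a patch (finite differences via np.gradient)."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def select_patches(image, patch_size=32, stride=32, keep_ratio=0.25):
    """Rank non-overlapping patches by gradient magnitude and keep the top
    fraction, so high-gradient (e.g. vessel-rich) patches dominate the set.
    Returns the (row, col) top-left coordinates of the selected patches."""
    h, w = image.shape
    coords = [(y, x)
              for y in range(0, h - patch_size + 1, stride)
              for x in range(0, w - patch_size + 1, stride)]
    scored = sorted(coords,
                    key=lambda c: gradient_score(
                        image[c[0]:c[0] + patch_size, c[1]:c[1] + patch_size]),
                    reverse=True)
    n_keep = max(1, int(len(scored) * keep_ratio))
    return scored[:n_keep]
```

In this reading, the selection is deterministic top-k ranking; a stochastic variant could instead sample patches with probability proportional to their gradient score.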

    Review of photoacoustic imaging plus X

    Photoacoustic imaging (PAI) is a novel biomedical imaging modality that combines rich optical contrast with the deep penetration of ultrasound. To date, PAI technology has found applications in various biomedical fields. In this review, we present an overview of the emerging research frontiers on PAI combined with other advanced technologies, termed PAI plus X, which includes but is not limited to PAI plus treatment, PAI plus new circuit design, PAI plus accurate positioning systems, PAI plus fast scanning systems, PAI plus novel ultrasound sensors, PAI plus advanced laser sources, PAI plus deep learning, and PAI plus other imaging modalities. We discuss each technology's current state, technical advantages, and application prospects, as reported mostly in the last three years. Lastly, we discuss and summarize the challenges and potential future work in the PAI plus X area.

    Four-dimensional computational ultrasound imaging of brain hemodynamics

    Four-dimensional ultrasound imaging of complex biological systems such as the brain is technically challenging because of the spatiotemporal sampling requirements. We present computational ultrasound imaging (cUSi), an imaging method that uses complex ultrasound fields that can be generated with simple hardware, together with a physical wave-prediction model, to alleviate the sampling constraints. cUSi allows for high-resolution four-dimensional imaging of brain hemodynamics in awake and anesthetized mice.

    Learning Tissue Geometries for Photoacoustic Image Analysis

    Photoacoustic imaging (PAI) holds great promise as a novel, non-ionizing imaging modality, providing insight into both morphological and physiological tissue properties, which are of particular importance in the diagnosis and therapy of various diseases, such as cancer and cardiovascular diseases. However, the estimation of physiological tissue properties with PAI requires the solution of two inverse problems, one of which, in particular, presents challenges in the form of inherent high dimensionality, potential ill-posedness, and non-linearity. Deep learning (DL) approaches show great potential to address these challenges but typically rely on simulated training data providing ground truth labels, as there are no gold standard methods to infer physiological properties in vivo. The current domain gap between simulated and real photoacoustic (PA) images results in poor in vivo performance and a lack of reliability of models trained with simulated data. Consequently, the estimates of these models occasionally fail to match clinical expectations. The work conducted within the scope of this thesis aimed to improve the applicability of DL approaches to PAI-based tissue parameter estimation by systematically exploring novel data-driven methods to enhance the realism of PA simulations (learning-to-simulate). This thesis is part of a larger research effort, where different factors contributing to PA image formation are disentangled and individually approached with data-driven methods. The specific research focus was placed on generating tissue geometries covering a variety of different tissue types and morphologies, which represent a key component in most PA simulation approaches.
Based on in vivo PA measurements (N = 288) obtained in a healthy volunteer study, three data-driven methods were investigated, leveraging (1) semantic segmentation, (2) Generative Adversarial Networks (GANs), and (3) scene graphs that encode prior knowledge about the general tissue composition of an image. The feasibility of all three approaches was successfully demonstrated. First, as a basis for the more advanced approaches, it was shown that tissue geometries can be automatically extracted from PA images through semantic segmentation, using two types of discriminative networks trained in a supervised fashion on manual reference annotations. While this method may replace manual annotation in the future, it does not allow the generation of arbitrarily many tissue geometries. In contrast, the GAN-based approach constitutes a generative model that can produce new tissue geometries closely following the training data distribution. The plausibility of the generated geometries was successfully demonstrated by comparing performance on a downstream quantification task. A generative model based on scene graphs was developed to gain a deeper understanding of important underlying geometric quantities. Unlike the GAN-based approach, it incorporates prior knowledge about the hierarchical composition of the modeled scene. Nevertheless, it allowed the generation of plausible tissue geometries and, in parallel, the explicit matching of the distributions of the generated and target geometric quantities. Training was performed either in analogy to the GAN approach, with target reference annotations, or directly with target PA images, circumventing the need for annotations. While this approach has so far been evaluated exclusively in silico, its inherent versatility presents a compelling prospect for generating tissue geometries from in vivo reference PA images.
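The scene-graph model itself is not described in this abstract; as a toy illustration of the general idea (a hierarchy of tissue structures carrying geometric priors, from which concrete geometries are sampled), a minimal sketch with hypothetical node and attribute names might look like this. The structure labels, depth ranges, and sampling scheme are all illustrative assumptions, not the thesis's actual model.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import random

@dataclass
class Node:
    """One tissue structure in the scene graph; children are structures
    that belong to it in the hierarchical scene composition."""
    label: str
    depth_range: Tuple[float, float]  # prior on depth in mm (hypothetical)
    children: List["Node"] = field(default_factory=list)

def sample_geometry(node, rng):
    """Recursively sample a concrete depth for every structure from its
    prior range, producing one plausible tissue geometry per call."""
    lo, hi = node.depth_range
    out = {"label": node.label, "depth_mm": round(rng.uniform(lo, hi), 2)}
    out["children"] = [sample_geometry(c, rng) for c in node.children]
    return out

# Hypothetical composition: a skin layer containing two blood vessels.
scene = Node("skin", (0.0, 1.5),
             [Node("vessel", (1.5, 10.0)),
              Node("vessel", (1.5, 10.0))])
```

Repeated calls to `sample_geometry` yield arbitrarily many geometries that all respect the encoded composition prior, which is the property that distinguishes this approach from a purely data-driven GAN.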
In summary, each of the three approaches for generating tissue geometries exhibits distinct strengths and limitations, making their suitability contingent upon the specific application at hand. By opening a new research direction in the form of learning-to-simulate approaches and significantly improving the realistic modeling of tissue geometries and, thus, ultimately, PA simulations, this work lays a crucial foundation for the future use of DL-based quantitative PAI in the clinical setting.