
    VesselVAE: Recursive Variational Autoencoders for 3D Blood Vessel Synthesis

    We present a data-driven generative framework for synthesizing 3D blood vessel geometry. This is a challenging task due to the complexity of vascular systems, which vary widely in shape, size, and structure. Existing model-based methods provide some degree of control and variation in the structures produced, but fail to capture the diversity of actual anatomical data. We developed VesselVAE, a recursive variational neural network that fully exploits the hierarchical organization of the vessel and learns a low-dimensional manifold encoding branch connectivity along with geometry features describing the target surface. After training, the VesselVAE latent space can be sampled to generate new vessel geometries. To the best of our knowledge, this work is the first to utilize this technique for synthesizing blood vessels. We achieve similarity scores between synthetic and real data of 0.97 for radius, 0.95 for length, and 0.96 for tortuosity. By leveraging the power of deep neural networks, we generate 3D models of blood vessels that are both accurate and diverse, which is crucial for medical and surgical training, hemodynamic simulations, and many other purposes. Keywords: Vascular 3D model
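    The abstract does not include code, so the following is only a minimal sketch of the general idea of a recursive VAE over binary vessel trees: each subtree is folded bottom-up into a fixed-size code, and decoding walks top-down, predicting per-branch geometry and whether children exist. All names and sizes (Node, RecursiveVesselVAE, feat_dim, the depth cap) are hypothetical; the paper's actual architecture differs in its details.

```python
import torch
import torch.nn as nn

class Node:
    """Binary-tree vessel node: per-branch geometry features plus up to two children."""
    def __init__(self, feats, left=None, right=None):
        self.feats, self.left, self.right = feats, left, right

class RecursiveVesselVAE(nn.Module):
    def __init__(self, feat_dim=4, hid=64, z_dim=16):
        super().__init__()
        self.hid = hid
        # Encoder folds [node features, left code, right code] into one code.
        self.enc = nn.Sequential(nn.Linear(feat_dim + 2 * hid, hid), nn.Tanh())
        self.mu, self.logvar = nn.Linear(hid, z_dim), nn.Linear(hid, z_dim)
        # Decoder unfolds a hidden state into geometry, child-existence logits,
        # and hidden states for the two potential children.
        self.dec_root = nn.Linear(z_dim, hid)
        self.dec_feats = nn.Linear(hid, feat_dim)
        self.dec_children = nn.Linear(hid, 2)
        self.dec_split = nn.Linear(hid, 2 * hid)

    def encode(self, node):
        # Bottom-up recursion; absent children contribute zero codes.
        left = self.encode(node.left) if node.left else torch.zeros(self.hid)
        right = self.encode(node.right) if node.right else torch.zeros(self.hid)
        return self.enc(torch.cat([node.feats, left, right]))

    def decode(self, h, depth=0, max_depth=12):
        # Top-down recursion; child-existence gates control the tree topology.
        node = Node(self.dec_feats(h))
        exist = torch.sigmoid(self.dec_children(h))
        hl, hr = torch.tanh(self.dec_split(h)).chunk(2)
        if depth < max_depth and exist[0] > 0.5:
            node.left = self.decode(hl, depth + 1, max_depth)
        if depth < max_depth and exist[1] > 0.5:
            node.right = self.decode(hr, depth + 1, max_depth)
        return node

    def forward(self, root):
        h = self.encode(root)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.decode(torch.tanh(self.dec_root(z))), mu, logvar

# Sampling a new vessel tree: draw z ~ N(0, I) and decode.
model = RecursiveVesselVAE()
tree = model.decode(torch.tanh(model.dec_root(torch.randn(16))))
```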

    Learned Local Attention Maps for Synthesising Vessel Segmentations

    Magnetic resonance angiography (MRA) is an imaging modality for visualising blood vessels. It is useful for several diagnostic applications and for assessing the risk of adverse events such as haemorrhagic stroke (resulting from the rupture of aneurysms in blood vessels). However, MRAs are not acquired routinely; hence, an approach to synthesise blood vessel segmentations from more routinely acquired MR contrasts, such as T1 and T2, would be useful. We present an encoder-decoder model for synthesising segmentations of the main cerebral arteries in the circle of Willis (CoW) from only T2 MRI. We propose a two-phase multi-objective learning approach, which captures both global and local features. It uses learned local attention maps generated by dilating the segmentation labels, which forces the network to extract only the information from the T2 MRI relevant to synthesising the CoW. Our synthetic vessel segmentations generated from only T2 MRI achieved a mean Dice score of 0.79 ± 0.03 in testing, compared to state-of-the-art segmentation networks such as transformer U-Net (0.71 ± 0.04) and nnU-Net (0.68 ± 0.05), while using only a fraction of the parameters. The main qualitative difference between our synthetic vessel segmentations and the comparative models was the sharper resolution of the CoW vessel segments, especially in the posterior circulation.
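    A rough illustration of the local-attention-map idea: dilating a binary vessel label yields a spatial mask that restricts which parts of the T2 image can contribute. This is only a sketch under assumptions; the function name, the dilation radius, and the exact way the mask gates the network are not specified by the abstract.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def local_attention_map(seg_label: np.ndarray, radius: int = 5) -> np.ndarray:
    """Dilate a binary vessel segmentation into a soft spatial mask.

    seg_label: binary array (H, W) or (D, H, W) marking CoW vessels.
    radius: number of dilation iterations, controlling how much surrounding
            T2 context is kept (assumed hyperparameter).
    """
    mask = binary_dilation(seg_label.astype(bool), iterations=radius)
    return mask.astype(np.float32)

# During training, the attention map could gate the T2 input (or an
# intermediate feature map), so learning focuses on voxels near true vessels:
t2 = np.random.rand(128, 128).astype(np.float32)   # stand-in T2 slice
label = np.zeros((128, 128), dtype=np.uint8)
label[60:68, 20:100] = 1                           # stand-in vessel label
attended_input = t2 * local_attention_map(label)
```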

    Learning Tissue Geometries for Photoacoustic Image Analysis

    Photoacoustic imaging (PAI) holds great promise as a novel, non-ionizing imaging modality, allowing insight into both morphological and physiological tissue properties, which are of particular importance in the diagnostics and therapy of various diseases, such as cancer and cardiovascular disease. However, the estimation of physiological tissue properties with PAI requires the solution of two inverse problems, one of which presents particular challenges in the form of inherent high dimensionality, potential ill-posedness, and non-linearity. Deep learning (DL) approaches show great potential to address these challenges but typically rely on simulated training data providing ground-truth labels, as there are no gold-standard methods to infer physiological properties in vivo. The current domain gap between simulated and real photoacoustic (PA) images results in poor in vivo performance and a lack of reliability of models trained with simulated data; consequently, the estimates of these models occasionally fail to match clinical expectations.

    The work conducted within the scope of this thesis aimed to improve the applicability of DL approaches to PAI-based tissue parameter estimation by systematically exploring novel data-driven methods to enhance the realism of PA simulations (learning-to-simulate). This thesis is part of a larger research effort in which different factors contributing to PA image formation are disentangled and individually approached with data-driven methods. The specific research focus was placed on generating tissue geometries covering a variety of tissue types and morphologies, which represent a key component in most PA simulation approaches. Based on in vivo PA measurements (N = 288) obtained in a healthy volunteer study, three data-driven methods were investigated, leveraging (1) semantic segmentation, (2) Generative Adversarial Networks (GANs), and (3) scene graphs that encode prior knowledge about the general tissue composition of an image. The feasibility of all three approaches was successfully demonstrated.

    First, as a basis for the more advanced approaches, it was shown that tissue geometries can be automatically extracted from PA images through semantic segmentation with two types of discriminative networks and supervised training with manual reference annotations. While this method may replace manual annotation in the future, it does not allow the generation of arbitrarily many tissue geometries. In contrast, the GAN-based approach constitutes a generative model that allows the generation of new tissue geometries that closely follow the training data distribution. The plausibility of the generated geometries was successfully demonstrated in a comparative assessment of the performance of a downstream quantification task. A generative model based on scene graphs was developed to gain a deeper understanding of the important underlying geometric quantities. Unlike the GAN-based approach, it incorporates prior knowledge about the hierarchical composition of the modeled scene; it likewise allowed the generation of plausible tissue geometries and, in parallel, the explicit matching of the distributions of the generated and the target geometric quantities. The training was performed either in analogy to the GAN approach, with target reference annotations, or directly with target PA images, circumventing the need for annotations.

    While this approach has so far been conducted exclusively in silico, its inherent versatility presents a compelling prospect for the generation of tissue geometries from in vivo reference PA images. In summary, each of the three approaches for generating tissue geometry exhibits distinct strengths and limitations, making their suitability contingent upon the specific application at hand. By opening a new research direction in the form of learning-to-simulate approaches and significantly improving the realistic modeling of tissue geometries, and thus ultimately PA simulations, this work lays a crucial foundation for the future use of DL-based quantitative PAI in the clinical setting.
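    As one concrete illustration of the GAN-based route described above, the sketch below shows a small DCGAN-style generator mapping a latent vector to a per-pixel tissue-class map. The class count, layer sizes, and 32x32 output resolution are assumptions for illustration, not the thesis's actual configuration, and the discriminator and training loop are omitted.

```python
import torch
import torch.nn as nn

# Assumed classes in a simplified photoacoustic tissue-geometry map.
N_CLASSES = 4  # e.g. background, skin, vessel, other tissue

class GeometryGenerator(nn.Module):
    """Maps a latent vector to per-pixel tissue-class probabilities (DCGAN-style sketch)."""
    def __init__(self, z_dim=64, base=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base, 4, 1, 0),          # -> 4x4
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, base // 2, 4, 2, 1),      # -> 8x8
            nn.BatchNorm2d(base // 2), nn.ReLU(True),
            nn.ConvTranspose2d(base // 2, base // 4, 4, 2, 1), # -> 16x16
            nn.BatchNorm2d(base // 4), nn.ReLU(True),
            nn.ConvTranspose2d(base // 4, N_CLASSES, 4, 2, 1), # -> 32x32 logits
        )

    def forward(self, z):
        logits = self.net(z.view(z.size(0), -1, 1, 1))
        return logits.softmax(dim=1)  # per-pixel tissue-class probabilities

# After (hypothetical) adversarial training, new geometries are sampled freely:
gen = GeometryGenerator()
maps = gen(torch.randn(8, 64))   # (8, N_CLASSES, 32, 32)
labels = maps.argmax(dim=1)      # discrete tissue-geometry label maps
```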

    Denoising Diffusion Probabilistic Model for Retinal Image Generation and Segmentation

    Experts use retinal images and vessel trees to detect and diagnose various eye, blood circulation, and brain-related diseases. However, manual segmentation of retinal images is a time-consuming process that requires high expertise and is further complicated by privacy issues. Many methods have been proposed to segment images, but the need for large retinal image datasets limits the performance of these methods. Several methods synthesize data with deep learning models based on Generative Adversarial Networks (GANs), but these generate only limited sample variety. This paper proposes a novel Denoising Diffusion Probabilistic Model (DDPM), a class of model that has outperformed GANs in image synthesis. We developed a Retinal Trees (ReTree) dataset consisting of retinal images and corresponding vessel trees, together with a segmentation network based on the DDPM, trained with images from the ReTree dataset. Our DDPM operates in two stages: in the first stage, it generates vessel trees from random numbers drawn from a standard normal distribution; in the second, it is guided to generate fundus images from the given vessel trees and a random distribution. The proposed dataset has been evaluated quantitatively and qualitatively. Quantitative evaluation metrics include the Frechet Inception Distance (FID) score, Jaccard similarity coefficient, Cohen's kappa, Matthews Correlation Coefficient (MCC), precision, recall, F1-score, and accuracy. We trained the vessel segmentation model with synthetic data to validate our dataset's efficiency and tested it on authentic data. Our dataset and source code are available at https://github.com/AAleka/retree. Comment: International Conference on Computational Photography 2023 (ICCP 2023)
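    To make the DDPM component concrete, here is a minimal sketch of the standard epsilon-prediction training step (noise the clean image to a random timestep, then predict the injected noise). The schedule values, TinyEps stand-in network, and image sizes are assumptions; the paper's two-stage, vessel-tree-conditioned setup is only indicated in comments.

```python
import torch
import torch.nn as nn

# Linear noise schedule over T steps (assumed hyperparameters).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product \bar{alpha}_t

def ddpm_loss(eps_model: nn.Module, x0: torch.Tensor) -> torch.Tensor:
    """Epsilon-prediction objective: sample t, form x_t ~ q(x_t | x_0), predict eps."""
    b = x0.size(0)
    t = torch.randint(0, T, (b,))
    a_bar = alphas_bar[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps
    return ((eps_model(x_t, t) - eps) ** 2).mean()

class TinyEps(nn.Module):
    """Stand-in noise predictor; a real model would be a time-conditioned U-Net.
    In the paper's second stage it would additionally be conditioned on the
    generated vessel tree to guide fundus image synthesis."""
    def __init__(self, ch=1):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x, t):
        return self.conv(x)

# One training step on a stand-in batch of vessel-tree images:
loss = ddpm_loss(TinyEps(), torch.randn(4, 1, 32, 32))
loss.backward()
```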