
    Model based learning for accelerated, limited-view 3D photoacoustic tomography

    Recent advances in deep learning for tomographic reconstruction have shown great potential to create accurate, high-quality images with a considerable speed-up. In this work we present a deep neural network specifically designed to provide high-resolution 3D images from restricted photoacoustic measurements. The network represents an iterative scheme and incorporates gradient information of the data fit to compensate for limited-view artefacts. Due to the high complexity of the photoacoustic forward operator, we separate the training from the computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung CT scans and then applied to in-vivo photoacoustic measurement data.
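    An iterative scheme with a gradient step on the data fit can be sketched in a few lines; this is a minimal NumPy illustration in which a random matrix stands in for the photoacoustic forward operator and a plain gradient step stands in for the trained update network (both hypothetical, not the paper's model):

    ```python
    import numpy as np

    # Hypothetical stand-ins: A replaces the photoacoustic forward operator,
    # learned_update replaces the trained network from the paper.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 15))        # toy forward operator: image -> data
    x_true = rng.standard_normal(15)
    y = A @ x_true                           # simulated measurements

    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for the data fit

    def learned_update(x, grad):
        """Plain gradient step; in the paper this update also applies a
        learned prior for the desired (vessel-like) image structures."""
        return x - step * grad

    x = np.zeros(15)
    for _ in range(3000):
        grad = A.T @ (A @ x - y)             # gradient of 0.5 * ||A x - y||^2
        x = learned_update(x, grad)

    residual = np.linalg.norm(A @ x - y)     # shrinks toward zero
    ```

    The point of learning the update is that far fewer iterations are needed than with such a plain gradient loop, which matters when each application of the true forward operator is expensive.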

    Simultaneous reconstruction of the initial pressure and sound speed in photoacoustic tomography using a deep-learning approach

    Photoacoustic tomography seeks to reconstruct an acoustic initial pressure distribution from measured ultrasound waveforms. Conventional methods assume a priori knowledge of the sound speed distribution, which in practice is unknown. One way to circumvent this issue is to reconstruct the initial pressure and the sound speed simultaneously. In this article, we develop a novel data-driven method that integrates an advanced deep neural network into a model-based iteration. The reconstructed image of the initial pressure is significantly improved in our numerical simulations.
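    The core idea of updating two unknowns against a shared data fit can be illustrated on a linearized toy problem; the forward matrix below, with one sensitivity column per unknown, is a hypothetical stand-in, not the acoustic model:

    ```python
    import numpy as np

    # Toy linearized joint inversion: measurements depend on both the initial
    # pressure p0 and a sound-speed perturbation dc (hypothetical stand-ins).
    rng = np.random.default_rng(1)
    F = rng.standard_normal((40, 2))         # columns: sensitivity to p0 and dc
    theta_true = np.array([1.5, 0.3])        # [p0, dc]
    y = F @ theta_true                       # simulated measurements

    theta = np.zeros(2)
    step = 1.0 / np.linalg.norm(F, 2) ** 2
    for _ in range(500):
        theta -= step * F.T @ (F @ theta - y)   # joint gradient step on the data fit
    ```

    The paper replaces such plain gradient steps with a trained network inside the model-based loop; the toy only shows that both unknowns can be driven by one residual.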

    Uncertainty Estimation using the Local Lipschitz for Deep Learning Image Reconstruction Models

    Supervised deep neural network approaches have been investigated for solving inverse problems in many domains, especially radiology, where imaging technologies are at the heart of diagnostics. In deployment, however, these models are exposed to input distributions that are widely shifted from the training data, due in part to data biases or drifts. It becomes crucial to know whether a given input lies outside the training data distribution before relying on the reconstruction for diagnosis. The goal of this work is three-fold: (i) demonstrate the use of the local Lipschitz value as an uncertainty-estimation threshold for determining suitable performance; (ii) provide a method for identifying out-of-distribution (OOD) images on which the model may not have generalized; and (iii) use local Lipschitz values to guide data augmentation by identifying false positives, thereby decreasing epistemic uncertainty. We provide results for both MRI reconstruction and CT sparse-view-to-full-view reconstruction using the AUTOMAP and UNET architectures, since it is pertinent in the medical domain that reconstructed images remain diagnostically accurate.
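    A local Lipschitz value of this kind can be estimated numerically by probing the model with small perturbations around an input; a minimal sketch, using a toy map in place of the trained AUTOMAP or UNET reconstruction network:

    ```python
    import numpy as np

    # Probe-based estimate of the local Lipschitz value of a map f at x:
    # L(x) ~ max_d ||f(x + d) - f(x)|| / ||d|| over small random d.
    rng = np.random.default_rng(2)
    W = rng.standard_normal((16, 16)) * 0.3

    def f(x):
        return np.tanh(W @ x)                # toy "reconstruction" network

    def local_lipschitz(f, x, eps=1e-3, n_probes=64):
        fx = f(x)
        best = 0.0
        for _ in range(n_probes):
            d = rng.standard_normal(x.shape)
            d *= eps / np.linalg.norm(d)     # perturbation of fixed small norm
            best = max(best, np.linalg.norm(f(x + d) - fx) / eps)
        return best

    x = rng.standard_normal(16)
    L = local_lipschitz(f, x)
    ```

    Inputs whose estimated L exceeds a threshold calibrated on training data can then be flagged as candidates for out-of-distribution behaviour.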

    Deep Learning Based Inversion of Locally Anisotropic Weld Properties from Ultrasonic Array Data

    The ability to reliably detect and characterise defects embedded in austenitic steel welds depends on prior knowledge of microstructural descriptors, such as the orientations of the weld’s locally anisotropic grain structure. These orientations are usually unknown, but it has recently been shown that they can be estimated from ultrasonic scattered wave data. However, conventional algorithms for solving this inverse problem incur a significant computational cost. In this paper, we propose a framework that uses deep neural networks (DNNs) to reconstruct crystallographic orientations in a welded material from ultrasonic travel-time data in real time. Acquiring the large amount of training data required for DNNs experimentally is practically infeasible for this problem, so a model-based training approach is investigated instead, in which a simple and efficient analytical method for modelling ultrasonic wave travel times through given weld geometries is implemented. The proposed method is validated by testing the trained networks on data arising from sophisticated finite element simulations of wave propagation through weld microstructures. The trained deep neural network predicts grain orientations to within 3° and in near real time (0.04 s), presenting a significant step towards realising real-time, accurate characterisation of weld microstructures from ultrasonic non-destructive measurements. The subsequent improvement in defect imaging is then demonstrated by using the DNN-predicted crystallographic orientations to correct the delay laws on which the total focusing method imaging algorithm is based. An improvement of up to 5.3 dB in the signal-to-noise ratio is achieved.
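    The delay laws mentioned above reduce, for each element-pixel pair, to a time of flight at the local sound speed; a simplified sketch assuming a homogeneous speed (the DNN-corrected version would make the speed path- and orientation-dependent; the array geometry and speed below are assumed values, not the paper's setup):

    ```python
    import numpy as np

    # Total focusing method (TFM) delay law, homogeneous-medium stand-in:
    # the delay for each element/pixel pair is the element->pixel travel time.
    c = 5900.0                               # m/s, nominal speed in steel (assumed)
    elements = np.stack([np.linspace(-0.02, 0.02, 32), np.zeros(32)], axis=1)
    pixel = np.array([0.005, 0.03])          # one image point (x, z) in metres

    tof = np.linalg.norm(elements - pixel, axis=1) / c   # one-way times of flight
    # TFM amplitude at the pixel: sum of A[tx, rx] sampled at tof[tx] + tof[rx]
    ```

    Correcting the delay laws with predicted grain orientations amounts to replacing these straight-ray, single-speed times of flight with travel times through the anisotropic weld map, which is what restores focusing quality and yields the reported SNR gain.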