13 research outputs found

    Neural network image reconstruction for magnetic particle imaging

    We investigate neural network image reconstruction for magnetic particle imaging. The network performance depends strongly on the convolution effects in the spectrum input data: the larger convolution effect that appears at relatively small nanoparticle sizes obstructs network training. The trained single-layer network reveals a weighting matrix consisting of basis vectors in the form of Chebyshev polynomials of the second kind. The weighting matrix corresponds to an inverse system matrix, where the incoherency of basis vectors due to low convolution effects, as well as the nonlinear activation function, plays a crucial role in retrieving the matrix elements. Test images are well reconstructed by trained networks having an inverse kernel matrix. We also confirm that a multi-layer network with one hidden layer improves the performance. A neural network architecture that overcomes the low incoherence of the inverse kernel through its classification property will become a better tool for image reconstruction.
    Comment: 9 pages, 11 figures
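
    The inverse-kernel idea above can be illustrated with a toy single-layer linear network trained to invert a known system matrix. The matrix values, learning rate, and training loop below are my own illustrative assumptions, not the paper's setup:

```python
import random

# Toy sketch: a single linear layer trained by stochastic gradient descent
# to act as the inverse of a known "system matrix" A, in the spirit of
# learning an inverse kernel for image reconstruction.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

random.seed(0)
n = 3
A = [[2.0, 0.5, 0.0], [0.5, 2.0, 0.5], [0.0, 0.5, 2.0]]  # well-conditioned toy system matrix
W = [[0.0] * n for _ in range(n)]                        # learned inverse (the weighting matrix)
lr = 0.05

for step in range(4000):
    x = [random.uniform(-1, 1) for _ in range(n)]  # random "image"
    s = matvec(A, x)                               # simulated spectrum input
    y = matvec(W, s)                               # network reconstruction
    for i in range(n):
        for j in range(n):
            W[i][j] -= lr * (y[i] - x[i]) * s[j]   # gradient of squared reconstruction error

# After training, W approximately inverts A:
x_test = [0.3, -0.7, 0.2]
x_hat = matvec(W, matvec(A, x_test))
err = max(abs(a - b) for a, b in zip(x_hat, x_test))
```

    A poorly conditioned A (the analogue of a strong convolution effect) makes the same training loop converge far more slowly, which mirrors the obstruction the abstract describes.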

    Multispectral Image Fusion using Deep Neural Network: A Novel Approach

    Multispectral imaging is widely used in various fields including remote sensing, medical imaging, and industrial inspection. However, due to the limitations of imaging sensors, multispectral images often suffer from low spatial resolution and poor spectral fidelity. To address these issues, we propose a novel method for multispectral image fusion based on a deep neural network (DNN). Our method involves building a training set of high-resolution and low-resolution image blocks, pre-training the network with an improved sparse denoising autoencoder, and fine-tuning the parameters of the DNN model. The proposed method is capable of preserving both the high spatial resolution and the spectral information of the multispectral image. Experimental results demonstrate that our method outperforms state-of-the-art methods in terms of both objective and subjective quality measures.
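
    The training-set construction described above can be sketched as pairing each high-resolution image block with a simulated low-resolution counterpart. The block size, degradation model (2x box filter), and function names below are my own illustrative choices, not the paper's:

```python
# Sketch of building paired (low-resolution, high-resolution) image blocks.
# The low-resolution block is simulated by 2x box down-sampling followed by
# nearest-neighbour up-sampling back to the original block size.

def degrade(block):
    """Simulate a low-resolution version of a square 2D block (2x box filter)."""
    n = len(block)
    low = [[0.0] * n for _ in range(n)]
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            mean = (block[i][j] + block[i][j + 1] + block[i + 1][j] + block[i + 1][j + 1]) / 4.0
            for di in (0, 1):
                for dj in (0, 1):
                    low[i + di][j + dj] = mean  # nearest-neighbour up-sampling
    return low

def make_pairs(image, size=4):
    """Slide a size x size window over the image and emit (low, high) block pairs."""
    pairs = []
    for r in range(0, len(image) - size + 1, size):
        for c in range(0, len(image[0]) - size + 1, size):
            high = [row[c:c + size] for row in image[r:r + size]]
            pairs.append((degrade(high), high))
    return pairs

image = [[float((r * 8 + c) % 16) for c in range(8)] for r in range(8)]
pairs = make_pairs(image)  # 4 non-overlapping 4x4 block pairs
```

    The autoencoder is then pre-trained to map the degraded block back to its sharp counterpart; the pairing step is what makes that supervision possible.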

    Deep Learning-assisted Accurate Defect Reconstruction Using Ultrasonic Guided Waves: A Deep Learning-based Method for Ultrasonic Guided Wave Defect Reconstruction

    Ultrasonic guided wave technology has played a significant role in the field of nondestructive testing owing to its advantages of high propagation efficiency and low energy consumption. At present, existing methods for structural defect detection and quantitative defect reconstruction by ultrasonic guided waves are mainly derived from guided wave scattering theory. However, given the high complexity of guided wave scattering problems, assumptions such as the Born approximation used to derive theoretical solutions lead to poor quality of the reconstructed results. Other methods, for example iterative optimization, improve the accuracy of reconstruction, but remarkably increase the time cost of detection. To address these issues, this paper proposes a novel approach to quantitative defect reconstruction based on the integration of a convolutional neural network with guided wave scattering theory. The neural network developed by this deep learning-assisted method can quantitatively predict the reconstruction of defects, reduce the theoretical model error, and eliminate the impact of noise pollution during inspection on the accuracy of the results. To demonstrate the advantage of the developed method, reconstructions of thinning defects in a plate have been examined. Results show that this approach achieves high efficiency and accuracy in reconstructing structural defects. In particular, for the reconstruction of a rectangular defect, the result obtained by the proposed method is nearly 200% more accurate than the solution given by the wavenumber-space transform method. For signals polluted with Gaussian noise at 15 dB, the proposed method improves the accuracy of defect reconstruction by 71% compared with the traditional wavenumber-space transform method.
    In practical applications, the integration of theoretical reconstruction models with the neural network technique can provide useful insight into the high-precision reconstruction of defects in the field of nondestructive testing.
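
    The wavenumber-space transform baseline mentioned above can be sketched as a Fourier-inversion step: under a Born-type approximation, the thinning profile is recovered by inverse-transforming the reflection coefficients sampled over wavenumber. The forward model, profile values, and sizes below are purely illustrative:

```python
import cmath
import math

# Sketch of an idealized wavenumber-space reconstruction: the forward model
# maps the defect depth profile to reflection coefficients via a DFT, and
# the reconstruction inverts that transform. In the noise-free, Born-valid
# case the round trip is exact; the paper's CNN targets the realistic case
# where these assumptions break down.

def reflection_coeffs(profile):
    """Forward model (Born-type): DFT of the defect depth profile."""
    n = len(profile)
    return [sum(profile[x] * cmath.exp(-2j * math.pi * k * x / n)
                for x in range(n)) for k in range(n)]

def wavenumber_reconstruction(coeffs):
    """Inverse DFT recovers the profile under the idealized assumptions."""
    n = len(coeffs)
    return [(sum(coeffs[k] * cmath.exp(2j * math.pi * k * x / n)
                 for k in range(n)) / n).real for x in range(n)]

profile = [0.0, 0.0, 0.3, 0.5, 0.5, 0.3, 0.0, 0.0]  # rectangular-ish thinning defect
recon = wavenumber_reconstruction(reflection_coeffs(profile))
```

    The CNN in the paper effectively learns the correction between this idealized inversion and the true scattering physics, which is where the reported accuracy gains come from.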

    Deep Unfolding with Normalizing Flow Priors for Inverse Problems

    Many application domains, spanning from computational photography to medical imaging, require the recovery of high-fidelity images from noisy, incomplete, or partial/compressed measurements. State-of-the-art methods for solving these inverse problems combine deep learning with iterative model-based solvers, a concept known as deep algorithm unfolding. By combining a priori knowledge of the forward measurement model with learned (proximal) mappings based on deep networks, these methods yield solutions that are both physically feasible (data-consistent) and perceptually plausible. However, current proximal mappings only implicitly learn such image priors. In this paper, we propose to make these image priors fully explicit by embedding deep generative models, in the form of normalizing flows, within the unfolded proximal gradient algorithm. We demonstrate that the proposed method outperforms competitive baselines on various image recovery tasks, spanning from image denoising to inpainting and deblurring.
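
    The unfolded proximal gradient iteration has the generic form x ← prox(x − η·Aᵀ(Ax − y)). As a minimal sketch, the snippet below uses a fixed soft-thresholding proximal step (a classical sparsity prior) standing in for the learned normalizing-flow prior; the matrix, step size, and iteration count are my own toy choices:

```python
# One unfolded proximal-gradient loop: gradient step on the data term
# ||Ax - y||^2, followed by a proximal step. Here the prox is soft
# thresholding (an L1 prior) as a stand-in for a learned prior.

def soft_threshold(v, t):
    return [max(abs(u) - t, 0.0) * (1 if u >= 0 else -1) for u in v]

def unfolded_pgd(A, y, eta=0.25, thresh=0.001, iters=800):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i]
             for i in range(len(A))]                              # residual A x - y
        g = [sum(A[i][j] * r[i] for i in range(len(A)))
             for j in range(n)]                                   # gradient A^T r
        x = soft_threshold([x[j] - eta * g[j] for j in range(n)],
                           eta * thresh)                          # proximal step
    return x

A = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [0.5, 0.5, 1.0]]
x_true = [1.0, 0.0, -0.5]
y = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(3)]
x_hat = unfolded_pgd(A, y)
```

    In the paper's method, each soft-threshold call would be replaced by a proximal mapping defined through the normalizing flow, making the image prior explicit rather than implicit.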

    Uncertainty Estimation using the Local Lipschitz for Deep Learning Image Reconstruction Models

    The use of supervised deep neural network approaches has been investigated for solving inverse problems across many domains, especially radiology, where imaging technologies are at the heart of diagnostics. In deployment, however, these models are exposed to input distributions that are widely shifted from the training data, due in part to data biases or drifts. It therefore becomes crucial to know whether a given input lies outside the training data distribution before relying on the reconstruction for diagnosis. The goal of this work is three-fold: (i) demonstrate the use of the local Lipschitz value as an uncertainty estimation threshold for determining suitable performance, (ii) provide a method for identifying out-of-distribution (OOD) images on which the model may not have generalized, and (iii) use the local Lipschitz values to guide proper data augmentation by identifying false positives and decreasing epistemic uncertainty. We provide results for both MRI reconstruction and sparse-view to full-view CT reconstruction using the AUTOMAP and UNET architectures, since it is pertinent in the medical domain that reconstructed images remain diagnostically accurate.
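
    A local Lipschitz value around an input can be estimated by probing the model with small perturbations and taking the largest output-to-input change ratio. The sampling scheme, perturbation size, and threshold usage below are my own illustration, not the paper's implementation:

```python
import math
import random

# Estimate the local Lipschitz value of a model f near an input x by
# sampling small random perturbations of fixed norm and recording the
# largest ratio ||f(x + d) - f(x)|| / ||d||. A threshold on this value
# can then flag inputs where the model may be unreliable.

def local_lipschitz(f, x, eps=1e-3, samples=64, seed=0):
    rng = random.Random(seed)
    fx = f(x)
    best = 0.0
    for _ in range(samples):
        d = [rng.gauss(0, 1) for _ in x]
        nd = math.sqrt(sum(v * v for v in d))
        d = [eps * v / nd for v in d]                      # perturbation of norm eps
        fy = f([a + b for a, b in zip(x, d)])
        num = math.sqrt(sum((a - b) ** 2 for a, b in zip(fy, fx)))
        best = max(best, num / eps)
    return best

# For a linear map the estimate is bounded by its largest singular value (2.0 here).
f = lambda x: [2.0 * x[0], 0.5 * x[1]]
L = local_lipschitz(f, [1.0, -1.0])
```

    In practice f would be the trained reconstruction network, and inputs whose estimated value exceeds a calibrated threshold would be treated as potentially out-of-distribution.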

    Deep Convolutional Neural Network for Inverse Problems in Imaging

    In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise nonlinearity) when the normal operator of the forward model, H*H (where H* is the adjoint of the forward imaging operator H), is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) of parallel-beam X-ray computed tomography on synthetic phantoms as well as on real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on the GPU.
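
    The "direct inversion followed by a CNN" pipeline can be sketched in one dimension: the forward model is a circular blur (a convolutional normal operator), the direct inversion is exact Fourier-domain division, and a tiny fixed smoothing filter stands in for the multiresolution residual CNN. The kernel, signal, and sizes below are illustrative assumptions only:

```python
import cmath
import math

# Toy pipeline: measurement -> direct inversion -> residual correction.

def dft(v, sign):
    n = len(v)
    return [sum(v[x] * cmath.exp(sign * 2j * math.pi * k * x / n)
                for x in range(n)) for k in range(n)]

def blur(x, kernel):
    """Circular convolution of signal x with a short kernel (forward model)."""
    n = len(x)
    return [sum(kernel[m] * x[(i - m) % n] for m in range(len(kernel)))
            for i in range(n)]

def direct_inversion(y, kernel, n):
    """Exact Fourier-domain deconvolution (the physics-based direct step)."""
    K = dft(kernel + [0.0] * (n - len(kernel)), -1)  # transfer function
    Y = dft(y, -1)
    X = [Y[k] / K[k] for k in range(n)]
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * i / n)
                 for k in range(n)) / n).real for i in range(n)]

def residual_correction(x):
    """Stand-in for the learned residual CNN: blend with a 3-tap smoothing."""
    n = len(x)
    smooth = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3.0 for i in range(n)]
    return [0.5 * (a + b) for a, b in zip(x, smooth)]

kernel = [0.5, 0.3, 0.2]
x_true = [math.sin(2 * math.pi * i / 8) for i in range(8)]
y = blur(x_true, kernel)
x_direct = direct_inversion(y, kernel, 8)   # exact here because y is noise-free
x_refined = residual_correction(x_direct)
```

    In the ill-posed setting the paper targets (e.g. sparse-view CT), the direct step amplifies noise and introduces structured artifacts, and the trained CNN replaces the fixed filter to remove them while preserving image structure.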