
    Efficient binary reconstruction for non destructive evaluation using gammagraphy

    The localization and sizing of 3D flaws within a homogeneous metallic medium is a major task in non-destructive evaluation (NDE). This paper addresses the problem of reconstructing such flaws using an efficient binary algorithm. The method rests on the fact that a simple binary constraint suffices for accurate and robust reconstruction in the context of NDE. A computationally attractive heuristic minimization is designed to provide fast reconstructions. The proposed algorithm is compared with standard binary (the iterated conditional modes algorithm) and non-binary (a penalized approach with convex-potential Gibbs random fields) reconstruction techniques.
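
    For reference, the iterated conditional modes (ICM) baseline mentioned above can be written in a few lines. The sketch below is not the paper's heuristic algorithm: it assumes a linear projection model with system matrix A, noisy projections y, a binary image x in {0, 1}^N and an Ising-type smoothness term, with all names and parameters illustrative.

        import numpy as np

        def icm_binary(A, y, shape, beta=1.0, n_sweeps=20):
            """Iterated conditional modes for binary reconstruction (illustrative sketch).

            Energy: ||y - A x||^2 + beta * (# of disagreeing 4-neighbour pairs), x in {0, 1}.
            """
            n_rows, n_cols = shape
            x = np.zeros(n_rows * n_cols)              # start from an all-background image
            r = y - A @ x                              # residual, updated incrementally

            def neighbours(i):
                row, col = divmod(i, n_cols)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = row + dr, col + dc
                    if 0 <= rr < n_rows and 0 <= cc < n_cols:
                        yield rr * n_cols + cc

            for _ in range(n_sweeps):
                changed = False
                for i in range(x.size):
                    delta = 1.0 - 2.0 * x[i]           # flip: 0 -> 1 gives +1, 1 -> 0 gives -1
                    a_i = A[:, i]
                    d_data = np.sum((r - delta * a_i) ** 2) - np.sum(r ** 2)
                    d_prior = beta * sum(int(1.0 - x[i] != x[j]) - int(x[i] != x[j])
                                         for j in neighbours(i))
                    if d_data + d_prior < 0:           # flipping pixel i lowers the energy
                        x[i] += delta
                        r -= delta * a_i
                        changed = True
                if not changed:                        # no pixel changed: local minimum reached
                    break
            return x.reshape(shape)

    Each candidate flip is accepted only if it lowers the combined data-misfit plus smoothness energy, so ICM is fast but, as is well known, can stop in a local minimum.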

    Unsupervised Knowledge-Transfer for Learned Image Reconstruction

    Deep learning-based image reconstruction approaches have demonstrated impressive empirical performance in many imaging modalities. These approaches generally require a large amount of high-quality training data, which is often not available. To circumvent this issue, we develop a novel unsupervised knowledge-transfer paradigm for learned iterative reconstruction within a Bayesian framework. The proposed approach learns an iterative reconstruction network in two phases. The first phase trains a reconstruction network on a set of ordered pairs comprising ground-truth images and measurement data. The second phase fine-tunes the pretrained network to the measurement data without supervision. Furthermore, the framework delivers uncertainty information over the reconstructed image. We present extensive experimental results on low-dose and sparse-view computed tomography, showing that the proposed framework significantly improves reconstruction quality not only visually but also quantitatively in terms of PSNR and SSIM, and is competitive with several state-of-the-art supervised and unsupervised reconstruction techniques.
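
    As a rough illustration of the two-phase procedure, the sketch below pretrains a network on paired data and then fine-tunes it on the target measurement with a simple data-consistency loss. It is a simplified, deterministic stand-in for the paper's Bayesian treatment (no uncertainty estimates), and net, forward_op and paired_loader are placeholder names.

        import torch

        def pretrain(net, paired_loader, epochs=10, lr=1e-4):
            # Phase 1: supervised training on (measurement, ground-truth) pairs.
            opt = torch.optim.Adam(net.parameters(), lr=lr)
            for _ in range(epochs):
                for y, x_true in paired_loader:
                    loss = torch.mean((net(y) - x_true) ** 2)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
            return net

        def finetune_unsupervised(net, y, forward_op, steps=200, lr=1e-5):
            # Phase 2: adapt the pretrained network to the target measurement alone,
            # here via a data-consistency loss -- no ground truth is involved.
            opt = torch.optim.Adam(net.parameters(), lr=lr)
            for _ in range(steps):
                x_hat = net(y)
                loss = torch.mean((forward_op(x_hat) - y) ** 2)
                opt.zero_grad()
                loss.backward()
                opt.step()
            return net(y).detach()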

    Review on electrical impedance tomography: Artificial intelligence methods and its applications

    Electrical impedance tomography (EIT) has been a hot topic among researchers for the last 30 years. It is a relatively new imaging method that has evolved over the last few decades. A small current is injected, the resulting voltages are measured, and the electrical properties of the tissue are determined; a reconstruction algorithm then transforms these voltages into a tomographic image. EIT has no identified hazards and is also cheaper than imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). This paper presents a comprehensive review of recent efforts and advances to improve this technology and of the role of artificial intelligence in solving this non-linear, ill-posed problem. In addition, a review of clinical applications of EIT is presented.

    Disparity and Optical Flow Partitioning Using Extended Potts Priors

    This paper addresses the problems of disparity and optical flow partitioning based on the brightness invariance assumption. We investigate new variational approaches to these problems with Potts priors and possibly box constraints. For optical flow partitioning, our model includes vector-valued data and an adapted Potts regularizer. Using the notion of asymptotically level stable functions, we prove the existence of global minimizers of our functionals. We propose a modified alternating direction method of multipliers. This iterative algorithm requires the computation of global minimizers of classical univariate Potts problems, which can be done efficiently by dynamic programming. We prove that the algorithm converges both for the constrained and unconstrained problems. Numerical examples demonstrate the very good performance of our partitioning method.
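
    The univariate Potts subproblems mentioned above can be solved exactly by the classical dynamic program. The following sketch handles scalar data with a squared-error data term (the vector-valued case works analogously on per-segment means); gamma denotes the jump penalty and the function names are illustrative.

        import numpy as np

        def potts_1d(f, gamma):
            """Exact minimizer of the univariate Potts functional
               gamma * (#jumps) + sum_i (u_i - f_i)^2
            via the classical O(n^2) dynamic program."""
            n = len(f)
            c1 = np.concatenate(([0.0], np.cumsum(f)))              # prefix sums of f
            c2 = np.concatenate(([0.0], np.cumsum(np.square(f))))   # prefix sums of f^2

            def seg_err(l, r):      # SSE of the segment f_l..f_r (1-based) around its mean
                m = r - l + 1
                s = c1[r] - c1[l - 1]
                return (c2[r] - c2[l - 1]) - s * s / m

            B = np.empty(n + 1)                     # B[r] = optimal energy for the first r samples
            B[0] = -gamma                           # so a single segment carries no jump penalty
            jump = np.zeros(n + 1, dtype=int)       # last segment of the optimum starts at jump[r] + 1
            for r in range(1, n + 1):
                best, arg = np.inf, 0
                for l in range(1, r + 1):
                    val = B[l - 1] + gamma + seg_err(l, r)
                    if val < best:
                        best, arg = val, l - 1
                B[r], jump[r] = best, arg

            u = np.empty(n)                         # backtrack: fill each segment with its mean
            r = n
            while r > 0:
                l = jump[r] + 1
                u[l - 1:r] = (c1[r] - c1[l - 1]) / (r - l + 1)
                r = jump[r]
            return u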

    Statistical Fusion of Scientific Images

    A practical and important class of scientific images are the 2D/3D images obtained from porous materials such as concretes, bone, active carbon, and glass. These materials constitute an important class of heterogeneous media possessing a complicated microstructure that is difficult to describe qualitatively. However, they are not totally random: there is a mixture of organization and randomness that makes them difficult to characterize and study. In order to study different properties of porous materials, 2D/3D high-resolution samples are required. But obtaining high-resolution samples usually requires cutting, polishing, and exposure to air, all of which affect the properties of the sample. Moreover, 3D samples obtained by Magnetic Resonance Imaging (MRI) are very low resolution and noisy. Therefore, artificial samples of porous media need to be generated through a porous media reconstruction process. Recent contributions to the reconstruction task are either based only on a prior model, learned from statistical features of real high-resolution training data, from which samples are generated, or based on a prior model together with the measurements. The main objective of this thesis is to come up with a statistical data-fusion framework by which different images of porous materials at different resolutions and modalities are combined in order to generate artificial samples of porous media with enhanced resolution. Current super-resolution, multi-resolution and registration methods in image processing fail to provide a general framework for porous media reconstruction, since they are usually based on finding an estimate rather than a typical sample, and on having images of the same scene -- which is not the case for porous media images. The statistical fusion approach proposed here is based on a Bayesian framework in which a prior model learned from high-resolution samples is combined with a measurement model defined from the low-resolution, coarse-scale information, to arrive at a posterior model. We define a measurement model, in both non-hierarchical and hierarchical image modeling frameworks, which describes how the low-resolution information is asserted in the posterior model. We then propose a posterior sampling approach by which 2D posterior samples of porous media are generated from the posterior model. A more general framework proposed here asserts constraints other than the measurement in the model, together with a constrained sampling strategy based on simulated annealing to generate artificial samples.
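
    One way to picture the constrained sampling strategy is a Metropolis-style annealer over binary pore/solid images whose energy combines a prior term with a penalty for violating the low-resolution measurement. The sketch below is only illustrative: it recomputes the energy from scratch at every proposal for clarity (a practical version would update it incrementally), and prior_energy and downsample are placeholder callables.

        import numpy as np

        def constrained_anneal(x0, y_low, prior_energy, downsample, lam=10.0,
                               t_start=1.0, t_end=0.01, n_sweeps=200, rng=None):
            """Simulated annealing over binary images with a low-resolution constraint.

            Total energy: E(x) = prior_energy(x) + lam * ||downsample(x) - y_low||^2.
            """
            rng = np.random.default_rng() if rng is None else rng
            x = x0.copy()
            energy = prior_energy(x) + lam * np.sum((downsample(x) - y_low) ** 2)
            for t in np.geomspace(t_start, t_end, n_sweeps):      # cooling schedule
                for _ in range(x.size):                           # one sweep = x.size proposals
                    i, j = rng.integers(x.shape[0]), rng.integers(x.shape[1])
                    x[i, j] = 1 - x[i, j]                         # propose a single-pixel flip
                    new_energy = prior_energy(x) + lam * np.sum((downsample(x) - y_low) ** 2)
                    if new_energy <= energy or rng.random() < np.exp((energy - new_energy) / t):
                        energy = new_energy                       # accept the flip
                    else:
                        x[i, j] = 1 - x[i, j]                     # reject: undo the flip
            return x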

    Finite-frequency tomography with complex body waves

    Seismic tomography is the most impressive and intuitive method of inferring a picture of the deep interior of the Earth, from the lower crust to the core-mantle boundary. Recordings of ground motions caused by distant earthquakes are compared with those predicted for a simple Earth model in order to refine that model. The resulting three-dimensional models and images can be interpreted in terms of tectonics and large-scale geodynamics. The increase in computing power over the last decade has led to enormous progress in tomographic methods, which can now simulate, and therefore exploit, the whole frequency range of seismographic measurements. This thesis refines waveform tomography in its flavour of finite-frequency tomography. It first shows that complex wave types, such as the triplicated waves scattered in the mantle transition zone between 410 and 660 km depth, can be used for waveform tomography. Using these waves promises a considerably better resolution of the geodynamically important transition-zone discontinuities than the teleseismic waves used so far. A second part examines the nonlinear influence of the source model on waveform tomography. Using Bayesian inference, probability density functions of the source parameters, such as depth, moment tensor and source time function, are determined. For that, a model of the measurement and modelling uncertainties in source inversion is needed; such a model was hitherto not available and is derived here from a large catalogue of source solutions. The results of the probabilistic source inversion show that source uncertainty is a nonlinear and largely ignored source of error in seismic tomography, and they make it possible to estimate the variance of seismic travel-time and waveform measurements as well as the covariance between different seismographic stations. The results of this work could substantially improve uncertainty estimation in seismic tomography, reveal potential artifacts in the results, and thereby help avoid geological misinterpretation of tomographic images.

    Enhancement of noisy planar nuclear medicine images using mean field annealing

    Nuclear Medicine (NM) images inherently suffer from large amounts of noise and blur. The purpose of this research is to reduce the noise and blur while maintaining image integrity for improved diagnosis. The proposal is to further improve image quality after the standard pre- and post-processing undertaken by a gamma camera system. Mean Field Annealing (MFA), the image processing technique used in this research, is a well-known approach. The MFA algorithm uses two techniques to achieve image restoration: gradient descent as the minimisation technique, and a deterministic approximation to Simulated Annealing (SA) for optimisation. The algorithm anisotropically diffuses an image, iteratively smoothing regions that are considered non-edges while preserving edge integrity, until a global minimum is obtained. A known advantage of MFA is that it is able to reach this global minimum, skipping over local minima, while still providing results comparable to SA with significantly less computational effort. Image blur is measured using either a point or a line source; both allow derivation of a Point Spread Function (PSF) that is used to de-blur the image. The noise variance can be measured using a flood source; the noise is due to random fluctuations in the environment as well as other contributors. Noisy, blurred NM images can be difficult to diagnose, particularly in regions with steep intensity gradients, and for this reason MFA is considered suitable for image restoration. From the literature it is evident that MFA can be applied successfully to digital phantom images, providing improved performance over Wiener filters. In this paper MFA is shown to yield enhancement of planar NM images by implementing a sharpening filter as a post-MFA processing step.
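
    The anisotropic-diffusion behaviour described above can be sketched as gradient descent on a data-fidelity plus edge-weighted smoothness energy while a temperature-like parameter is lowered, so smoothing is progressively suppressed across strong edges. This is a simplified stand-in for MFA (no PSF de-blurring step), and all parameter names are illustrative.

        import numpy as np

        def mfa_restore(y, sigma2=1.0, lam=0.5, t_start=1.0, t_end=0.05,
                        n_iter=100, step=0.2):
            """Annealed, edge-preserving restoration of a noisy 2D image y (sketch)."""
            x = y.astype(float).copy()
            for T in np.geomspace(t_start, t_end, n_iter):          # temperature schedule
                # Forward differences (gradients) in the two image directions.
                gx = np.diff(x, axis=0, append=x[-1:, :])
                gy = np.diff(x, axis=1, append=x[:, -1:])
                # Edge-dependent weights: near 1 in flat regions, near 0 across edges.
                wx = np.exp(-gx ** 2 / T)
                wy = np.exp(-gy ** 2 / T)
                # Divergence of the weighted gradient field (anisotropic smoothing term).
                div = (np.diff(wx * gx, axis=0, prepend=np.zeros((1, x.shape[1])))
                       + np.diff(wy * gy, axis=1, prepend=np.zeros((x.shape[0], 1))))
                grad_energy = (x - y) / sigma2 - lam * div
                x -= step * grad_energy                              # gradient-descent update
            return x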

    Discrete Tomography by Convex-Concave Regularization using Linear and Quadratic Optimization

    Discrete tomography concerns the reconstruction of objects that are made up of a few different materials, each of which has a homogeneous density distribution. Under the assumption that these densities are known a priori, new algorithms can be developed that typically need less projection data to achieve appealing reconstruction results.

    Estimation of the Image Quality in Emission Tomography: Application to Optimization of SPECT System Design

    In Emission Tomography the design of the imaging system has a great influence on the quality of the output image. Optimisation of the system design is a difficult problem due to the computational complexity and to the challenges in its mathematical formulation. In order to compare different system designs, an efficient and effective method to calculate the image quality is needed. In this thesis, statistical and deterministic methods for calculating the uncertainty in the reconstruction are presented. In the deterministic case, the Fisher Information Matrix (FIM) formalism can be employed to characterize such uncertainty. Unfortunately, computing, storing and inverting the FIM is not feasible for 3D imaging systems. In order to tackle the computational load of calculating the inverse of the FIM, a novel approximation that relies on a sub-sampling of the FIM is proposed. The FIM is calculated over a subset of voxels arranged in a grid that covers the whole volume. This formulation reduces the computational complexity of inverting the FIM but nevertheless accounts for the global interdependence between the variables, for the acquisition geometry and for the object dependency. Using this approach, the noise properties as a function of the system geometry parameterisation were investigated for three different cases. In the first study, the design of a parallel-hole collimator for SPECT is optimised; the new method can be applied to problems such as trading off collimator resolution and sensitivity. In the second study, the reconstructed image quality was evaluated in the case of truncated projection data, showing that the sub-sampling approach is very accurate for evaluating the effects of missing data. Finally, the noise properties of a D-SPECT system were studied for varying acquisition protocols, showing that the new method is well suited to problems such as optimising adaptive data sampling schemes.
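
    A minimal version of the sub-sampling idea, assuming a linearised Poisson emission model in which the full FIM is A^T diag(1/y_bar) A, restricts the system matrix to the columns of the sampled voxels and inverts only that small matrix. The thesis's exact formulation may differ, and the names below are illustrative.

        import numpy as np

        def subsampled_fim_variance(A, x, subset, background=1e-3):
            """Approximate reconstruction variance at a grid of sample voxels via a
            sub-sampled Fisher information matrix (Poisson emission model, sketch).

            A       : (n_bins, n_voxels) system matrix
            x       : (n_voxels,) activity estimate used as linearisation point
            subset  : indices of the sample voxels (e.g. a coarse grid over the volume)
            """
            y_bar = A @ x + background            # expected counts in each detector bin
            A_s = A[:, subset]                    # keep only the sampled voxels' columns
            # Fisher information restricted to the subset: F = A_s^T diag(1/y_bar) A_s.
            F = A_s.T @ (A_s / y_bar[:, None])
            cov = np.linalg.inv(F)                # small matrix, cheap to invert
            return np.diag(cov)                   # CRB-style variances at the sampled voxels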

    Non-uniform resolution and partial volume recovery in tomographic image reconstruction methods

    Acquired data in tomographic imaging systems are subject to physical or detector-based image-degrading effects. These effects need to be considered and modeled in order to optimize resolution recovery. However, even accurate modeling of the physics of the data and acquisition processes still leads to an ill-posed reconstruction problem, because real data are incomplete and noisy. Real images are always a compromise between resolution and noise; therefore, noise processes also need to be fully considered for an optimum bias-variance trade-off. Image-degrading effects and noise are generally modeled in the reconstruction methods, and statistical iterative methods can model these effects, together with noise processes, better than analytical methods. Regularization is used to condition the problem, and explicit regularization methods are considered better for modeling various noise processes, with extended control over the reconstructed image quality. Emission physics, through object distribution properties, is modeled in the form of a prior function. Smoothing and edge-preserving priors have been investigated in detail, and it has been shown that smoothing priors over-smooth images in high-count areas and result in a spatially non-uniform and nonlinear resolution response. A uniform resolution response is desirable for image comparison and other image processing tasks, such as segmentation and registration. This work proposes methods, based on MRPs in MAP estimators, to obtain images with almost uniform and linear resolution characteristics, using the nonlinearity of MRPs as a correction tool. Results indicate that MRPs perform better in terms of response linearity, spatial uniformity and parameter sensitivity than QPs and TV priors. Hybrid priors, comprising MRPs and QPs, have been developed and analyzed for their activity-recovery performance in two popular PVC methods and in an analysis of list-mode data reconstruction methods, showing that MRPs perform better than QPs in different situations.
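
    Assuming MRP here stands for the median root prior, a MAP estimate with such a prior is commonly computed with a one-step-late MAP-EM update, sketched below. This is a textbook-style illustration rather than the methods developed in this work, and the helper names are hypothetical.

        import numpy as np
        from scipy.ndimage import median_filter

        def osl_map_em_mrp(A, y, shape, beta=0.3, n_iter=50, eps=1e-12):
            """One-step-late MAP-EM with a median root prior (illustrative sketch).

            Update: x_j <- x_j / (sum_i a_ij + beta * (x_j - med_j) / med_j)
                          * sum_i a_ij * y_i / (A x)_i,
            where med_j is the local median of the current estimate around voxel j.
            """
            x = np.ones(int(np.prod(shape)))
            sens = A.T @ np.ones(A.shape[0])          # sensitivity image: sum_i a_ij
            for _ in range(n_iter):
                ratio = y / (A @ x + eps)             # measured / expected counts
                backproj = A.T @ ratio
                med = median_filter(x.reshape(shape), size=3).ravel() + eps
                penalty = beta * (x - med) / med      # derivative of the MRP penalty
                x = x * backproj / (sens + penalty + eps)
            return x.reshape(shape)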