
    MCMC and variational approaches for Bayesian inversion in diffraction imaging

    The term “diffraction imaging” is used here in the sense of an “inverse scattering problem”, where the goal is to build up an image of an unknown object from measurements of the scattered field resulting from its interaction with a known probing wave. This type of problem occurs in many imaging and non-destructive testing applications. It corresponds to the situation where the search for a good trade-off between image resolution and penetration of the incident wave into the probed medium leads to choosing the frequency of the latter such that its wavelength lies in the “resonance” domain, i.e. it is approximately of the same order of magnitude as the characteristic dimensions of the inhomogeneities of the inspected object. In this situation the wave-object interaction gives rise to important diffraction phenomena. This is the case for the two applications considered herein, where the interrogating waves are electromagnetic waves with wavelengths in the microwave and optical domains, and the characteristic dimensions of the sought object are 1 cm and 1 μm, respectively.
The solution of an inverse problem obviously requires the prior construction of a forward model that expresses the scattered field as a function of the parameters of the sought object. In this model, diffraction phenomena are taken into account by means of domain integral representations of the electric fields. The forward model is thus described by two coupled integral equations, whose discrete versions are obtained using a method of moments and whose inversion leads to a non-linear problem.
Concerning inversion, at the beginning of the 1980s, accounting for diffraction phenomena was the subject of much attention in the field of acoustic imaging, for applications in geophysics, non-destructive testing and biomedical imaging.
This led to techniques such as diffraction tomography, a term that denotes “applications that employ diffracting wavefields in the tomographic reconstruction process”, but which generally implies reconstruction processes based on the generalized projection-slice theorem, an extension to the diffraction case of the projection-slice theorem of classical computed tomography, whose forward model is given by a Radon transform. This theorem rests upon first-order linearizing assumptions such as the Born or Rytov approximations. The term diffraction tomography was thus paradoxically used to describe reconstruction techniques adapted to weakly scattering environments, which do not provide quantitative information on highly contrasted dielectric objects such as those encountered in the applications considered herein, where multiple diffraction cannot be ignored. Furthermore, the resolution of these techniques is limited because evanescent waves are not taken into consideration. These limitations led researchers to develop inversion algorithms able to deal with non-linear problems, at the beginning of the 1990s for microwave imaging and more recently for optical imaging.
Many studies have focused on the development of deterministic methods, such as the Newton-Kantorovich algorithm, the modified gradient method (MGM) and the contrast-source inversion technique (CSI), where the solution is sought through iterative minimization, by a gradient method, of a cost functional that expresses the difference between the measured scattered field and the estimated model output. However, in addition to being non-linear, inverse scattering problems are also known to be ill-posed, which means that their resolution requires regularization, generally consisting in the introduction of prior information on the sought object.
In the present case, for example, we look for man-made objects composed of compact homogeneous regions made of a finite number of different materials; with the aforementioned deterministic methods, such prior information is not easy to take into account, because it must be introduced into the cost functional to be minimized. On the contrary, the probabilistic framework of Bayesian estimation, on which the model presented herein is based, is especially well suited to this situation. Prior information is introduced via a probabilistic Gauss-Markov-Potts model: the marginal contrast distribution is modeled as a mixture of Gaussians, where each Gaussian distribution represents a class of materials, and the compactness of the regions is taken into account using a hidden Markov model. Estimation of the unknowns and of the parameters introduced into the prior model is performed via an unsupervised joint approach.
Two iterative algorithms are proposed. The first one, denoted the MCMC algorithm (Markov chain Monte Carlo), is rather classic; it consists in expressing the joint posterior and conditional distributions of all the unknowns and then using a Gibbs sampling algorithm to estimate the posterior mean of the unknowns. This algorithm yields good results; however, it is computationally intensive, mainly because Gibbs sampling requires a significant number of samples. The second algorithm is based upon the variational Bayesian approximation (VBA). The latter was first introduced in the field of Bayesian inference for applications to neural networks, learning graphical models and model parameter estimation. Its appearance in the field of inverse problems is relatively recent, starting with source separation and image restoration.
It consists in approximating the joint posterior distribution of all the unknowns by a free-form separable distribution that minimizes the Kullback-Leibler divergence with respect to the posterior law, which has interesting properties for optimization and leads to an implicit parametric optimization scheme. Once the approximate distribution is built, the estimator can easily be obtained. A solution to this functional optimization problem can be found in terms of exponential distributions whose shape parameters are estimated iteratively. It can be noted that, at each iteration, the updating expression for these parameters is similar to the one that would be obtained if a gradient method were used to solve the optimization problem; moreover, the gradient and the step size have an interpretation in terms of statistical moments (means, variances, etc.).
Both algorithms are applied to two quite different configurations. The one related to microwave imaging is quasi-optimal: the data are quasi-complete and frequency diverse, meaning that the scattered fields are measured all around the object for several directions of illumination and several frequencies. The configuration used in optical imaging is less favorable, since only aspect-limited data are available at a single frequency: illuminations and measurements can only be performed in a limited angular sector. This limited aspect reinforces the ill-posedness of the inverse problem and makes the introduction of prior information essential. However, it is shown that, in both cases, satisfactory results are obtained.
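The free-form separable (mean-field) approximation described above can be illustrated on a toy target where everything is known in closed form. The sketch below runs coordinate-ascent variational inference on a bivariate Gaussian: the separable approximation's means converge to the true means by a fixed-point iteration (exactly the kind of moment-based update the abstract mentions), while the factorized variances underestimate the true marginal variances. The target and all numbers are illustrative assumptions, not the paper's actual posterior.

```python
import numpy as np

# Mean-field (free-form separable) approximation of a bivariate Gaussian
# target N(mu, Sigma): the classic closed-form illustration of minimizing
# the Kullback-Leibler divergence over separable distributions.
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
Lam = np.linalg.inv(Sigma)              # precision matrix of the target

m = np.zeros(2)                         # variational means, initialised at 0
for _ in range(50):                     # coordinate-ascent fixed-point sweeps
    m[0] = mu[0] - Lam[0, 1] / Lam[0, 0] * (m[1] - mu[1])
    m[1] = mu[1] - Lam[1, 0] / Lam[1, 1] * (m[0] - mu[0])

# Mean-field variances: 1/precision, which underestimates the true marginals.
q_var = 1.0 / np.diag(Lam)
```

Each sweep contracts the error in the means by the squared correlation, so the iteration converges geometrically; the variance underestimation (here 0.36 versus a true marginal variance of 1.0) is the well-known price of the separable approximation.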

    Deep Injective Prior for Inverse Scattering

    In electromagnetic inverse scattering, the goal is to reconstruct object permittivity using scattered waves. While deep learning has shown promise as an alternative to iterative solvers, it is primarily used in supervised frameworks, which are sensitive to the distribution drift of the scattered fields that is common in practice. Moreover, these methods typically provide a single estimate of the permittivity pattern, which may be inadequate or misleading due to noise and the ill-posedness of the problem. In this paper, we propose a data-driven framework for inverse scattering based on deep generative models. Our approach learns a low-dimensional manifold as a regularizer for recovering target permittivities. Unlike supervised methods that necessitate both scattered fields and target permittivities, our method only requires the target permittivities for training; it can then be used with any experimental setup. We also introduce a Bayesian framework for approximating the posterior distribution of the target permittivity, enabling multiple estimates and uncertainty quantification. Extensive experiments with synthetic and experimental data demonstrate that our framework outperforms traditional iterative solvers, particularly for strong scatterers, while achieving comparable reconstruction quality to state-of-the-art supervised learning methods such as the U-Net.
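The core regularization idea above, recovering the unknown by optimizing only over the low-dimensional latent space of a generative model, can be sketched with linear stand-ins. Here the "generator" G is a fixed random linear map (a placeholder for a trained injective network) and A is a toy forward operator; both are assumptions for illustration, not the paper's scattering model or architecture.

```python
import numpy as np

# Latent-space regularization sketch: instead of solving the ill-posed
# problem A x = y over all x, search only over x = G z with z low-dimensional.
rng = np.random.default_rng(1)
n, m, k = 64, 16, 4                      # image size, measurements, latent dim
G = rng.normal(size=(n, k))              # hypothetical generator (decoder)
A = rng.normal(size=(m, n))              # hypothetical forward operator

z_true = rng.normal(size=k)
y = A @ G @ z_true                       # noiseless toy measurements

AG = A @ G
step = 1.0 / np.linalg.norm(AG, 2) ** 2  # safe gradient step (1/L)
z = np.zeros(k)
for _ in range(2000):                    # gradient descent on ||AG z - y||^2/2
    z -= step * AG.T @ (AG @ z - y)
x_rec = G @ z                            # recovered "permittivity" pattern
```

Even though m < n makes the pixel-space problem underdetermined, the latent problem (m equations, k unknowns) is well posed, which is exactly the manifold-as-regularizer effect the abstract describes.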

    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e. the very fast-scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g. 3-D field-solver discretizations and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g. full-chip routing/placement and circuit sizing), or extensive process variations (e.g. variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for both storing and efficiently solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be of advantage. (Accepted by IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.)
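The storage argument behind the tensor framework is easy to make concrete: a d-way array with n points per axis costs n**d values, while a rank-R CP (canonical polyadic) decomposition stores only R factor vectors per axis. The numbers and the rank-1 reconstruction below are illustrative, not taken from the paper.

```python
import numpy as np

# Curse of dimensionality vs. low-rank tensor storage (toy numbers).
n, d, R = 10, 6, 5
full_storage = n ** d                   # dense d-way array: 1,000,000 entries
cp_storage = R * d * n                  # CP factors only: 300 entries

# A rank-1 CP term is the outer product of its factor vectors:
factors = [np.linspace(0.0, 1.0, n) for _ in range(3)]
rank1 = np.einsum('i,j,k->ijk', *factors)
```

The dense array grows exponentially in d while the CP representation grows linearly, which is why tensor formats can make otherwise intractable high-dimensional EDA problems storable and solvable.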

    Bayesian Variational Regularisation for Dark Matter Reconstruction with Uncertainty Quantification

    Despite the great wealth of cosmological knowledge accumulated since the early 20th century, the nature of dark matter, which accounts for ~85% of the matter content of the universe, remains elusive. Unfortunately, though dark matter is scientifically interesting, with implications for our fundamental understanding of the Universe, it cannot be directly observed. Instead, dark matter may be inferred from e.g. the optical distortion (lensing) of distant galaxies which, at linear order, manifests as a perturbation to the apparent magnitude (convergence) and ellipticity (shearing). Ensemble observations of the shear are collected and leveraged to construct estimates of the convergence, which can be directly related to the universal dark-matter distribution. Imminent stage IV surveys are forecast to accrue an unprecedented quantity of cosmological information; a discriminative partition of this information is accessible through the convergence, and is disproportionately concentrated at high angular resolutions, where the echoes of cosmological evolution under gravity are most apparent. Capitalising on advances in probability concentration theory, this thesis merges the paradigms of Bayesian inference and optimisation to develop hybrid convergence inference techniques which are scalable, statistically principled, and operate over the Euclidean plane, the celestial sphere, and the 3-dimensional ball. Such techniques can quantify the plausibility of inferences at one-millionth the computational overhead of competing sampling methods. These Bayesian techniques are applied to the hotly debated Abell-520 merging cluster, concluding that observational catalogues contain insufficient information to determine the existence of dark-matter self-interactions. Further, these techniques were applied to all public lensing catalogues, recovering the then largest global dark-matter mass map.
The primary methodological contributions of this thesis depend only on posterior log-concavity, paving the way towards a potentially revolutionary complete hybridisation with artificial-intelligence techniques. These next-generation techniques are the first to operate over the full 3-dimensional ball, laying the foundations for statistically principled universal dark-matter cartography, and for the cosmological insights such advances may provide.
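The linear-order shear-to-convergence relation mentioned above can be sketched on a flat-sky grid: in Fourier space, shear and convergence differ only by a unit-modulus multiplicative kernel, so the convergence (mass) map is recovered by applying the conjugate kernel, the classical Kaiser-Squires inversion. The grid, the toy mass peak, and the planar setting are assumptions for illustration; the thesis's methods add Bayesian regularisation and the spherical and 3-D settings.

```python
import numpy as np

# Planar Kaiser-Squires sketch: gamma_hat = D * kappa_hat with |D| = 1,
# so kappa is recovered (up to the k = 0 mode) by multiplying by conj(D).
N = 32
kappa = np.zeros((N, N)); kappa[10:14, 16:20] = 1.0   # toy mass peak
kx = np.fft.fftfreq(N); ky = np.fft.fftfreq(N)
k1, k2 = np.meshgrid(kx, ky, indexing='ij')
k2norm = k1**2 + k2**2
k2safe = np.where(k2norm > 0, k2norm, 1.0)            # avoid 0/0 at origin
D = (k1 + 1j * k2) ** 2 / k2safe                      # lensing kernel
D[0, 0] = 0.0                                         # k = 0 is unconstrained

gamma = np.fft.ifft2(D * np.fft.fft2(kappa))          # forward: kappa -> shear
kappa_rec = np.fft.ifft2(np.conj(D) * np.fft.fft2(gamma)).real
```

Because the k = 0 mode carries no shear signal, the reconstruction recovers the convergence only up to its mean, the mass-sheet degeneracy, which is one reason principled priors matter in practice.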

    Metric Gaussian variational inference

    One main result of this dissertation is the development of Metric Gaussian Variational Inference (MGVI), a method to perform approximate inference in extremely high dimensions and for complex probabilistic models. The problem with high-dimensional and complex models is twofold. First, to capture the true posterior distribution accurately, a sufficiently rich approximation is required. Second, the number of parameters needed to express this richness scales dramatically with the number of model parameters. For example, explicitly expressing the correlation between all model parameters requires the squared number of correlation coefficients. In settings with millions of model parameters, this is infeasible. MGVI overcomes this limitation by replacing the explicit covariance with an implicit approximation, which does not have to be stored and is accessed via samples. This procedure scales linearly with the problem size and makes it possible to account for full correlations even in extremely large problems, which also makes the method applicable to significantly more complex setups. MGVI enabled a series of ambitious signal reconstructions by me and others, which will be showcased. These involve a time- and frequency-resolved reconstruction of the shadow around the black hole M87* using data provided by the Event Horizon Telescope Collaboration, a three-dimensional tomographic reconstruction of interstellar dust within 300 pc around the Sun from Gaia starlight-absorption and parallax data, novel medical imaging methods for computed tomography, an all-sky Faraday rotation map combining distinct data sources, and simultaneous calibration and imaging with a radio interferometer. The second main result is an approach that uses several independently trained deep neural networks to reason about complex tasks. Deep learning makes it possible to capture abstract concepts by extracting them from large amounts of training data, which alleviates the need for an explicit mathematical formulation.
Here, a generative neural network is used as a prior distribution, and certain properties are imposed via classification and regression networks. The inference is then performed in terms of the latent variables of the generator, using MGVI and other methods. This makes it possible to flexibly answer novel questions without having to re-train any neural network, and to arrive at novel answers through Bayesian reasoning. This approach to Bayesian reasoning with neural networks can also be combined with conventional measurement data.
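The "implicit covariance accessed via samples" idea from the dissertation summary can be sketched in a matrix-free way: for a linear Gaussian toy model, the posterior precision is touched only through matrix-vector products, and samples with the correct covariance are drawn by a conjugate-gradient solve. The model, sizes, and unit prior below are illustrative assumptions, not MGVI itself, which additionally uses the Fisher metric at the current expansion point.

```python
import numpy as np

# Matrix-free covariance sketch: for y = A x + n with unit Gaussian prior,
# the posterior precision is M = A^T A / s2 + I. We never form M; we only
# apply it, and we draw a sample s with covariance M^{-1} by solving
# M s = v where v is drawn with covariance M.
rng = np.random.default_rng(2)
n_dim, n_data, s2 = 50, 30, 0.1
A = rng.normal(size=(n_data, n_dim))

def M(x):                               # precision mat-vec; no matrix stored
    return A.T @ (A @ x) / s2 + x

def cg_solve(b, iters=300):             # plain conjugate gradients on M
    x = np.zeros_like(b)
    r = b - M(x); p = r.copy(); rs = r @ r
    for _ in range(iters):
        Mp = M(p)
        alpha = rs / (p @ Mp)
        x += alpha * p; r -= alpha * Mp
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p; rs = rs_new
    return x

samples = []
for _ in range(100):
    v = A.T @ rng.normal(size=n_data) / np.sqrt(s2) + rng.normal(size=n_dim)
    samples.append(cg_solve(v))         # each sample has covariance M^{-1}
var_est = np.var(samples, axis=0)       # implicit posterior variances
```

Storage here is linear in the problem size (the samples and the operator), never quadratic, which is the property that lets this style of inference scale to millions of parameters.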

    Microstructure design of magneto-dielectric materials via topology optimization

    Engineered materials, such as new composites, electromagnetic bandgap and periodic structures, have attracted considerable interest in recent years due to their remarkable and unique electromagnetic behavior. As a result, an extensive literature on the theory and application of artificially modified materials exists. Examples include photonic crystals (regular, degenerate or magnetic) illustrating that extraordinary gain and high transmittance can be achieved at specific frequencies. Importantly, recent investigations of material loading demonstrate that substantial improvements in antenna performance (smaller size, larger bandwidth, higher gain, etc.) can be attained by loading bulk materials such as ferrites, or by simply grading the material subject to specific design objectives. Multi-tone ceramic materials have also been used for miniaturization, and pliable polymers offer new possibilities in three-dimensional antenna design and multilayer printed structures, including 3D electronics. However, as the variety of examples in the literature shows, the perfect combination of materials is unique and extremely difficult to determine without optimization. In addition, existing artificial dielectrics are mostly based on intuitive studies, i.e. a formal design framework to predict the exact spatial combination of dielectrics, magnetics and conductors does not exist. In the first part of this thesis, an inverse design framework is proposed that integrates an FE-based analysis tool (COMSOL MULTIPHYSICS PDE Coefficient Module) with an optimization technique (MATLAB Genetic Algorithm and Direct Search toolbox), suitable for designing the microstructure of artificial magneto-dielectrics from isotropic material phases. Homogenizing Maxwell's equations (MEQs) in order to estimate the effective material parameters of the desired composite made of periodic microstructures is the initial task of the framework.
The FE analysis tool is used to evaluate intermediate fields at the "micro-scale" level of a unit cell, which are integrated with the homogenized MEQs in order to estimate the "macro-scale" effective constitutive parameters of the overall bulk periodic structure. Direct simulation of the periodic structure is an extremely challenging task, because the micro-level mesh (inclusions much smaller than the periodic cell dimension) would have to span the entire bulk structure, making the computation very intensive. The proposed framework, based on the solution of the homogenized MEQs via this micro-macro approach, therefore enables topology design of microstructures with desired properties. The goal is to achieve predefined material constitutive parameters via artificial electromagnetic substrates. Physical bounds on the attainable material properties are studied to avoid infeasible effective-parameter requirements for the available multi-constituents. The proposed framework is applied to examples such as microstructure layers of non-reciprocal magnetic photonic crystals. Results show that the homogenization technique, along with topology optimization, is able to design non-intuitive material compositions with desired electromagnetic properties. In the second part of the thesis, approximation techniques to speed up large-scale topology optimization studies of devices with complex frequency responses are investigated. Miniaturization of microstrip antennas via topology optimization of both the conductor and the material substrate via multi-tone ceramic shades is a typical example treated here. The long computational times required for electromagnetic analysis over a frequency range, and the need for a heuristic-based optimization tool to locate the global minima of complex devices, present two important bottlenecks for practical design studies.
In this thesis, two new techniques are proposed for speeding up the optimization process by reducing the number of frequency calls needed to accurately predict the multi-resonance response of a candidate design. The proposed techniques employ adaptive sampling methods along with novel rational-function interpolations. The first relies on a heuristic rational interpolation using Bayes' theory and rational functions. The second uses a rational-function interpolation employing a new adaptive path based on the Stoer-Bulirsch algorithm. Both techniques prove to efficiently predict resonances and reduce the computational time at least threefold.
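The reason rational interpolation cuts down frequency calls is that a resonance is captured exactly by a low-order rational model, so a handful of samples can predict the full response and the resonance location without a dense sweep. The sketch below fits a (0,2) rational model by linearized least squares to four samples of a single-resonance response; the response shape, the model order, and the fitting scheme are illustrative assumptions, not the thesis's Bayes- or Stoer-Bulirsch-based algorithms.

```python
import numpy as np

# A Lorentzian-type resonance is exactly a degree-(0,2) rational function,
# so a few coarse frequency samples determine the whole curve.
f0, g = 2.4, 0.05                        # "unknown" resonance centre and width
def response(f):
    return 1.0 / ((f - f0) ** 2 + g ** 2)

f_s = np.array([1.0, 2.0, 3.0, 4.0])     # only four coarse frequency samples
y_s = response(f_s)

# Fit y = c / (f^2 + a f + b) via the linearised system
#   y * (f^2 + a f + b) = c   ->   (y*f) a + y b - c = -y f^2
Amat = np.column_stack([y_s * f_s, y_s, -np.ones_like(f_s)])
a, b, c = np.linalg.lstsq(Amat, -y_s * f_s ** 2, rcond=None)[0]

f0_pred = -a / 2.0                       # predicted resonance frequency
```

Four samples pin down the three model coefficients, and the fitted denominator immediately yields the resonance frequency, the kind of prediction that replaces hundreds of full-wave solver calls in an optimization loop.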

    Bayesian Inference for Inverse Problems

    Inverse problems arise wherever we have indirect measurements. Regularization and Bayesian inference are the two main approaches to handling inverse problems. The Bayesian approach is more general and offers many more tools for developing efficient methods for difficult problems. In this chapter, an overview of Bayesian parameter estimation is first presented, followed by its extension to inverse problems. The main difficulty is the high dimensionality of the unknown quantities and the appropriate choice of the prior law. The second main difficulty lies in the computational aspects. Different approximate Bayesian computation methods, and in particular the variational Bayesian approximation (VBA), are explained in detail.
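The Bayesian parameter estimation that this chapter reviews has a fully closed-form instance worth keeping in mind: observing Gaussian data with a Gaussian prior on the mean gives a Gaussian posterior whose mean is a precision-weighted combination of prior and data. The numbers below are a toy illustration, not taken from the chapter.

```python
import numpy as np

# Conjugate Gaussian example: y_i ~ N(theta, s2), prior theta ~ N(m0, v0).
# The posterior is N(m_post, v_post) with precision-weighted mean.
rng = np.random.default_rng(3)
theta_true, s2 = 2.0, 0.5
y = rng.normal(theta_true, np.sqrt(s2), size=100)

m0, v0 = 0.0, 10.0                          # weak Gaussian prior
v_post = 1.0 / (1.0 / v0 + len(y) / s2)     # posterior variance
m_post = v_post * (m0 / v0 + y.sum() / s2)  # posterior mean
```

With many data points the likelihood precision dominates, so the posterior mean is close to the sample mean and the posterior variance shrinks below the variance of the sample mean; it is this same precision bookkeeping, in very high dimensions, that makes the inverse-problem case computationally demanding.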

    Bayesian approaches in microwave tomography: applications to breast cancer imaging

    This work concerns the problem of microwave tomography for application to biomedical imaging. The aim is to retrieve both the permittivity and the conductivity of an unknown object from measurements of the scattered field that results from its interaction with a known interrogating wave. Such a problem is said to be inverse, as opposed to the associated forward problem, which consists in calculating the scattered field when the interrogating wave and the object are known. The resolution of the inverse problem requires the prior construction of the associated forward model. The latter is based on a domain integral representation of the electric field, resulting in two coupled integral equations whose discrete counterparts are obtained by means of the method of moments.
Regarding the inverse problem, in addition to the fact that the physical equations involved in the forward modeling make it nonlinear, it is also mathematically ill-posed in the sense of Hadamard, which means that the conditions of existence, uniqueness and stability of the solution are not simultaneously guaranteed. Solving this problem therefore requires prior regularization, which usually involves the introduction of a priori information on the sought solution. This resolution is done here in a Bayesian probabilistic framework, where we introduce a priori knowledge appropriate to the sought object by considering it to be composed of a finite number of homogeneous materials distributed in compact and homogeneous regions. This information is introduced through a "Gauss-Markov-Potts" model. In addition, the Bayesian computation gives the posterior distribution of all the unknowns, given the prior model and the data. We then identify the posterior estimators via variational approximation methods and thereby reconstruct the image of the sought object. The main contributions of this work are methodological and algorithmic.
They are illustrated by an application of microwave imaging to breast cancer detection. The latter is in itself a very important and original aspect of the thesis. Indeed, the detection of breast cancer using microwave imaging is a very interesting alternative to X-ray mammography, but it is still at an exploratory stage.
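The forward model described in this abstract, two coupled integral equations discretized by the method of moments, reduces in discrete form to a state equation for the total field inside the object and an observation equation for the scattered field at the receivers. The sketch below shows that structure with random stand-ins for the discretized Green operators; G, G_obs, the grid sizes, and the contrast profile are all illustrative assumptions, not a physical discretization.

```python
import numpy as np

# Discrete counterpart of the coupled integral equations:
#   state:       E_tot = E_inc + G @ (chi * E_tot)   (multiple scattering)
#   observation: E_sca = G_obs @ (chi * E_tot)
rng = np.random.default_rng(4)
n_cells, n_rx = 40, 8
G = 0.1 * rng.normal(size=(n_cells, n_cells))   # domain Green matrix (toy)
G_obs = rng.normal(size=(n_rx, n_cells))        # observation Green matrix (toy)
chi = np.zeros(n_cells); chi[10:20] = 0.5       # contrast: one homogeneous region
E_inc = np.ones(n_cells)                        # incident field on the grid

# Solving the state equation: (I - G diag(chi)) E_tot = E_inc
E_tot = np.linalg.solve(np.eye(n_cells) - G * chi, E_inc)
E_sca = G_obs @ (chi * E_tot)                   # scattered field at receivers
```

Because the total field depends on the contrast through a matrix inverse, the map from chi to E_sca is nonlinear in chi, which is exactly why the inverse problem is nonlinear even though each equation is linear in the fields.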

    Information-Theory-Based High-Energy Photon Imaging
