
    Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation

    Inverse problems play a key role in modern image/signal processing methods. However, since they are generally ill-conditioned or ill-posed due to a lack of observations, their solutions may have significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty are presented, e.g. highest posterior density credible regions, and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to set regularisation parameters automatically, so that the proposed uncertainty quantification (UQ) strategies become much easier to use. Finally, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal, and their performance within the proposed UQ methodology is investigated. Comment: 5 pages, 5 figures
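The local credible intervals described above can be sketched as follows. This is a minimal illustration that assumes posterior image samples are already available (e.g. from an MCMC sampler), rather than the convex-optimisation route the article itself develops; the function name and `block` parameter are ours.

```python
import numpy as np

def local_credible_intervals(samples, alpha=0.05, block=1):
    """Pixel-wise (or superpixel-wise) credible intervals from posterior samples.

    samples : array of shape (n_samples, H, W), posterior image samples
              (hypothetical input, e.g. from an MCMC sampler).
    block   : side length of the superpixel; block=1 gives per-pixel intervals.
    Returns (lo, hi) arrays of shape (H // block, W // block).
    """
    n, H, W = samples.shape
    # Average each block x block superpixel, then take empirical quantiles.
    s = samples[:, :H - H % block, :W - W % block]
    s = s.reshape(n, H // block, block, W // block, block).mean(axis=(2, 4))
    lo = np.quantile(s, alpha / 2, axis=0)
    hi = np.quantile(s, 1 - alpha / 2, axis=0)
    return lo, hi
```

Each (lo, hi) pair then plays the role of an error bar for its pixel or superpixel, as in the article's visualisations.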

    Maximum-a-posteriori estimation with Bayesian confidence regions

    Solutions to inverse problems that are ill-conditioned or ill-posed may have significant intrinsic uncertainty. Unfortunately, analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems. As a result, while most modern mathematical imaging methods produce impressive point estimation results, they are generally unable to quantify the uncertainty in the solutions delivered. This paper presents a new general methodology for approximating Bayesian high-posterior-density credibility regions in inverse problems that are convex and potentially very high-dimensional. The approximations are derived by using recent concentration of measure results related to information theory for log-concave random vectors. A remarkable property of the approximations is that they can be computed very efficiently, even in large-scale problems, by using standard convex optimisation techniques. In particular, they are available as a by-product in problems solved by maximum-a-posteriori estimation. The approximations also have favourable theoretical properties, namely they outer-bound the true high-posterior-density credibility regions, and they are stable with respect to model dimension. The proposed methodology is illustrated on two high-dimensional imaging inverse problems related to tomographic reconstruction and sparse deconvolution, where the approximations are used to perform Bayesian hypothesis tests and explore the uncertainty about the solutions, and where proximal Markov chain Monte Carlo algorithms are used as a benchmark to compute exact credible regions and measure the approximation error.
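Because the approximate region depends only on the negative log posterior evaluated at the MAP estimate, membership can be tested in a few lines. The sketch below uses one published form of the concentration bound, with threshold $f(\hat{x}_{\mathrm{MAP}}) + \sqrt{N}\,\tau_\alpha + N$ and $\tau_\alpha = \sqrt{16\log(3/\alpha)}$; the exact constants should be checked against the paper, and the function name is ours.

```python
import numpy as np

def in_approx_hpd(f, x, x_map, alpha):
    """Test whether x lies in the approximate (1 - alpha) HPD credibility region.

    f     : negative log posterior (convex) on R^N
    x_map : MAP estimate, i.e. the minimiser of f
    The region outer-bounds the true HPD region, so a point rejected here
    is also outside the exact credible region.
    """
    N = np.size(x)
    tau = np.sqrt(16.0 * np.log(3.0 / alpha))
    return f(np.asarray(x, float)) <= f(np.asarray(x_map, float)) + np.sqrt(N) * tau + N
```

This is the sense in which the credibility region comes "for free" once the MAP problem has been solved: no sampling is needed, only extra evaluations of f.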

    Revisiting maximum-a-posteriori estimation in log-concave models

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimisation. Despite its success and wide adoption, MAP estimation is not yet theoretically well understood. The prevalent view in the community is that MAP estimation is not proper Bayesian estimation in a decision-theoretic sense because it does not minimise a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator, which minimises the mean squared loss). This paper addresses this theoretical gap by presenting a decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry, and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function for Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss is the Bregman divergence associated with the negative log posterior density. We then show that the MAP estimator is the only Bayesian estimator that minimises the expected canonical loss, and that the posterior mean or MMSE estimator minimises the dual canonical loss. We also study the question of MAP and MMSE estimation performance at large scale and establish a universal bound on the expected canonical error as a function of dimension, offering new insights into the good performance observed in convex problems. These results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple roles that convex geometry plays in imaging problems. Comment: Accepted for publication in SIAM Journal on Imaging Sciences
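The canonical loss mentioned above is the Bregman divergence $D_f(x, y) = f(x) - f(y) - \langle \nabla f(y), x - y \rangle$ of the negative log posterior $f$. A minimal numerical sketch (function names are ours): in the Gaussian case $f(x) = \tfrac{1}{2}\lVert x - \mu \rVert^2$, the divergence reduces to the familiar squared loss, so MAP and MMSE estimation coincide there, as expected.

```python
import numpy as np

def bregman(f, grad_f, x, y):
    """Bregman divergence D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return f(x) - f(y) - float(np.dot(grad_f(y), x - y))

# Gaussian negative log-density (up to a constant) and its gradient.
mu = np.array([1.0, -2.0])
f = lambda x: 0.5 * float(np.sum((x - mu) ** 2))
grad_f = lambda x: x - mu
```

For non-Gaussian log-concave models the divergence is asymmetric, which is why the MAP estimator (minimising the expected canonical loss) and the MMSE estimator (minimising its dual) are distinct in general.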

    Why the effective-mass approximation works so well for nano-structures

    The reason why the effective-mass approximation, derived for wave packets constructed from the wave functions of infinite periodic systems, works so well for nanoscopic structures has been an enigma and a challenge for theorists. To explain and clarify this issue, we re-derive the effective-mass approximation in the framework of the theory of finite periodic systems, i.e., using energy eigenvalues and fast-varying eigenfunctions obtained with analytical methods, where the finiteness of the number of primitive cells per layer, in the direction of growth, is a prerequisite and an essential condition. This derivation justifies and explains why the effective-mass approximation works so well for nano-structures. We also show, with explicit optical-response calculations, that the rapidly varying eigenfunctions $\Phi_{\epsilon_0,\eta_0}(z)$ of the one-band wave functions $\Psi^{\epsilon_0,\eta_0}_{\mu,\nu}(z) = \Psi^{\epsilon_0}_{\mu,\nu}(z)\,\Phi_{\epsilon_0,\eta_0}(z)$ can safely be dropped in the calculation of inter-band transition matrix elements. Comment: 6 pages

    New approach to study light-emission of periodic structures. Unveiling novel surface-states effects

    An accurate approach to calculating the optical response of periodic structures is proposed. Using the genuine superlattice eigenfunctions and energy eigenvalues, the parity symmetries of the eigenfunctions, the subband symmetries and the detached surface energy levels, we report new optical-transition selection rules and explicit optical-response calculations. Observed transitions that were considered forbidden become allowed, and interesting optical-spectra effects emerge as fingerprints of intra-subband and surface states. The unexplained groups and isolated narrow peaks observed in high-resolution blue-laser spectra by Nakamura et al. are now fully explained and faithfully reproduced. Comment: 5 pages, 6 figures

    II—Resemblance Nominalism, Conjunctions and Truthmakers

    The resemblance nominalist says that the truthmaker of 〈Socrates is white〉 ultimately involves only concrete particulars that resemble each other. Furthermore, he also says that Socrates and Plato are the truthmakers of 〈Socrates resembles Plato〉, and Socrates and Aristotle those of 〈Socrates resembles Aristotle〉. But this, combined with a principle about the truthmakers of conjunctions, leads to the apparently implausible conclusion that 〈Socrates resembles Plato and Socrates resembles Aristotle〉 and 〈Socrates resembles Plato and Plato resembles Aristotle〉 have the same truthmakers, namely Socrates, Plato and Aristotle. I shall argue that the resemblance nominalist can say that those conjunctions have the same truthmakers, but that these truthmakers make them true in different ways. I shall also use this view to account for the truthmakers of propositions like 〈Socrates is white〉, and respond to previous objections by Cian Dorr and Jessica Wilson.