    Edge-promoting reconstruction of absorption and diffusivity in optical tomography

    In optical tomography a physical body is illuminated with near-infrared light and the resulting outward photon flux is measured at the object boundary. The goal is to reconstruct internal optical properties of the body, such as absorption and diffusivity. In this work, it is assumed that the imaged object is composed of an approximately homogeneous background with clearly distinguishable embedded inhomogeneities. An algorithm for finding the maximum a posteriori estimate for the absorption and diffusion coefficients is introduced, assuming an edge-preferring prior and an additive Gaussian measurement noise model. The method is based on iteratively combining a lagged diffusivity step and a linearization of the measurement model of diffuse optical tomography with priorconditioned LSQR. The performance of the reconstruction technique is tested via three-dimensional numerical experiments with simulated measurement data. (18 pages, 6 figures)
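
    The prior-conditioned LSQR idea can be illustrated in a few lines. The following is a minimal sketch, not the authors' code: J, r, and the prior factor L are toy placeholders standing in for the linearized forward map, the measurement residual, and a lagged-diffusivity-style weighted difference operator. LSQR runs on the transformed operator J L^{-1}, and the resulting update is mapped back through L^{-1}.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, lsqr, splu

        rng = np.random.default_rng(0)
        m, n = 80, 50                        # toy sizes: measurements, unknowns
        J = rng.standard_normal((m, n))      # placeholder linearized forward map
        r = rng.standard_normal(m)           # placeholder measurement residual

        # Edge-preferring prior factor L: a weighted first-difference operator.
        # In a lagged diffusivity iteration the weights would come from the
        # previous iterate; here they are all ones.
        w = np.ones(n - 1)
        L = diags([np.ones(n), -w], [0, 1], shape=(n, n)).tocsc()
        Llu = splu(L)

        # Prior-conditioned operator A = J L^{-1}; its adjoint is L^{-T} J^T.
        A = LinearOperator(
            (m, n),
            matvec=lambda v: J @ Llu.solve(v),
            rmatvec=lambda v: Llu.solve(J.T @ v, trans='T'),
        )

        w_sol = lsqr(A, r, iter_lim=100)[0]  # LSQR in the preconditioned variable
        dx = Llu.solve(w_sol)                # update in the original variable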

    Adaptive Langevin Sampler for Separation of t-Distribution Modelled Astrophysical Maps

    We propose to model the image differentials of astrophysical source maps with Student's t-distribution and to use them as priors in a Bayesian source separation method. We introduce an efficient Markov chain Monte Carlo (MCMC) sampling scheme to unmix the astrophysical sources and describe the derivation details. In this scheme, we use the Langevin stochastic equation for transitions, which enables parallel drawing of random samples from the posterior and reduces the computation time significantly (by two orders of magnitude). In addition, the Student's t-distribution parameters are updated throughout the iterations. The results on astrophysical source separation are assessed with two performance criteria defined in the pixel and frequency domains. (12 pages, 6 figures)
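
    As an illustration of the kind of transition involved, here is a minimal sketch (hypothetical names and model, not the paper's code) of an unadjusted Langevin step for a toy posterior pairing a Gaussian likelihood with a Student's t prior on the first differences of the source map. The whole map moves in one vectorized step, which is what makes such transitions easy to parallelize.

        import numpy as np

        rng = np.random.default_rng(1)

        def grad_log_post(s, y, A, nu, lam):
            """Gradient of log p(s | y) for y ~ N(A s, I) with a Student's t
            prior (nu d.o.f., scale lam) on the differences diff(s)."""
            g_lik = A.T @ (y - A @ s)                     # likelihood term
            d = np.diff(s)
            g_d = -(nu + 1.0) * d / (nu * lam**2 + d**2)  # t log-density gradient
            g_pr = np.zeros_like(s)
            g_pr[1:] += g_d                               # chain rule through diff
            g_pr[:-1] -= g_d
            return g_lik + g_pr

        def langevin_step(s, y, A, nu=4.0, lam=1.0, eps=1e-2):
            """One unadjusted Langevin transition: drift along the gradient
            plus isotropic Gaussian noise, applied to all pixels at once."""
            noise = rng.standard_normal(s.shape)
            return s + 0.5 * eps**2 * grad_log_post(s, y, A, nu, lam) + eps * noise

        # Toy usage: denoise a piecewise-smooth signal.
        n = 64
        A = np.eye(n)
        y = np.cumsum(0.1 * rng.standard_normal(n)) + 0.1 * rng.standard_normal(n)
        s = np.zeros(n)
        for _ in range(1000):
            s = langevin_step(s, y, A)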

    Multiscale Dictionary Learning for Estimating Conditional Distributions

    Nonparametric estimation of the conditional distribution of a response given high-dimensional features is a challenging problem. It is important to allow not only the mean but also the variance and shape of the response density to change flexibly with features, which may be massive-dimensional. We propose a multiscale dictionary learning model that expresses the conditional response density as a convex combination of dictionary densities, with the densities used and their weights dependent on the path through a tree decomposition of the feature space. A fast graph partitioning algorithm is applied to obtain the tree decomposition, and Bayesian methods are then used to adaptively prune and average over different subtrees in a soft probabilistic manner. The algorithm scales efficiently to approximately one million features. State-of-the-art predictive performance is demonstrated on toy examples and two neuroscience applications involving up to a million features.
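
    To make the tree-structured mixture concrete, here is a minimal sketch (a hypothetical data structure, not the authors' implementation): the conditional density of y given x is a convex combination of the dictionary densities attached to the nodes on x's root-to-leaf path, with the weights renormalized over that path.

        import numpy as np
        from scipy.stats import norm

        class Node:
            def __init__(self, density, weight, split=None, thresh=0.0,
                         left=None, right=None):
                self.density = density  # dictionary density at this node
                self.weight = weight    # un-normalized mixing weight
                self.split = split      # feature index used to route x (None = leaf)
                self.thresh = thresh
                self.left, self.right = left, right

        def conditional_density(root, x, y):
            """p(y | x): mix the densities on x's root-to-leaf path."""
            comps, node = [], root
            while node is not None:
                comps.append((node.weight, node.density))
                if node.split is None:
                    break
                node = node.left if x[node.split] <= node.thresh else node.right
            total = sum(w for w, _ in comps)
            return sum(w * d.pdf(y) for w, d in comps) / total

        # Toy tree: a coarse density at the root, sharper ones at the leaves.
        tree = Node(norm(0, 3), 0.2, split=0, thresh=0.0,
                    left=Node(norm(-1, 1), 0.8),
                    right=Node(norm(1, 1), 0.8))
        print(conditional_density(tree, np.array([-0.5]), y=0.0))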