    Semi-supervised linear spectral unmixing using a hierarchical Bayesian model for hyperspectral imagery

    This paper proposes a hierarchical Bayesian model for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints, which are naturally expressed in a Bayesian context through appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. Finally, an extension of the algorithm is studied for mixtures with an unknown number of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data.
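    As a concrete illustration of the linear mixing model described in this abstract, the sketch below simulates one pixel under a flat Dirichlet abundance prior (uniform on the simplex, which enforces positivity and additivity by construction) and recovers the abundances with a naive projected least-squares step. The endmember library M, the band count L, and the noise level are hypothetical, and the projection step is only a stand-in for comparison; the paper's actual inference relies on a Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical endmember library: R pure component spectra over L bands.
L, R = 50, 3
M = rng.uniform(0.0, 1.0, size=(L, R))    # columns are pure spectra

# Abundances must be positive and sum to one; a flat Dirichlet draw
# (uniform on the simplex) satisfies both constraints by construction.
a = rng.dirichlet(np.ones(R))

sigma = 0.01                               # noise standard deviation
y = M @ a + rng.normal(0.0, sigma, size=L)  # observed pixel reflectance

# Naive estimate for comparison: unconstrained least squares, then
# projected back onto the simplex (not the paper's Gibbs sampler).
a_ls, *_ = np.linalg.lstsq(M, y, rcond=None)
a_hat = np.clip(a_ls, 0.0, None)
a_hat /= a_hat.sum()
print("true abundances:     ", np.round(a, 3))
print("estimated abundances:", np.round(a_hat, 3))
```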

    Importance sampling schemes for evidence approximation in mixture models

    The marginal likelihood is a central tool for drawing Bayesian inference about the number of components in mixture models. It is often approximated, since its exact form is unavailable. A bias in the approximation may arise from incomplete exploration by a simulated Markov chain (e.g., a Gibbs sequence) of the collection of posterior modes, a phenomenon also known as lack of label switching: all possible label permutations must be visited by the chain for it to converge and hence overcome the bias. In an importance sampling approach, imposing label switching on the importance function results in an exponential increase of the computational cost with the number of components. In this paper, two importance sampling schemes are proposed through choices of the importance function: an MLE proposal and a Rao-Blackwellised importance function. The second scheme is called dual importance sampling. We demonstrate that dual importance sampling yields a valid estimator of the evidence and, moreover, that the statistical efficiency of the estimates increases. To reduce the resulting computational demand, the original importance function is approximated; a suitable approximation can produce an estimate of the same precision at a reduced computational workload.
    Comment: 24 pages, 5 figures
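    To make the evidence approximation concrete, the sketch below estimates a log marginal likelihood by importance sampling with a Gaussian proposal centred at the MLE, in the spirit of the first scheme mentioned in this abstract. It uses a toy Gaussian location model rather than a mixture, chosen because its exact evidence is available in closed form for comparison; all names and values are illustrative, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy model (illustrative, not the paper's mixture): y_i ~ N(theta, 1)
# with prior theta ~ N(0, 1). The evidence p(y) has a closed form here,
# which lets us check the importance sampling estimate.
n = 20
y = rng.normal(0.5, 1.0, size=n)

# MLE-centred Gaussian importance function: proposal at the sample mean.
mle, scale = y.mean(), 1.0 / np.sqrt(n)
S = 10_000
theta = rng.normal(mle, scale, size=S)

# log p(y | theta) + log p(theta) - log q(theta), vectorised over draws.
log_lik = norm.logpdf(y[:, None], theta[None, :], 1.0).sum(axis=0)
log_w = log_lik + norm.logpdf(theta, 0.0, 1.0) - norm.logpdf(theta, mle, scale)

# Numerically stable average of the importance weights.
log_evidence = np.logaddexp.reduce(log_w) - np.log(S)

# Exact evidence: y = theta * 1 + eps implies y ~ N(0, I + 11^T).
cov = np.eye(n) + np.ones((n, n))
_, logdet = np.linalg.slogdet(cov)
exact = -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))
print(f"IS log-evidence: {log_evidence:.4f}   exact: {exact:.4f}")
```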