Bayesian Nonparametric Unmixing of Hyperspectral Images
Hyperspectral imaging is an important tool in remote sensing, allowing for
accurate analysis of vast areas. Due to the low spatial resolution, a pixel of a
hyperspectral image rarely represents a single material, but rather a mixture
of different spectra. Hyperspectral unmixing (HSU) aims to estimate the pure spectra present in the
scene of interest, referred to as endmembers, and their fractions in each
pixel, referred to as abundances. Today, many HSU algorithms have been
proposed, based on either geometrical or statistical models. While most
methods assume that the number of endmembers present in the scene is known,
there is little work on estimating this number from the observed data.
In this work, we propose a Bayesian nonparametric framework that jointly
estimates the number of endmembers, the endmembers themselves, and their
abundances, by making use of the Indian Buffet Process as a prior for the
endmembers. Simulation results and experiments on real data demonstrate the
effectiveness of the proposed algorithm, yielding results comparable with
state-of-the-art methods while being able to reliably infer the number of
endmembers. In scenarios with strong noise, where other algorithms provide only
poor results, the proposed approach tends to overestimate the number of
endmembers slightly. The additional endmembers, however, often simply represent
noisy replicas of present endmembers and could easily be merged in a
post-processing step.
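The abstract above rests on the standard linear mixing model, in which each pixel spectrum is an abundance-weighted combination of the endmember spectra. A minimal sketch of that model follows; the dimensions, noise level, and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper):
# K endmembers, B spectral bands, N pixels.
K, B, N = 3, 50, 100

E = rng.random((K, B))                 # endmember spectra, one per row
A = rng.dirichlet(np.ones(K), size=N)  # abundances: nonnegative, sum to 1 per pixel

# Linear mixing model: each observed pixel spectrum is a convex
# combination of the endmember spectra plus additive noise.
Y = A @ E + 0.01 * rng.standard_normal((N, B))
```

Unmixing works in the opposite direction: given only Y, recover E and A — and, in the nonparametric setting described above, the number of endmembers K as well.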
The supervised IBP: neighbourhood preserving infinite latent feature models
We propose a probabilistic model to infer supervised latent variables in the Hamming space from observed data. Our model allows simultaneous inference of the number of binary latent variables and their values. The latent variables preserve the neighbourhood structure of the data in the sense that objects in the same semantic concept have similar latent values, and objects in different concepts have dissimilar latent values. We formulate the supervised infinite latent variable problem based on an intuitive principle of pulling objects together if they are of the same type, and pushing them apart if they are not. We then combine this principle with a flexible Indian Buffet Process prior on the latent variables. We show that the inferred supervised latent variables can be directly used to perform a nearest neighbour search for the purpose of retrieval. We introduce a new application of dynamically extending hash codes, and show how to effectively couple the structure of the hash codes with the continuously growing structure of the neighbourhood preserving infinite latent feature space.
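The Indian Buffet Process prior, which both abstracts above place on binary latent features, is easiest to grasp through its generative "restaurant" scheme: customer i takes each existing dish with probability proportional to its popularity, then tries a Poisson-distributed number of new dishes. The sketch below implements that scheme; `sample_ibp` is a hypothetical helper name, not an API from either paper.

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    """Draw a binary feature matrix Z from the Indian Buffet Process.

    Customer i takes each previously sampled dish k with probability
    m_k / i (m_k = number of earlier customers who took it), then tries
    a Poisson(alpha / i) number of brand-new dishes.
    """
    rng = np.random.default_rng() if rng is None else rng
    rows, n_dishes = [], 0
    for i in range(1, n_customers + 1):
        counts = [sum(r[k] for r in rows) for k in range(n_dishes)]
        row = [int(rng.random() < counts[k] / i) for k in range(n_dishes)]
        new = rng.poisson(alpha / i)
        row += [1] * new            # the customer takes every new dish
        for r in rows:              # pad earlier customers with zeros
            r += [0] * new
        rows.append(row)
        n_dishes += new
    return np.array(rows, dtype=int).reshape(n_customers, n_dishes)

Z = sample_ibp(10, alpha=2.0, rng=np.random.default_rng(42))
```

The number of columns of Z (active features) is itself random, which is what lets these models infer the number of latent features rather than fix it in advance.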
Posterior Contraction Rates of the Phylogenetic Indian Buffet Processes
By expressing prior distributions as general stochastic processes,
nonparametric Bayesian methods provide a flexible way to incorporate prior
knowledge and constrain the latent structure in statistical inference. The
Indian buffet process (IBP) is such an example that can be used to define a
prior distribution on infinite binary features, where the exchangeability among
subjects is assumed. The phylogenetic Indian buffet process (pIBP), a
derivative of the IBP, enables the modeling of non-exchangeability among
subjects through a stochastic process on a rooted tree, similar to those used
in phylogenetics, that describes relationships among the subjects. In this paper, we
study the theoretical properties of IBP and pIBP under a binary factor model.
We establish the posterior contraction rates for both IBP and pIBP and
substantiate the theoretical results through simulation studies. This is the
first work addressing the frequentist properties of the posterior behavior of
IBP and pIBP. We also demonstrate their practical usefulness by applying the
pIBP prior to a real data example from the field of cancer genomics, where the
exchangeability among subjects is violated.
- …