Kernel Belief Propagation
We propose a nonparametric generalization of belief propagation, Kernel
Belief Propagation (KBP), for pairwise Markov random fields. Messages are
represented as functions in a reproducing kernel Hilbert space (RKHS), and
message updates are simple linear operations in the RKHS. KBP makes none of the
assumptions commonly required in classical BP algorithms: the variables need
not arise from a finite domain or a Gaussian distribution, nor must their
relations take any particular parametric form. Rather, the relations between
variables are represented implicitly, and are learned nonparametrically from
training data. KBP has the advantage that it may be used on any domain where
kernels are defined (R^d, strings, groups), even where explicit parametric
models are not known, or closed form expressions for the BP updates do not
exist. The computational cost of message updates in KBP is polynomial in the
training data size. We also propose a constant time approximate message update
procedure by representing messages using a small number of basis functions. In
experiments, we apply KBP to image denoising, depth prediction from still
images, and protein configuration prediction: KBP is faster than competing
classical and nonparametric approaches (by orders of magnitude, in some cases),
while providing significantly more accurate results.
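In an RKHS representation, each message can be stored as a weight vector over the n training samples, and an update reduces to a pointwise product of incoming messages followed by a matrix-vector multiplication. The following is a heavily simplified numpy sketch of that shape; the function name and the single precomputed matrix K_ts are hypothetical stand-ins for KBP's learned conditional embedding operators (which involve regularized kernel matrix inverses), so it illustrates only the linear-algebraic structure of the update, not the full method.

    import numpy as np

    def kbp_message_update(K_ts, incoming):
        """Simplified kernel BP message update from node t to node s.

        K_ts     : (n, n) kernel matrix standing in for the learned linear
                   operator between samples of x_t and samples of x_s
        incoming : list of (n,) weight vectors, the messages into t from
                   every neighbour of t other than s
        """
        # Pointwise product of incoming messages at the n training samples
        prod = np.ones(K_ts.shape[0])
        for m in incoming:
            prod = prod * m
        # The update itself is a single matrix-vector product, O(n^2)
        out = K_ts @ prod
        # Messages are defined only up to scale; normalise for stability
        return out / np.linalg.norm(out)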
A Multiscale Approach for Statistical Characterization of Functional Images
Increasingly, scientific studies yield functional image data, in which the observed data consist of sets of curves recorded on the pixels of the image. Examples include temporal brain response intensities measured by fMRI and NMR frequency spectra measured at each pixel. This article presents a new methodology for improving the characterization of pixels in functional imaging, formulated as a spatial curve clustering problem. Our method operates on curves as a unit. It is nonparametric and involves multiple stages: (i) wavelet thresholding, aggregation, and Neyman truncation to effectively reduce dimensionality; (ii) clustering based on an extended EM algorithm; and (iii) multiscale penalized dyadic partitioning to create a spatial segmentation. We motivate the different stages with theoretical considerations and arguments, and illustrate the overall procedure on simulated and real datasets. Our method appears to offer substantial improvements over monoscale pixel-wise methods. An Appendix giving theoretical justifications of the methodology, together with computer code, documentation, and the dataset, is available in the online supplements.
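To make stages (i) and (ii) concrete, here is a minimal Python sketch (using the pywt and scikit-learn packages, which are not part of the article): each pixel's curve is wavelet-transformed and hard-thresholded with the universal threshold as a rough stand-in for the paper's thresholding/aggregation/Neyman-truncation step, and the reduced curves are clustered with an EM-fitted Gaussian mixture. Stage (iii), the multiscale spatial segmentation, is omitted.

    import numpy as np
    import pywt
    from sklearn.mixture import GaussianMixture

    def cluster_pixel_curves(curves, n_clusters=3, wavelet="db4", level=3):
        """Stages (i)-(ii) only: threshold wavelet coefficients of each
        per-pixel curve, then cluster the reduced curves by EM.

        curves : (n_pixels, n_timepoints) array, one curve per pixel
        """
        feats = []
        for c in curves:
            coeffs = pywt.wavedec(np.asarray(c, float), wavelet, level=level)
            flat = np.concatenate(coeffs)
            # Universal hard threshold: a crude stand-in for the paper's
            # thresholding / aggregation / Neyman truncation step
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(flat.size))
            flat[np.abs(flat) < thr] = 0.0
            feats.append(flat)
        # Stage (ii): Gaussian mixture fit by EM on the reduced curves
        gm = GaussianMixture(n_components=n_clusters, covariance_type="diag")
        return gm.fit_predict(np.asarray(feats))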
Hyperspectral image unmixing using a multiresolution sticky HDP
This paper is concerned with joint Bayesian endmember extraction and linear unmixing of hyperspectral images using a spatial prior on the abundance vectors. We propose a generative model for hyperspectral images in which the abundances are sampled from a Dirichlet distribution (DD) mixture model, whose parameters depend on a latent label process. The label process is then used to enforce a spatial prior which encourages adjacent pixels to have the same label. A Gibbs sampling framework is used to generate samples from the posterior distributions of the abundances and the parameters of the DD mixture model. The spatial prior that is used is a tree-structured sticky hierarchical Dirichlet process (SHDP) and, when used to determine the posterior endmember and abundance distributions, results in a new unmixing algorithm called spatially constrained unmixing (SCU). The directed Markov model facilitates the use of scale-recursive estimation algorithms, and is therefore more computationally efficient than standard Markov random field (MRF) models. Furthermore, the proposed SCU algorithm estimates the number of regions in the image in an unsupervised fashion. The effectiveness of the proposed SCU algorithm is illustrated using synthetic and real data.
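As a reading aid, the following sketch forward-samples from a simplified version of the generative model described above: each pixel's abundance vector is drawn from a Dirichlet whose parameters depend on its latent region label, and the observed spectrum is a noisy linear mixture of the endmembers. The tree-structured sticky HDP prior on the labels and the Gibbs sampler itself are not shown; the function name and argument layout are illustrative assumptions.

    import numpy as np

    def sample_hyperspectral_image(E, labels, alphas, sigma=0.01, seed=0):
        """Forward-sample a simplified version of the generative model.

        E      : (n_bands, n_endmembers) endmember spectra
        labels : (n_pixels,) latent region label per pixel
        alphas : (n_regions, n_endmembers) Dirichlet parameters per region
        """
        rng = np.random.default_rng(seed)
        # Abundances: one Dirichlet draw per pixel, tied through its label
        A = np.stack([rng.dirichlet(alphas[k]) for k in labels])
        # Observations: linear mixture of endmembers plus Gaussian noise
        Y = A @ E.T + sigma * rng.standard_normal((len(labels), E.shape[0]))
        return Y, A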
WARP: Wavelets with adaptive recursive partitioning for multi-dimensional data
Effective identification of asymmetric and local features in images and other
data observed on multi-dimensional grids plays a critical role in a wide range
of applications including biomedical and natural image processing. Moreover,
the ever-increasing amount of image data, in terms of both the resolution per
image and the number of images processed per application, requires algorithms
and methods for such applications to be computationally efficient. We develop a
new probabilistic framework for multi-dimensional data to overcome these
challenges through incorporating data adaptivity into discrete wavelet
transforms, thereby allowing them to adapt to the geometric structure of the
data while maintaining linear computational scalability. By exploiting a
connection between the local directionality of wavelet transforms and recursive
dyadic partitioning on the grid points of the observation, we obtain the
desired adaptivity through adding to the traditional Bayesian wavelet
regression framework an additional layer of Bayesian modeling on the space of
recursive partitions over the grid points. We derive the corresponding
inference recipe in the form of a recursive representation of the exact
posterior, and develop a class of efficient recursive message passing
algorithms for achieving exact Bayesian inference with a computational
complexity linear in the resolution and sample size of the images. While our
framework is applicable to a range of problems including multi-dimensional
signal processing, compression, and structural learning, we illustrate how it works
and evaluate its performance in the context of 2D and 3D image reconstruction
using real images from the ImageNet database. We also apply the framework to
analyze a dataset from retinal optical coherence tomography.
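The recursive representation of the exact posterior can be illustrated with a toy one-dimensional analogue: the marginal likelihood of a block of data is a mixture of "stop" (model the block as one unit) and "split at the dyadic midpoint" (recurse on the two halves), and since each dyadic block is visited once, the recursion scales linearly. The sketch below uses a conjugate Gaussian-mean evidence in place of WARP's wavelet-coefficient models, so it mirrors only the divide-and-conquer structure of the framework, not the framework itself.

    import numpy as np

    def dyadic_marginal(y, rho=0.5, sigma=1.0, tau=1.0):
        """Toy 1-D recursion: log marginal likelihood of y as a mixture of
        'stop' (one Gaussian mean for the block, prior weight rho) and
        'split at the midpoint' (recurse on the two halves).
        """
        y = np.asarray(y, float)

        def stop_evidence(a, b):
            # Conjugate evidence for y[a:b] with mean ~ N(0, tau^2) and
            # iid N(0, sigma^2) observation noise
            seg = y[a:b]
            n = len(seg)
            v = tau**2 / (1.0 + n * tau**2 / sigma**2)
            return (-0.5 * np.sum(seg**2) / sigma**2
                    + 0.5 * (np.sum(seg) / sigma**2) ** 2 * v
                    + 0.5 * np.log(v / tau**2)
                    - 0.5 * n * np.log(2.0 * np.pi * sigma**2))

        def phi(a, b):
            if b - a == 1:
                return stop_evidence(a, b)
            mid = (a + b) // 2
            stop = np.log(rho) + stop_evidence(a, b)
            split = np.log(1.0 - rho) + phi(a, mid) + phi(mid, b)
            # Each block is visited exactly once, so the total cost is
            # linear in the length of the signal
            return np.logaddexp(stop, split)

        return phi(0, len(y))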
Efficient Variational Inference for Hierarchical Models of Images, Text, and Networks
Variational inference provides a general optimization framework to approximate the posterior distributions of latent variables in probabilistic models. Although effective in simple scenarios, variational inference may be inaccurate or infeasible when the data is high-dimensional, the model structure is complicated, or variable relationships are non-conjugate. We propose solutions to these problems through the smart design and leverage of model structures, the rigorous derivation of variational bounds, and the creation of flexible algorithms for various models with rich, non-conjugate dependencies. Concretely, we first design an interpretable generative model for natural images, in which the hundreds of thousands of pixels per image are split into small patches represented by Gaussian mixture models. Through structured variational inference, the evidence lower bound of this model automatically recovers the popular expected patch log-likelihood method for image processing. A nonparametric extension using hierarchical Dirichlet processes further enables self-similarities to be captured and image-specific clusters created during inference, boosting image denoising and inpainting accuracy. Then we move on to text data, and design hierarchical topic graphs that generalize the bipartite noisy-OR models previously used for medical diagnosis. We derive auxiliary bounds to overcome the non-conjugacy of noisy-OR conditionals, and use stochastic variational inference to efficiently train on datasets with hundreds of thousands of documents. We dramatically increase the algorithm speed through a constrained family of variational bounds, so that only the ancestors of the sparse observed tokens of each document need to be considered. Finally, we propose a general-purpose Monte Carlo variational inference strategy that is directly applicable to any model with discrete variables. Compared to REINFORCE-style stochastic gradient updates, our coordinate-ascent updates have lower variance and converge much faster. Compared to auxiliary-variable bounds crafted for each individual model, our algorithm is simpler to derive and may be easily integrated into probabilistic programming languages for broader use. By avoiding auxiliary variables, we also tighten likelihood bounds and increase robustness to local optima. Extensive experiments on real-world models of images, text, and networks illustrate these appealing advantages.
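For intuition on the coordinate-ascent claim, the snippet below is a minimal mean-field loop for a toy pairwise model over binary variables: each factor update q_i(z_i) proportional to exp(E_q[-i][log p(z)]) is exact given the other factors, so, unlike REINFORCE-style stochastic gradients, it introduces no sampling variance. This is only a toy stand-in for the general-purpose discrete-variable algorithm the thesis describes, with hypothetical names throughout.

    import numpy as np

    def mean_field_discrete(log_unary, log_pair, n_iters=50):
        """Coordinate-ascent mean field for a toy pairwise binary model.

        log_unary : (n, 2) log unary potentials
        log_pair  : (n, n, 2, 2) log pairwise potentials (zeros if absent)
        """
        n = log_unary.shape[0]
        q = np.full((n, 2), 0.5)
        for _ in range(n_iters):
            for i in range(n):
                # Expected log-potential under the other factors; the
                # update is exact, so there is no sampling variance
                s = log_unary[i].copy()
                for j in range(n):
                    if j != i:
                        s = s + log_pair[i, j] @ q[j]
                q[i] = np.exp(s - s.max())
                q[i] /= q[i].sum()
        return q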
A graph-based signal processing approach for low-rate energy disaggregation
Graph-based signal processing (GSP) is an emerging field based on representing a dataset as a discrete signal indexed by a graph. Inspired by the recent success of GSP in image processing and signal filtering, in this paper we demonstrate how GSP can be applied to non-intrusive appliance load monitoring (NALM), exploiting the smoothness of appliance load signatures. NALM refers to disaggregating the total energy consumption of a house down to the individual appliances used. At low sampling rates, on the order of minutes, NALM is a difficult problem, due to significant random noise, an unknown base load, many household appliances with similar power signatures, and the fact that most domestic appliances (for example, microwave, toaster) have a usual operation of just over a minute. In this paper, we propose an NALM approach that differs from more traditional approaches by representing the dataset of active power signatures as a graph signal. We develop a regularization-on-graph approach in which, by maximizing the smoothness of the underlying graph signal, we are able to perform disaggregation. Simulation results using the publicly available REDD dataset demonstrate the potential of GSP for energy disaggregation and show competitive performance with respect to more complex Hidden Markov Model-based approaches.
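The smoothness-regularization step has a standard closed form worth stating: minimizing ||s - y||^2 + lam * s^T L s over graph signals s yields the linear system (I + lam*L) s = y, with L the combinatorial graph Laplacian. A minimal numpy sketch follows; how the weight matrix W is built from the power measurements (the NALM-specific part of the paper) is assumed rather than shown.

    import numpy as np

    def smooth_graph_signal(y, W, lam=1.0):
        """Recover a graph-smooth signal from noisy observations y by
        minimising ||s - y||^2 + lam * s^T L s; the closed-form solution
        solves (I + lam * L) s = y.

        W : (n, n) symmetric nonnegative weight matrix of the graph
        """
        L = np.diag(W.sum(axis=1)) - W      # combinatorial Laplacian
        return np.linalg.solve(np.eye(len(y)) + lam * L, y)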