Variational Bayesian image restoration with group-sparse modeling of wavelet coefficients
In this work, we present a wavelet-based image restoration framework based on a group-sparse Gaussian scale mixture model. A hierarchical Bayesian estimate is derived using a combination of variational Bayesian inference and a subband-adaptive majorization-minimization method that simplifies computation of the posterior distribution. We show that both of these iterative methods can converge together without needing nested loops, and thus good solutions can be found rapidly in the non-convex search space. We also integrate our method, variational Bayesian with majorization-minimization (VBMM), with tree-structured modeling of the wavelet coefficients. This extension achieves significant gains in performance over the coefficient-sparse version of the algorithm. The experimental results demonstrate that the proposed method and its tree-structured extensions are effective for various imaging applications such as image deconvolution, image superresolution and compressive sensing magnetic resonance imaging (MRI) reconstruction, and that they outperform more conventional sparsity-inducing methods based on the ℓ1-norm. This is the author accepted manuscript. The final version is available from Elsevier at http://www.sciencedirect.com/science/article/pii/S1051200415001438
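The abstract does not give the algorithm's details, but the core majorization-minimization idea it relies on can be illustrated on a toy problem. The sketch below (an assumption for illustration, not the paper's VBMM method) majorizes an ℓ1 penalty by a quadratic surrogate at the current iterate and minimizes the surrogate in closed form, converging to the soft-thresholding solution of a denoising problem:

```python
import numpy as np

def mm_l1_denoise(y, lam, n_iter=100, eps=1e-8):
    """Minimize 0.5*||y - x||^2 + lam*||x||_1 by majorization-minimization.

    At iterate x_k, |x| is majorized by x^2 / (2|x_k|) + |x_k|/2; minimizing
    the resulting quadratic surrogate gives a simple closed-form update.
    """
    x = y.astype(float).copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)   # curvature of the quadratic majorizer
        x = y / (1.0 + lam * w)       # exact minimizer of the surrogate
    return x

y = np.array([3.0, 0.5, -2.0, 0.1])
x = mm_l1_denoise(y, lam=1.0)
# x approaches the soft-threshold sign(y) * max(|y| - lam, 0)
```

In the paper's setting the same majorize-then-minimize step is applied per wavelet subband inside a variational Bayesian loop; this toy version only shows why each MM update is cheap and monotone.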
Nonparametric Bayesian Mixed-effect Model: a Sparse Gaussian Process Approach
Multi-task learning models using Gaussian processes (GP) have been developed
and successfully applied in various applications. The main difficulty with this
approach is the computational cost of inference using the union of examples
from all tasks. Therefore sparse solutions, that avoid using the entire data
directly and instead use a set of informative "representatives" are desirable.
The paper investigates this problem for the grouped mixed-effect GP model where
each individual response is given by a fixed-effect, taken from one of a set of
unknown groups, plus a random individual effect function that captures
variations among individuals. Such models have been widely used in previous
work but no sparse solutions have been developed. The paper presents the first
sparse solution for such problems, showing how the sparse approximation can be
obtained by maximizing a variational lower bound on the marginal likelihood,
generalizing ideas from single-task Gaussian processes to handle the
mixed-effect model as well as grouping. Experiments using artificial and real
data validate the approach showing that it can recover the performance of
inference with the full sample, that it outperforms baseline methods, and that
it outperforms state-of-the-art sparse solutions for other multi-task GP
formulations. Comment: Preliminary version appeared in ECML201
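The sparse approximation described above is obtained by maximizing a variational lower bound on the marginal likelihood. As a minimal single-task illustration (my own sketch of the standard collapsed inducing-point bound, not the paper's grouped mixed-effect extension), the bound can be computed from the kernel matrices alone:

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    # Squared-exponential kernel on 1-D inputs
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls ** 2)

def exact_log_marginal(X, y, noise=0.1):
    # Full-sample GP log marginal likelihood, for reference
    n = len(X)
    C = rbf(X, X) + noise * np.eye(n)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

def sparse_lower_bound(X, y, Z, noise=0.1):
    """Collapsed variational lower bound with inducing inputs Z."""
    n, m = len(X), len(Z)
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(m)        # jitter for stability
    Knm = rbf(X, Z)
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)   # Nystrom approximation of Knn
    C = Qnn + noise * np.eye(n)
    _, logdet = np.linalg.slogdet(C)
    ll = -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))
    # Trace term penalizes variance the inducing points fail to capture
    return ll - 0.5 / noise * (n * 1.0 - np.trace(Qnn))  # diag(Knn) = 1 here

X = np.linspace(0.0, 5.0, 20)
y = np.sin(X)
Z = X[::4]  # 5 "representatives" instead of all 20 points
bound = sparse_lower_bound(X, y, Z)  # always <= exact_log_marginal(X, y)
```

Maximizing this bound over the inducing inputs (and kernel parameters) selects the informative representatives; the paper generalizes this construction to grouped fixed effects plus individual random-effect functions.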
Deep Exponential Families
We describe \textit{deep exponential families} (DEFs), a class of latent
variable models that are inspired by the hidden structures used in deep neural
networks. DEFs capture a hierarchy of dependencies between latent variables,
and are easily generalized to many settings through exponential families. We
perform inference using recent "black box" variational inference techniques. We
then evaluate various DEFs on text and combine multiple DEFs into a model for
pairwise recommendation data. In an extensive study, we show that going beyond
one layer improves predictions for DEFs. We demonstrate that DEFs find
interesting exploratory structure in large data sets, and give better
predictive performance than state-of-the-art models.
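The generative structure of a DEF can be sketched concretely. The example below draws from a hypothetical two-layer gamma DEF with Poisson observations (one common instantiation; the specific shapes, sizes, and the way each layer parameterizes the one below are illustrative assumptions, not the paper's exact specification):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gamma_def(n_docs=5, vocab=8, k1=3, k2=2):
    """Draw word counts from a toy two-layer gamma deep exponential family."""
    # Top layer: gamma latent variables per document
    z2 = rng.gamma(shape=0.5, scale=1.0, size=(n_docs, k2))
    # Weights linking layer 2 to layer 1 (gamma-distributed, hence nonnegative)
    W2 = rng.gamma(shape=0.5, scale=1.0, size=(k2, k1))
    # Layer 1: gamma latents whose scale depends on the layer above,
    # giving the hierarchy of dependencies between latent variables
    z1 = rng.gamma(shape=0.5, scale=z2 @ W2)
    # Observation weights mapping layer-1 latents to the vocabulary
    W1 = rng.gamma(shape=0.5, scale=1.0, size=(k1, vocab))
    # Observations: Poisson word counts with rate from the bottom layer
    return rng.poisson(z1 @ W1)

counts = sample_gamma_def()  # (n_docs, vocab) matrix of nonnegative counts
```

Each layer is an exponential-family draw whose natural parameter is a function of the layer above, which is what makes the construction easy to generalize across families; inference over such a model is what the "black box" variational techniques mentioned above handle.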