Scalable transformed additive signal decomposition by non-conjugate Gaussian process inference
Many functions and signals of interest are formed by the addition of multiple underlying components, often nonlinearly transformed and modified by noise. Examples may be found in the literature on Generalized Additive Models [1], Underdetermined Source Separation [2], and other mode decomposition techniques. Recovery of the underlying component processes often depends on finding and exploiting statistical regularities within them. Gaussian Processes (GPs) [3] have become the dominant way to model statistical expectations over functions. Recent advances make inference of the GP posterior efficient for large-scale datasets and arbitrary likelihoods [4,5]. Here we extend these methods to the additive GP case [6,7], thus achieving scalable marginal posterior inference over each latent function in settings such as those above.
Bayesian Semi-supervised Learning with Graph Gaussian Processes
We propose a data-efficient Gaussian process-based Bayesian approach to the
semi-supervised learning problem on graphs. The proposed model shows extremely
competitive performance when compared to the state-of-the-art graph neural
networks on semi-supervised learning benchmark experiments, and outperforms the
neural networks in active learning experiments where labels are scarce.
Furthermore, the model does not require a validation data set for early
stopping to control over-fitting. Our model can be viewed as an instance of
empirical distribution regression weighted locally by network connectivity. We
further motivate the intuitive construction of the model with a Bayesian linear
model interpretation where the node features are filtered by an operator
related to the graph Laplacian. The method can be easily implemented by
adapting off-the-shelf scalable variational inference algorithms for Gaussian
processes.
Comment: To appear in NIPS 2018. Fixed an error in Figure 2; the previous arXiv version contained two identical sub-figures.
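The "node features filtered by an operator related to the graph Laplacian" construction can be illustrated with a small sketch: propagate features over the graph with a random-walk operator, then fit a closed-form Bayesian linear model on the filtered features. The graph, labels, and hyperparameters below are hypothetical toy choices, and the linear model stands in for the full GP treatment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: two triangles joined by a single edge (hypothetical example).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)
P = np.diag(1.0 / deg) @ A  # random-walk operator, P = I - L_rw

X = rng.normal(size=(6, 3))  # raw node features (toy)
X_filt = P @ P @ X           # two hops of Laplacian-related smoothing

# Bayesian linear model on filtered features: weight prior N(0, I),
# Gaussian likelihood with variance `noise`; posterior mean in closed form.
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # node labels (toy)
noise = 0.1
S_inv = X_filt.T @ X_filt / noise + np.eye(3)
w_mean = np.linalg.solve(S_inv, X_filt.T @ y / noise)
pred = X_filt @ w_mean  # posterior-mean predictions per node
```

Filtering before regression means each node's prediction is weighted locally by network connectivity, which mirrors the "empirical distribution regression" reading of the model in the abstract.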
Sparse Gaussian Process Audio Source Separation Using Spectrum Priors in the Time-Domain
Gaussian process (GP) audio source separation is a time-domain approach that
circumvents the inherent phase approximation issue of spectrogram based
methods. Furthermore, through its kernel, GPs elegantly incorporate prior
knowledge about the sources into the separation model. Despite these compelling
advantages, the computational complexity of GP inference scales cubically with
the number of audio samples. As a result, source separation GP models have been
restricted to the analysis of short audio frames. We introduce an efficient
application of GPs to time-domain audio source separation, without compromising
performance. For this purpose, we used GP regression, together with spectral
mixture kernels, and variational sparse GPs. We compared our method with
LD-PSDTF (positive semi-definite tensor factorization), KL-NMF
(Kullback-Leibler non-negative matrix factorization), and IS-NMF (Itakura-Saito
NMF). Results show that the proposed method outperforms these techniques.
Comment: Paper submitted to the 44th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2019), to be held in Brighton, United Kingdom, May 12-17, 2019.