39,992 research outputs found

    Oscillating Gaussian Processes

    In this article we introduce and study oscillating Gaussian processes defined by $X_t = \alpha_+ Y_t \mathbf{1}_{Y_t > 0} + \alpha_- Y_t \mathbf{1}_{Y_t < 0}$, where $\alpha_+, \alpha_- > 0$ are free parameters and $Y$ is either a stationary or a self-similar Gaussian process. We study the basic properties of $X$ and consider estimation of the model parameters. In particular, we show that the moment estimators converge in $L^p$ and, when suitably normalised, are asymptotically normal.
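    A minimal simulation sketch of the model just defined: the squared-exponential covariance for $Y$, the grid, and the unit-variance assumption behind the moment estimator for $\alpha_+$ are illustrative choices here, not the paper's construction.

```python
import numpy as np

def oscillating_gp(alpha_plus, alpha_minus, cov, rng):
    """Sample X_t = alpha_+ Y_t 1{Y_t > 0} + alpha_- Y_t 1{Y_t < 0}
    from a zero-mean Gaussian vector Y with covariance matrix cov."""
    y = rng.multivariate_normal(np.zeros(cov.shape[0]), cov)
    return np.where(y > 0, alpha_plus * y, alpha_minus * y)

# Illustrative stationary Y: squared-exponential covariance on a grid.
t = np.linspace(0.0, 50.0, 1000)
cov = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2) + 1e-8 * np.eye(len(t))
rng = np.random.default_rng(0)
x = oscillating_gp(2.0, 0.5, cov, rng)

# Moment-type estimator for alpha_+ (assumes Var(Y_t) = 1 is known):
# E[X_t 1{X_t > 0}] = alpha_+ * E[Y_t 1{Y_t > 0}] = alpha_+ / sqrt(2 * pi).
alpha_plus_hat = np.sqrt(2.0 * np.pi) * np.mean(np.maximum(x, 0.0))
```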

    Deep Gaussian Processes

    In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
    Comment: 9 pages, 8 figures. Appearing in Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS) 2013.
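    A toy illustration of the generative composition described above, with one GP's output feeding the next GP's input: this samples from a two-layer deep GP prior and does not reproduce the paper's variational inference. The RBF covariance and all settings are assumptions for the sketch.

```python
import numpy as np

def rbf_cov(x, lengthscale=1.0, variance=1.0, jitter=1e-8):
    """Squared-exponential covariance matrix for 1-D inputs x."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2) + jitter * np.eye(len(x))

def sample_gp(x, rng):
    """One draw from a zero-mean GP prior evaluated at inputs x."""
    return rng.multivariate_normal(np.zeros(len(x)), rbf_cov(x))

rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 200)
h = sample_gp(x, rng)  # hidden layer: a GP draw indexed by the observed inputs
y = sample_gp(h, rng)  # output layer: a GP draw indexed by the hidden values
```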

    Distributed Gaussian Processes

    To scale Gaussian processes (GPs) to large data sets we introduce the robust Bayesian Committee Machine (rBCM), a practical and scalable product-of-experts model for large-scale distributed GP regression. Unlike state-of-the-art sparse GP approximations, the rBCM is conceptually simple and does not rely on inducing or variational parameters. The key idea is to recursively distribute computations to independent computational units and, subsequently, recombine them to form an overall result. Efficient closed-form inference allows for straightforward parallelisation and distributed computations with a small memory footprint. The rBCM is independent of the computational graph and can be used on heterogeneous computing infrastructures, ranging from laptops to clusters. With sufficient computing resources our distributed GP model can handle arbitrarily large data sets.
    Comment: 10 pages, 5 figures. Appears in Proceedings of ICML 2015.
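    The recombination step can be sketched as below. The differential-entropy weights and the prior-variance correction follow the rBCM equations as commonly cited, which this abstract does not spell out, so treat the exact formulas as an assumption.

```python
import numpy as np

def rbcm_combine(mu, var, prior_var):
    """Recombine per-expert GP predictions at test points (rBCM-style).

    mu, var: shape (n_experts, n_test) - each expert's predictive mean/variance.
    prior_var: shape (n_test,) - prior predictive variance at the test points.
    """
    # Differential-entropy weights: how much each expert shrinks the prior variance.
    beta = 0.5 * (np.log(prior_var) - np.log(var))
    # Combined precision; the correction term recovers the prior
    # when all experts are uninformative.
    prec = np.sum(beta / var, axis=0) + (1.0 - np.sum(beta, axis=0)) / prior_var
    var_c = 1.0 / prec
    mu_c = var_c * np.sum(beta * mu / var, axis=0)
    return mu_c, var_c

# Toy usage: two experts predicting at a single test point.
mu_c, var_c = rbcm_combine(np.array([[0.9], [1.1]]),
                           np.array([[0.2], [0.3]]),
                           np.array([1.0]))
```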

    Extremes of Independent Gaussian Processes

    For every $n \in \mathbb{N}$, let $X_{1n}, \ldots, X_{nn}$ be independent copies of a zero-mean Gaussian process $X_n = \{X_n(t),\, t \in T\}$. We describe all processes which can be obtained as limits, as $n \to \infty$, of the process $a_n(M_n - b_n)$, where $M_n(t) = \max_{i=1,\ldots,n} X_{in}(t)$ and $a_n, b_n$ are normalizing constants. We also provide an analogous characterization for the limits of the process $a_n L_n$, where $L_n(t) = \min_{i=1,\ldots,n} |X_{in}(t)|$.
    Comment: 19 pages.
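    A quick Monte Carlo illustration of the normalised maxima $a_n(M_n - b_n)$: the paper's setting is a general triangular array, while this sketch fixes one stationary covariance and uses the classical constants for standard normal marginals, both illustrative assumptions.

```python
import numpy as np

def pointwise_maxima(n, cov, rng):
    """M_n(t): pointwise maximum of n independent zero-mean Gaussian paths."""
    paths = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=n)
    return paths.max(axis=0)

t = np.linspace(0.0, 1.0, 100)
cov = np.exp(-np.abs(t[:, None] - t[None, :]))  # illustrative stationary covariance
rng = np.random.default_rng(2)
n = 10_000
m_n = pointwise_maxima(n, cov, rng)

# Classical normalizing constants for maxima of standard normal marginals,
# so that a_n * (M_n(t) - b_n) is approximately Gumbel at each fixed t.
a_n = np.sqrt(2.0 * np.log(n))
b_n = a_n - (np.log(np.log(n)) + np.log(4.0 * np.pi)) / (2.0 * a_n)
z = a_n * (m_n - b_n)
```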