8 research outputs found

    Matérn Gaussian Processes on Graphs.

    Gaussian processes are a versatile framework for learning unknown functions in a manner that permits one to utilize prior information about their properties. Although many different Gaussian process models are readily available when the input space is Euclidean, the choice is much more limited for Gaussian processes whose input space is an undirected graph. In this work, we leverage the stochastic partial differential equation characterization of Matérn Gaussian processes—a widely used model class in the Euclidean setting—to study their analog for undirected graphs. We show that the resulting Gaussian processes inherit various attractive properties of their Euclidean and Riemannian analogs and provide techniques that allow them to be trained using standard methods, such as inducing points. This enables graph Matérn Gaussian processes to be employed in mini-batch and non-conjugate settings, thereby making them more accessible to practitioners and easier to deploy within larger learning frameworks.
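    As a rough illustration of the kernels this construction yields, the sketch below builds a graph Matérn covariance over the nodes of a toy four-node graph from its Laplacian, using the matrix-function form K ∝ (2ν/κ² · I + Δ)^(−ν) associated with the SPDE characterization. The graph, the parameter values, and the final normalization step are our own assumptions for illustration, not taken from the paper.

        import numpy as np

        # Toy undirected graph (assumed for illustration), given by its adjacency matrix.
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
        Delta = np.diag(A.sum(axis=1)) - A   # combinatorial graph Laplacian

        nu, kappa, sigma2 = 1.5, 1.0, 1.0    # smoothness, length scale, variance (assumed values)
        lam, U = np.linalg.eigh(Delta)       # eigendecomposition of the Laplacian

        # Apply the matrix function (2*nu/kappa**2 + Delta)**(-nu) in the eigenbasis.
        g = (2.0 * nu / kappa**2 + lam) ** (-nu)
        K = sigma2 * (U * g) @ U.T           # graph Matérn covariance between all node pairs

        K /= K.diagonal().mean()             # normalize to unit average variance (a common convention)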

    Water hammer with column separation: A historical review

    Column separation refers to the breaking of liquid columns in fully filled pipelines. This may occur in a water-hammer event when the pressure in a pipeline drops to the vapor pressure at specific locations such as closed ends, high points or knees (changes in pipe slope). The liquid columns are separated by a vapor cavity that grows and diminishes according to the dynamics of the system. The collision of two liquid columns, or of one liquid column with a closed end, may cause a large and nearly instantaneous rise in pressure. This pressure rise travels through the entire pipeline and forms a severe load for hydraulic machinery, individual pipes and supporting structures. The situation is even worse: in one water-hammer event many repetitions of cavity formation and collapse may occur. This paper reviews water hammer with column separation from the discovery of the phenomenon in the late 19th century, the recognition of its danger in the 1930s, the development of numerical methods in the 1960s and 1970s, to the standard models used in commercial software packages in the late 20th century. A comprehensive survey of laboratory tests and field measurements is given. The review focuses on transient vaporous cavitation. Gaseous cavitation and steam condensation are beyond the scope of the paper. © 2005 Elsevier Ltd. All rights reserved.
    A. Bergant, A.R. Simpson, and A.S. Tijsseling
    http://www.elsevier.com/wps/find/journaldescription.cws_home/622877/description#descriptio

    Matérn Gaussian processes on Riemannian manifolds

    Gaussian processes are an effective model class for learning unknown functions, particularly in settings where accurately representing predictive uncertainty is of key importance. Motivated by applications in the physical sciences, the widely used Matérn class of Gaussian processes has recently been generalized to model functions whose domains are Riemannian manifolds, by re-expressing said processes as solutions of stochastic partial differential equations. In this work, we propose techniques for computing the kernels of these processes via spectral theory of the Laplace–Beltrami operator in a fully constructive manner, thereby allowing them to be trained via standard scalable techniques such as inducing point methods. We also extend the generalization from the Matérn to the widely used squared exponential Gaussian process. By allowing Riemannian Matérn Gaussian processes to be trained using well-understood techniques, our work enables their use in mini-batch, online, and non-conjugate settings, and makes them more accessible to machine learning practitioners.
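    For reference, the constructive spectral form such techniques compute can be written schematically as follows, where (λ_n, f_n) are the eigenvalue–eigenfunction pairs of the Laplace–Beltrami operator on a compact d-dimensional manifold and C_ν is a normalizing constant; the notation here is ours, and in practice the sum is truncated to finitely many terms.

        k_\nu(x, x') = \frac{\sigma^2}{C_\nu} \sum_{n=0}^{\infty} \left( \frac{2\nu}{\kappa^2} + \lambda_n \right)^{-\nu - d/2} f_n(x)\, f_n(x')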

    Efficiently sampling functions from Gaussian process posteriors

    Gaussian processes are the gold standard for many real-world modeling problems, especially in cases where a model’s success hinges upon its ability to faithfully represent predictive uncertainty. These problems typically exist as parts of larger frameworks, wherein quantities of interest are ultimately defined by integrating over posterior distributions. These quantities are frequently intractable, motivating the use of Monte Carlo methods. Despite substantial progress in scaling up Gaussian processes to large training sets, methods for accurately generating draws from their posterior distributions still scale cubically in the number of test locations. We identify a decomposition of Gaussian processes that naturally lends itself to scalable sampling by separating out the prior from the data. Building on this factorization, we propose an easy-to-use and general-purpose approach for fast posterior sampling, which seamlessly pairs with sparse approximations to afford scalability both during training and at test time. In a series of experiments designed to test competing sampling schemes’ statistical properties and practical ramifications, we demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
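    The decomposition in question expresses a posterior sample path as a prior sample plus a data-dependent update (Matheron's rule). The sketch below is a minimal illustration under our own assumptions: a squared exponential kernel, a random Fourier feature approximation of the prior, and toy one-dimensional data; all names and parameter values here are ours, not the paper's.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy training data (assumed for illustration).
        X = rng.uniform(-3, 3, size=(20, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
        noise = 0.1 ** 2
        lengthscale = 1.0

        def k(A, B):
            # Squared exponential kernel between two sets of inputs.
            d = A[:, None, :] - B[None, :, :]
            return np.exp(-0.5 * (d ** 2).sum(-1) / lengthscale ** 2)

        # Random Fourier feature approximation of the same kernel.
        L = 1000
        W = rng.standard_normal((L, 1)) / lengthscale   # spectral frequencies
        b = rng.uniform(0, 2 * np.pi, L)                # random phases

        def phi(A):
            return np.sqrt(2.0 / L) * np.cos(A @ W.T + b)

        # 1) Draw an approximate prior sample f(.) = phi(.) @ w with w ~ N(0, I).
        w = rng.standard_normal(L)
        f_prior = lambda A: phi(A) @ w

        # 2) Pathwise update (Matheron's rule): condition the sampled path on the data.
        eps = np.sqrt(noise) * rng.standard_normal(len(y))
        v = np.linalg.solve(k(X, X) + noise * np.eye(len(y)),
                            y - f_prior(X) - eps)
        f_post = lambda A: f_prior(A) + k(A, X) @ v

        Xs = np.linspace(-3, 3, 200)[:, None]
        sample = f_post(Xs)   # one posterior sample path, linear cost per test point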

    Pathwise Conditioning of Gaussian Processes

    As Gaussian processes are used to answer increasingly complex questions, analytic solutions become scarcer and scarcer. Monte Carlo methods act as a convenient bridge for connecting intractable mathematical expressions with actionable estimates via sampling. Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations. This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector. These methods are prohibitively expensive in cases where we would, ideally, like to draw high-dimensional vectors or even continuous sample paths. In this work, we investigate a different line of reasoning: rather than focusing on distributions, we articulate Gaussian conditionals at the level of random variables. We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors. Starting from first principles, we derive these methods and analyze the approximation errors they introduce. We then ground these results by exploring the practical implications of pathwise conditioning in various applied settings, such as global optimization and reinforcement learning.
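    The random-variable-level statement at the heart of this view is Matheron's rule: for jointly Gaussian random vectors a and b, schematically and in our notation,

        (a \mid b = \beta) \overset{d}{=} a + \Sigma_{ab} \Sigma_{bb}^{-1} (\beta - b)

    Applied pathwise to a Gaussian process prior f with kernel k and noisy observations y = f(X) + ε, this turns a prior sample into a posterior sample via the update f(·) + k(·, X)(k(X, X) + σ²I)⁻¹(y − f(X) − ε), without ever forming the posterior covariance at the test locations.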