15 research outputs found

    Structured inference and sequential decision-making with Gaussian processes

    Sequential decision-making is a central ability of intelligent agents interacting with an environment, including humans, animals, and animats. When such agents operate in complex systems, they need automatic decision-making frameworks that quantify the system's uncertainty and the utility of different actions while allowing them to sequentially update their beliefs about the environment. When agents also aim to manipulate a system, they need to understand its data-generating mechanism. This requires accounting for causality, which enables the evaluation of counterfactual scenarios while increasing the interpretability and generalizability of an algorithm. Sequential causal decision-making algorithms require an accurate surrogate model of the causal system and an acquisition function that uses the surrogate's properties to select actions. In this thesis, I tackle both components through the Bayesian framework, which enables probabilistic reasoning while handling uncertainty in a principled manner. I consider Gaussian process (GP) models for both inference and causal decision-making, as they provide a flexible framework capable of capturing a variety of data distributions. I first focus on developing scalable GP models that incorporate structure in the likelihood and account for complex dependencies in the posteriors; these are crucial properties of surrogate models used within decision-making algorithms. In particular, I investigate models for point data, as many real-world problems involve events and such models present significant computational and methodological challenges. I then study how these models can incorporate causal structure and be used to select actions based on cause-effect relationships. I focus on multi-task GP models, Bayesian optimization, and active learning, and show how they can be generalized to capture causality.
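    As a minimal sketch of the GP surrogate models discussed above, the following computes an exact GP posterior under an RBF kernel; the kernel choice, lengthscale, and noise level are illustrative assumptions rather than the models developed in the thesis.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between 1-D input sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Exact GP posterior mean and variance at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mean, var

# Toy data: noise-free observations of sin(x).
X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mu, var = gp_posterior(X, y, np.array([1.0]))
```

    The posterior mean at a training input recovers the observed value up to the noise level, and the posterior variance there collapses toward zero; it is this calibrated uncertainty that decision-making algorithms exploit.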

    Functional Causal Bayesian Optimization

    We propose functional causal Bayesian optimization (fCBO), a method for finding interventions that optimize a target variable in a known causal graph. fCBO extends the CBO family of methods to enable functional interventions, which set a variable to be a deterministic function of other variables in the graph. fCBO models the unknown objectives with Gaussian processes whose inputs are defined in a reproducing kernel Hilbert space, thus allowing distances among vector-valued functions to be computed. In turn, this makes it possible to sequentially select functions to explore by maximizing an expected-improvement acquisition functional while keeping the computational tractability typical of standard BO settings. We introduce graphical criteria that establish when considering functional interventions allows attaining better target effects, and conditions under which selected interventions are also optimal for conditional target effects. We demonstrate the benefits of the method on a synthetic and a real-world causal graph.
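    A rough sketch of the function-selection step, under strong simplifications: candidate intervention functions are represented by their values on a grid, a squared L2 distance on that grid stands in for the RKHS distance, and the observed objective values are invented. This is not the fCBO implementation, only an illustration of expected-improvement selection over functions.

```python
import numpy as np
from scipy.stats import norm

# Candidate intervention functions, represented by grid evaluations;
# the family sin(a*x) is purely illustrative.
grid = np.linspace(0.0, 1.0, 50)
candidates = [lambda x, a=a: np.sin(a * x) for a in np.linspace(1.0, 6.0, 20)]
F = np.array([f(grid) for f in candidates])            # (n_cand, n_grid)

def func_kernel(F1, F2, lengthscale=2.0):
    """GP kernel between functions: RBF on mean squared L2 distance."""
    d2 = ((F1[:, None, :] - F2[None, :, :]) ** 2).mean(axis=2)
    return np.exp(-0.5 * d2 / lengthscale**2)

def expected_improvement(mu, var, best):
    """EI for minimisation of the target effect."""
    s = np.sqrt(np.maximum(var, 1e-12))
    z = (best - mu) / s
    return s * (z * norm.cdf(z) + norm.pdf(z))

# Hypothetical objective values for two already-explored functions.
idx_obs = np.array([0, 10])
y_obs = np.array([0.8, 0.2])

K = func_kernel(F[idx_obs], F[idx_obs]) + 1e-6 * np.eye(2)
Ks = func_kernel(F[idx_obs], F)
mu = Ks.T @ np.linalg.solve(K, y_obs)
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
ei = expected_improvement(mu, var, y_obs.min())
next_idx = int(np.argmax(ei))          # candidate function to explore next
```

    The point is that once a kernel between functions is available, the acquisition step reduces to standard BO machinery over a discrete (or parametrized) set of candidate functions.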

    On the competitive facility location problem with a Bayesian spatial interaction model

    The competitive facility location problem arises when businesses plan to enter a new market or expand their presence. We introduce a Bayesian spatial interaction model that provides probabilistic estimates of location-specific revenues, and we formulate a mathematical framework to simultaneously identify the location and design of new facilities that maximise revenue. To solve the resulting allocation optimisation problem, we develop a hierarchical search algorithm and associated sampling techniques that explore geographic regions at varying spatial resolutions. We demonstrate the approach by producing optimal facility locations and corresponding designs for two large-scale applications in the supermarket and pub sectors of Greater London.
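    The spatial interaction component can be caricatured with a deterministic Huff-style model: each demand zone splits its spending across stores in proportion to attractiveness over distance. All locations, budgets, and the decay parameter below are fabricated, and the sketch omits the Bayesian treatment that gives the paper its probabilistic revenue estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic demand zones: coordinates and spending budgets.
zones = rng.uniform(0.0, 10.0, size=(100, 2))
budget = rng.gamma(2.0, 50.0, size=100)

# Existing competitor stores: coordinates and attractiveness (size/design).
stores = np.array([[2.0, 2.0], [8.0, 7.0]])
attract = np.array([1.0, 1.5])

def expected_revenue(new_loc, new_attract, beta=1.5):
    """Huff-style model: a zone's spend is split across stores in
    proportion to attractiveness / distance**beta."""
    locs = np.vstack([stores, new_loc])
    attrs = np.append(attract, new_attract)
    d = np.linalg.norm(zones[:, None, :] - locs[None, :, :], axis=2) + 1e-6
    util = attrs / d**beta
    prob = util / util.sum(axis=1, keepdims=True)
    return float(prob[:, -1] @ budget)      # revenue captured by new store

# Evaluate candidate sites on a coarse grid and pick the best.
cands = [(x, y) for x in np.linspace(1.0, 9.0, 5) for y in np.linspace(1.0, 9.0, 5)]
revs = [expected_revenue(np.array(c), 1.2) for c in cands]
best = cands[int(np.argmax(revs))]
```

    The paper's hierarchical search refines exactly this kind of coarse-grid evaluation, descending into promising regions at progressively finer spatial resolution.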

    Dynamic causal Bayesian optimisation

    This paper studies the problem of performing a sequence of optimal interventions in a causal dynamical system where both the target variable of interest and the inputs evolve over time. This problem arises in a variety of domains, e.g., systems biology and operational research. Dynamic Causal Bayesian Optimization (DCBO) brings together ideas from sequential decision-making, causal inference, and Gaussian process (GP) emulation. DCBO is useful in scenarios where all causal effects in a graph change over time. At every time step, DCBO identifies a local optimal intervention by integrating both observational and past interventional data collected from the system. We give theoretical results detailing how interventional information can be transferred across time steps, and define a dynamic causal GP model that can be used to quantify uncertainty and find optimal interventions in practice. We demonstrate that DCBO identifies optimal interventions faster than competing approaches in multiple settings and applications.
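    Stripped of its causal machinery, the inner loop of a DCBO-like method is a per-time-step Bayesian optimisation: fit a GP to the interventions tried so far, maximise expected improvement, intervene, and repeat before moving to the next time step. The time-varying quadratic objective below is a hypothetical stand-in for a causal system, and the sketch omits the transfer of interventional information across steps that is DCBO's actual contribution.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def objective(x, t):
    """Hypothetical target effect whose optimum drifts over time."""
    return (x - 0.5 - 0.1 * t) ** 2

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_fit_predict(X, y, Xq, noise=1e-3):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

grid = np.linspace(0.0, 1.0, 101)
found = []
for t in range(3):                      # time steps
    X = rng.uniform(0.0, 1.0, 3)        # warm-start interventions
    y = objective(X, t)
    for _ in range(10):                 # BO iterations within this step
        mu, var = gp_fit_predict(X, y, grid)
        s = np.sqrt(var)
        z = (y.min() - mu) / s          # EI for minimisation
        ei = s * (z * norm.cdf(z) + norm.pdf(z))
        x_next = grid[np.argmax(ei)]
        X = np.append(X, x_next)
        y = np.append(y, objective(x_next, t))
    found.append(X[np.argmin(y)])       # best intervention at time t
```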

    Efficient inference in multi-task Cox process models

    We generalize the log Gaussian Cox process (LGCP) framework to jointly model multiple correlated point datasets. The observations are treated as realizations of multiple LGCPs whose log intensities are given by linear combinations of latent functions drawn from Gaussian process priors. The combination coefficients are also drawn from Gaussian processes and can incorporate additional dependencies. We derive closed-form expressions for the moments of the intensity functions and develop an efficient variational inference algorithm that is orders of magnitude faster than competing deterministic and stochastic approximations of multivariate LGCPs, coregionalization models, and multi-task permanental processes. Our approach outperforms these benchmarks on multiple problems, offering the current state of the art in modeling multivariate point processes.
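    The moment computation is tractable because each log intensity is Gaussian: if log λ_j(x) ~ N(μ, σ²), then E[λ_j(x)] = exp(μ + σ²/2). The sketch below checks this log-normal identity by Monte Carlo for an assumed zero-mean prior with fixed illustrative mixing weights; it is not the paper's variational algorithm, which also learns the latent functions and GP-valued weights from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Q latent GPs on a 1-D grid, mixed into J task log-intensities by W.
grid = np.linspace(0.0, 1.0, 50)
Q, J = 2, 3
W = rng.normal(0.0, 0.5, size=(J, Q))      # illustrative fixed mixing weights

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

K = rbf(grid, grid) + 1e-8 * np.eye(len(grid))

# Prior: log lambda_j(x) = sum_q W[j,q] g_q(x) with g_q ~ GP(0, K),
# so log lambda_j(x) ~ N(0, sum_q W[j,q]^2 * K[x,x]).
log_var = (W**2).sum(axis=1)[:, None] * np.diag(K)[None, :]
mean_intensity = np.exp(0.5 * log_var)     # log-normal mean, shape (J, n_grid)

# Monte Carlo check: sample the latent GPs, mix, exponentiate, average.
S = 20000
L = np.linalg.cholesky(K)
eps = rng.normal(size=(Q, len(grid), S))
g = np.einsum('ij,qjs->qis', L, eps)       # GP draws with covariance K
lam = np.exp(np.einsum('jq,qis->jis', W, g))
mc_mean = lam.mean(axis=2)
```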

    Structured variational inference in continuous Cox Process Models

    We propose a scalable framework for inference in a continuous sigmoidal Cox process that assumes the corresponding intensity function is given by a Gaussian process (GP) prior transformed with a scaled logistic sigmoid function. We present a tractable representation of the likelihood through augmentation with a superposition of Poisson processes. This view enables a structured variational approximation that captures dependencies across variables in the model. Our framework avoids discretization of the domain, does not require accurate numerical integration over the input space, and is not limited to GPs with squared exponential kernels. We evaluate our approach on synthetic and real-world data, showing that its benefits are particularly pronounced in multivariate input settings, where it overcomes the limitations of mean-field methods and sampling schemes. We provide the state of the art in terms of speed, accuracy, and uncertainty-quantification trade-offs.
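    The sigmoid transform bounds the intensity, λ(x) = λ_max · σ(g(x)) ≤ λ_max, which is what makes the Poisson-superposition augmentation possible. One immediate consequence is that the process can be sampled exactly by thinning; the latent function and bound below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical latent function and intensity bound; the intensity is
# lam(x) = lam_max * sigmoid(g(x)), which never exceeds lam_max.
lam_max = 50.0
def g(x):
    return 3.0 * np.sin(2.0 * np.pi * x)

def sample_sigmoidal_cox(T=1.0):
    """Exact sampling by thinning: draw a rate-lam_max homogeneous
    Poisson process on [0, T], keep each point w.p. sigmoid(g(x))."""
    n = rng.poisson(lam_max * T)
    x = rng.uniform(0.0, T, n)
    keep = rng.uniform(0.0, 1.0, n) < sigmoid(g(x))
    return x[keep]

events = sample_sigmoidal_cox()
```

    By the symmetry of g here, sigmoid(g(x)) averages to 1/2 over [0, 1], so realisations contain about lam_max/2 = 25 events on average.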