54,089 research outputs found

    Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    Full text link
    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be obtained directly, without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is handled consistently in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles, and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
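
    The closed-form conditioning highlighted above is easy to see for a mixture of factorized (diagonal-covariance) Gaussians. The sketch below is illustrative only, not the paper's code, and all names (conditional_mixture, x_idx, x_obs) are made up: it reweights the mixture components by their likelihood of the observed dimensions and keeps each component's marginals over the unobserved ones, which, thanks to the diagonal covariances, are exactly the component conditionals.

import numpy as np
from scipy.stats import norm

def conditional_mixture(weights, means, stds, x_idx, x_obs):
    """Condition a mixture of factorized Gaussians on observed dimensions.
    weights: (K,); means, stds: (K, D); x_idx: observed dimensions;
    x_obs: their observed values. Returns the mixture representing p(y | x)."""
    y_idx = [d for d in range(means.shape[1]) if d not in x_idx]
    log_w = np.log(weights).astype(float)
    for k in range(len(weights)):
        # responsibility of component k for the observed values
        log_w[k] += norm.logpdf(x_obs, means[k, x_idx], stds[k, x_idx]).sum()
    new_w = np.exp(log_w - log_w.max())
    new_w /= new_w.sum()
    # with diagonal covariances each component's conditional over the
    # unobserved dimensions keeps its original means and standard deviations
    return new_w, means[:, y_idx], stds[:, y_idx]

# example: p(dimension 2 | dimensions 0 and 1) for a 3-component, 3-D mixture
w = np.full(3, 1 / 3)
mu = np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.]])
sd = np.ones((3, 3))
new_w, cond_mu, cond_sd = conditional_mixture(w, mu, sd, [0, 1], np.array([0.9, 1.1]))

    The conditional mean is then just the weighted average of the component means, e.g. float(new_w @ cond_mu[:, 0]); no integral is evaluated anywhere.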

    Probabilistic Methodology and Techniques for Artefact Conception and Development

    Get PDF
    The purpose of this paper is to survey the state of the art in probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodologies and techniques. The review is organized in three parts: first, the probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, the inference techniques; and finally, the methodologies for learning, model acquisition and model comparison. We conclude with the perspectives of the BIBA project as they arise from this state of the art.

    State-Space Inference and Learning with Gaussian Processes

    No full text
    State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model.
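
    As a rough, hedged sketch of one half of that EM iteration, the snippet below shows only the learning step, assuming smoothed latent-state estimates from an E-step are already available (here they are replaced by a toy simulated trajectory): the nonlinear transition function is learned by GP regression on one-step state pairs. This is an illustration under those assumptions, not the authors' implementation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
x = np.zeros(100)
for t in range(99):
    # toy nonlinear dynamics standing in for smoothed state estimates
    x[t + 1] = 0.8 * np.sin(x[t]) + 0.1 * rng.standard_normal()

X_in, X_out = x[:-1, None], x[1:]          # one-step transition pairs
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_in, X_out)                        # the "learning" half of the iteration
mean, std = gp.predict(np.linspace(-2, 2, 50)[:, None], return_std=True)

    A full treatment would alternate this fit with a smoothing pass over the latent states, which is where the approximations the paper is concerned with enter.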

    Bayesian Robot Programming

    Get PDF
    We propose a new method to program robots based on Bayesian inference and learning. The capacities of this programming method are demonstrated through a succession of increasingly complex experiments. Starting from the learning of simple reactive behaviors, we present instances of behavior combinations, sensor fusion, hierarchical behavior composition, situation recognition and temporal sequencing. This series of experiments comprises the steps in the incremental development of a complex robot program. The advantages and drawbacks of this approach are discussed alongside these different experiments and summed up in the conclusion. These different robotics programs may be seen as an illustration of probabilistic programming, applicable whenever one must deal with problems based on uncertain or incomplete knowledge. The scope of possible applications is obviously much broader than robotics.

    Fits, and especially linear fits, with errors on both axes, extra variance of the data points and other complications

    Full text link
    The aim of this paper, triggered by some discussions in the astrophysics community raised by astro-ph/0508529, is to introduce the issue of `fits' from a probabilistic perspective (also known as Bayesian), with special attention to the construction of a model that describes the `network of dependences' (a Bayesian network) connecting experimental observations to model parameters, upon which the probabilistic inference relies. The particular case of a linear fit with errors on both axes and extra variance of the data points around the straight line (i.e. not accounted for by the experimental errors) is worked out in detail. Some questions related to the use of linear-fit formulas on log-linearized exponential and power laws are also sketched, as well as the issue of systematic errors.
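
    A common way to handle this setting, sketched below with illustrative names and toy data rather than the paper's own derivation, is to fold the x errors and the extra scatter into an effective residual variance, var_i = sigma_{y,i}^2 + m^2 sigma_{x,i}^2 + sigma_v^2, and use the resulting Gaussian likelihood either for maximum likelihood or, with priors, for Bayesian inference.

import numpy as np
from scipy.optimize import minimize

def neg_log_like(theta, x, y, sx, sy):
    """Straight line y = m*x + c with errors on both axes and extra scatter."""
    m, c, log_sv = theta
    var = sy**2 + (m * sx)**2 + np.exp(log_sv)**2    # effective variance
    return 0.5 * np.sum((y - m * x - c)**2 / var + np.log(var))

# toy data: errors on both axes plus intrinsic scatter around the line
rng = np.random.default_rng(1)
x_true = np.linspace(0.0, 10.0, 40)
y_true = 2.0 * x_true + 1.0 + 0.5 * rng.standard_normal(40)
sx, sy = 0.3 * np.ones(40), 0.4 * np.ones(40)
x_obs = x_true + sx * rng.standard_normal(40)
y_obs = y_true + sy * rng.standard_normal(40)

fit = minimize(neg_log_like, x0=[1.0, 0.0, np.log(0.1)],
               args=(x_obs, y_obs, sx, sy))
m_hat, c_hat, sv_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])

    fit.x holds the slope, intercept and log of the intrinsic scatter; a fully Bayesian version would put priors on these parameters and sample the posterior instead of optimizing.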

    Multi-scale uncertainty quantification in geostatistical seismic inversion

    Full text link
    Geostatistical seismic inversion is commonly used to infer the spatial distribution of subsurface petro-elastic properties by perturbing the model parameter space through iterative stochastic sequential simulations/co-simulations. The spatial uncertainty of the inferred petro-elastic properties is represented by the updated a posteriori variance of an ensemble of simulated realizations. Within this setting, the large-scale geological parameters (metaparameters) used to generate the petro-elastic realizations, such as the spatial correlation model and the global a priori distribution of the properties of interest, are assumed to be known and stationary over the entire inversion domain. This assumption leads to underestimation of the uncertainty associated with the inverted models. We propose a practical framework to quantify the uncertainty of the large-scale geological parameters in seismic inversion. The framework couples geostatistical seismic inversion with stochastic adaptive sampling and Bayesian inference of the metaparameters to provide a more accurate and realistic prediction of uncertainty that is not restricted by strong assumptions about the large-scale geological parameters. The proposed framework is illustrated with both synthetic and real case studies. The results show the ability to retrieve more reliable acoustic impedance models with a more adequate uncertainty spread when compared with conventional geostatistical seismic inversion techniques. The proposed approach accounts separately for geological uncertainty at the large scale (metaparameters) and the local scale (trace-by-trace inversion).
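
    As a toy, hedged illustration of Bayesian inference over a single metaparameter (not the authors' workflow; all names and data below are made up), the snippet infers the range of an exponential spatial covariance model from one simulated 1-D profile by evaluating the Gaussian likelihood on a grid of candidate ranges.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n = 60
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

def cov(corr_range, sill=1.0, nugget=1e-6):
    """Exponential covariance model with a small nugget for numerical stability."""
    return sill * np.exp(-dist / corr_range) + nugget * np.eye(n)

obs = rng.multivariate_normal(np.zeros(n), cov(8.0))   # synthetic profile, true range = 8

ranges = np.arange(1, 21)                              # uniform prior over candidate ranges
log_like = np.array([multivariate_normal.logpdf(obs, np.zeros(n), cov(r)) for r in ranges])
post = np.exp(log_like - log_like.max())
post /= post.sum()                                     # posterior weights over the metaparameter

    The posterior weights should concentrate near the range used to simulate the profile; the same weighting idea extends to other large-scale parameters such as the global prior distribution of the inverted property.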