
    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
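
    As a rough illustration of this scheme, the following Python sketch pairs a generative RNN with a recognition pass in which predictions and prediction errors are exchanged. The weights W and C, the noise level, and the fixed gain k are illustrative assumptions; the paper derives its actual update equations from a full Bayesian inversion of the generative model.

        # Hedged sketch: generative RNN + predictive-coding-style recognition.
        # W, C and the fixed gain k are illustrative assumptions, not the
        # paper's derived Bayesian update equations.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, T = 8, 2, 200                                  # hidden units, obs dim, steps
        W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # recurrent weights
        C = rng.normal(size=(m, n))                          # observation map

        def f(x):
            """One step of the generative RNN dynamics."""
            return np.tanh(W @ x)

        # Simulate a noisy observation sequence from the generative model.
        x_true, ys = rng.normal(size=n), []
        for _ in range(T):
            x_true = f(x_true)
            ys.append(C @ x_true + 0.05 * rng.normal(size=m))

        # Recognition: predict with the generative model, then correct the
        # hidden-state estimate with the weighted prediction error.
        k = 0.1                        # fixed gain; a Bayesian scheme would adapt it
        x_hat = np.zeros(n)
        for y in ys:
            x_pred = f(x_hat)                  # top-down prediction
            eps = y - C @ x_pred               # bottom-up prediction error
            x_hat = x_pred + k * (C.T @ eps)   # error-driven correction

        print("final observation error:", np.linalg.norm(ys[-1] - C @ x_hat))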

    Inverse problems and uncertainty quantification

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation, the Bayesian update becomes computationally attractive. This is especially the case when it is combined with a functional or spectral approach to the forward UQ, as there is then no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. Finally, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
    Comment: 25 pages, 17 figures. arXiv admin note: text overlap with arXiv:1201.404
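
    The linear special case of this update can be illustrated compactly: the conditional expectation E[x|y] is approximated by the best affine map in the mean-square sense, which yields a Kalman-type gain. The sketch below estimates the required moments from samples purely for brevity, whereas the paper computes them sampling-free from a functional or spectral representation; the cubic forward model is an assumption.

        # Hedged sketch: linear Bayesian update from the variational
        # characterisation of conditional expectation. Sample moments stand in
        # for the paper's sampling-free spectral computation.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 10_000
        x = rng.normal(1.0, 0.5, size=N)       # prior samples of the parameter
        y = x**3 + 0.1 * rng.normal(size=N)    # assumed forward model + noise

        # Best affine map phi(y) = a + K*y minimising E||x - phi(y)||^2:
        # K = cov(x, y) / var(y), a = E[x] - K*E[y] (the Kalman gain in 1D).
        K = np.cov(x, y)[0, 1] / np.var(y)
        a = x.mean() - K * y.mean()

        y_meas = 2.0                           # an actual measurement
        x_assim = x + K * (y_meas - y)         # shift the whole prior ensemble
        print("posterior mean estimate:", a + K * y_meas)
        print("updated ensemble mean  :", x_assim.mean())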

    Ensemble updating of binary state vectors by maximising the expected number of unchanged components

    In recent years, several ensemble-based filtering methods have been proposed and studied. The main challenge in such procedures is the updating of a prior ensemble to a posterior ensemble at every step of the filtering recursions. In the famous ensemble Kalman filter, the assumption of a linear-Gaussian state space model is introduced in order to overcome this issue, and the prior ensemble is updated with a linear shift closely related to the traditional Kalman filter equations. In the current article, we consider how the ideas underlying the ensemble Kalman filter can be applied when the components of the state vectors are binary variables. While the ensemble Kalman filter relies on Gaussian approximations of the forecast and filtering distributions, we instead use first-order Markov chains. To update the prior ensemble, we simulate samples from a distribution constructed such that the expected number of equal components in a prior and posterior state vector is maximised. We demonstrate the performance of our approach in a simulation example inspired by the movement of oil and water in a petroleum reservoir, where a more naïve updating approach is also applied for comparison. Here, we observe that the Frobenius norm of the difference between the estimated and the true marginal filtering probabilities is halved with our method compared to the naïve approach, indicating that our method is superior. Finally, we discuss how our methodology can be generalised from the binary setting to more complicated situations.
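
    The componentwise idea can be sketched as follows, under the simplifying assumption of independent Bernoulli components (the paper itself uses first-order Markov chains for the forecast and filtering distributions). For each component, the joint distribution of the prior and posterior values is the coupling that maximises the probability that the component stays unchanged; the posterior marginals q below are assumed for illustration.

        # Hedged sketch: update an ensemble of binary vectors so the expected
        # number of unchanged components is maximised, assuming independent
        # components (a simplification of the paper's Markov-chain setting).
        import numpy as np

        rng = np.random.default_rng(2)
        n_ens, n = 100, 20
        prior = (rng.random((n_ens, n)) < 0.3).astype(int)   # prior ensemble

        p = np.clip(prior.mean(axis=0), 1e-6, 1 - 1e-6)      # prior marginals
        q = np.clip(p + 0.2, 1e-6, 1 - 1e-6)                 # assumed posterior marginals

        # Optimal coupling of Bernoulli(p) and Bernoulli(q): keep a component
        # whenever possible, flip only with the minimal probability needed so
        # the updated ensemble has marginals q.
        p1 = np.minimum(p, q) / p                  # P(x'=1 | x=1)
        p0 = np.maximum(0.0, q - p) / (1.0 - p)    # P(x'=1 | x=0)

        u = rng.random((n_ens, n))
        posterior = np.where(prior == 1, u < p1, u < p0).astype(int)

        print("fraction of unchanged components:", (posterior == prior).mean())
        print("target vs achieved marginals:", q[:3], posterior.mean(axis=0)[:3])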

    Inverse Problems in a Bayesian Setting

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation, the Bayesian update becomes computationally attractive. We give a detailed account of this approach via conditional expectation, various approximations, and the construction of filters. Together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update in the form of a filter is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. Finally, we compare the linear and nonlinear Bayesian update in the form of a filter on some examples.
    Comment: arXiv admin note: substantial text overlap with arXiv:1312.504
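
    The filter form of the update lends itself to a short sketch: approximate the conditional expectation E[x|y] by a polynomial fitted in the mean-square sense, then shift the prior ensemble by phi(y_meas) - phi(y). Degree 1 recovers the linear (Kalman-type) update sketched earlier; higher degrees give the nonlinear update. The toy quadratic forward model and the sample-based least-squares fit are assumptions standing in for the paper's sampling-free spectral computation.

        # Hedged sketch: polynomial approximation of the conditional
        # expectation, used as a filter. Degree 1 = linear update,
        # degree 2 = quadratic (nonlinear) update.
        import numpy as np

        rng = np.random.default_rng(3)
        N = 10_000
        x = rng.normal(1.0, 0.5, size=N)       # prior parameter samples
        y = x**2 + 0.1 * rng.normal(size=N)    # assumed forward model + noise

        def fit_poly_phi(y, x, degree):
            """Least-squares polynomial approximation of E[x | y]."""
            V = np.vander(y, degree + 1)       # features y^degree, ..., y, 1
            coef, *_ = np.linalg.lstsq(V, x, rcond=None)
            return lambda z: np.vander(np.atleast_1d(z), degree + 1) @ coef

        y_meas = 1.5
        for deg in (1, 2):                     # linear vs quadratic update
            phi = fit_poly_phi(y, x, deg)
            x_assim = x + (phi(y_meas) - phi(y))   # filter form of the update
            print(f"degree {deg}: posterior mean estimate {x_assim.mean():.3f}")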

    Parameter Identification in a Probabilistic Setting

    Parameter identification problems are formulated in a probabilistic language, where the randomness reflects the uncertainty about the knowledge of the true values. This setting makes it conceptually easy to incorporate new information, e.g. through a measurement, by connecting it to Bayes's theorem. The unknown quantity is modelled as a (possibly high-dimensional) random variable. Such a description has two constituents, the measurable function and the measure. One group of methods is identified as updating the measure; the other group changes the measurable function. We connect both groups with the relatively recent methods of functional approximation of stochastic problems and, especially in combination with the second group of methods, introduce a new procedure which does not need any sampling and hence works completely deterministically. It also seems to be the fastest and most reliable when compared with other methods. We show by example that it also works for highly nonlinear, non-smooth problems with non-Gaussian measures.
    Comment: 29 pages, 16 figures
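
    The two groups of methods can be contrasted on a toy linear-Gaussian problem, where both are exact and should agree: updating the measure keeps the prior samples and reweights them by the likelihood, while updating the measurable function keeps uniform weights and moves the samples through a fitted map. The model and numbers below are assumptions for illustration.

        # Hedged sketch: updating the measure (likelihood reweighting) versus
        # updating the measurable function (Kalman-type shift) on an assumed
        # linear-Gaussian toy model, where both recover the same posterior mean.
        import numpy as np

        rng = np.random.default_rng(4)
        N = 100_000
        sigma = 0.2
        x = rng.normal(1.0, 0.5, size=N)       # prior samples of the parameter
        y = x + sigma * rng.normal(size=N)     # forward model: identity + noise
        y_meas = 1.8

        # Group 1: update the measure. Keep the samples, change their weights.
        w = np.exp(-0.5 * ((y_meas - x) / sigma) ** 2)
        w /= w.sum()
        print("measure update  :", np.sum(w * x))

        # Group 2: update the function. Keep uniform weights, move the samples
        # with the linear conditional-expectation map (Kalman-type shift).
        K = np.cov(x, y)[0, 1] / np.var(y)
        x_new = x + K * (y_meas - y)
        print("function update :", x_new.mean())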