
    Deterministic Mean-field Ensemble Kalman Filtering

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland et al. (2011) is extended to non-Gaussian state space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to the standard EnKF when the dimension d < 2κ. The fidelity of the approximation to the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from the non-linearity/non-Gaussianity of the model, which exists for both the DMFEnKF and the standard EnKF. Numerical results support and extend the theory.
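To make the object being approximated concrete, the following is a minimal sketch of one stochastic EnKF analysis step (perturbed observations, linear observation operator), the finite-ensemble scheme whose mean-field limit the abstract discusses. The function name and the toy numbers are illustrative, not from the paper.

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    ensemble: (N, d) array of prior state particles
    y: observation vector, H: linear observation operator, R: obs covariance
    """
    N, d = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)            # ensemble anomalies
    P = X.T @ X / (N - 1)                           # sample covariance
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    # each member assimilates its own perturbed copy of the observation
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

# toy example: observe only the first of two state components
rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(500, 2))
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
post = enkf_update(prior, np.array([1.0]), H, R, rng)
```

The posterior ensemble mean is pulled toward the observation and the spread in the observed component shrinks, which is the behaviour the density-based deterministic approximation reproduces without sampling noise.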

    Rank Bounds for Approximating Gaussian Densities in the Tensor-Train Format


    Nonlinear Filtering based on Log-homotopy Particle Flow: Methodological Clarification and Numerical Evaluation

    The state estimation of dynamical systems based on measurements is a ubiquitous problem, relevant in applications such as robotics, industrial manufacturing, computer vision and target tracking. Recursive Bayesian methodology can then be used to estimate the hidden states of a dynamical system. The procedure consists of two steps: a process update based on solving the equations modelling the state evolution, and a measurement update in which the prior knowledge about the system is improved based on the measurements. For most real-world systems, both the evolution and the measurement models are nonlinear functions of the system states. Additionally, both models can be perturbed by random noise sources, which may be non-Gaussian in nature. Unlike for linear Gaussian models, no optimal estimation scheme exists for nonlinear/non-Gaussian scenarios. This thesis investigates a particular method for nonlinear and non-Gaussian data assimilation, termed log-homotopy based particle flow. Practical filters based on such flows are known in the literature as Daum-Huang filters (DHF), named after their developers. The key concept behind such filters is the gradual inclusion of measurements to counter a major drawback of single-step update schemes like particle filters, namely degeneracy: a situation where the likelihood function has its probability mass well separated from the prior density, and/or is sharply peaked in comparison. Conventional sampling or grid-based techniques do not perform well under such circumstances and may incur a high processing cost to achieve reasonable accuracy. The DHF is a sampling-based scheme that provides a unique way to tackle this challenge, thereby lowering the processing cost.
This is achieved by dividing the single measurement update step into multiple sub-steps, such that particles originating from their prior locations are moved incrementally until they reach their final locations. The motion is controlled by a differential equation, which is numerically solved to yield the updated states. DH filters, though not new in the literature, have not yet been explored in full detail; they lack the in-depth analysis that other contemporary filters have undergone. In particular, the implementation details of the DHF are very application specific. In this work, we pursue four main objectives. The first is the exploration of the theoretical concepts behind the DHF. Secondly, we build an understanding of the existing implementation framework and highlight its potential shortcomings; as a sub-task, we carry out a detailed study of important factors that affect the performance of a DHF and suggest possible improvements for each. The third objective is to use the improved implementation to derive new filtering algorithms. Finally, we extend the DHF theory and derive new flow equations and filters to cater for more general scenarios. Improvements in the implementation architecture of the standard DHF are one of the key contributions of this thesis. The scope of applicability of the DHF is expanded by combining it with other schemes, such as sequential Markov chain Monte Carlo and the tensor-decomposition based solution of the Fokker-Planck equation, resulting in new nonlinear filtering algorithms. The standard DHF with the improved implementation, together with the newly derived algorithms, is tested in challenging simulated scenarios. Detailed analyses are carried out, including comparisons against more established filtering schemes, with estimation error and processing time as the key performance metrics.
We show that our new filtering algorithms exhibit marked performance improvements over the traditional schemes.
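The gradual-inclusion idea can be sketched in the simplest closed-form case: the scalar linear-Gaussian "exact flow" of Daum and Huang, where the measurement is absorbed over a pseudo-time λ ∈ [0, 1] and each particle moves by integrating dx/dλ = A(λ)x + b(λ). This is a toy illustration of the mechanism, not the thesis's implementation; the function name, Euler integrator, and parameters are assumptions.

```python
import numpy as np

def dh_exact_flow(particles, y, p, r, m0, n_steps=50):
    """Scalar Daum-Huang exact flow: migrate prior particles toward the
    posterior by Euler-integrating dx/dlam = A(lam)*x + b(lam) over [0, 1].

    y: observation, p: prior variance, r: obs noise variance, m0: prior mean.
    """
    x = particles.astype(float).copy()
    dlam = 1.0 / n_steps
    lam = 0.0
    for _ in range(n_steps):
        lam += dlam                                  # pseudo-time of this sub-step
        A = -0.5 * p / (lam * p + r)                 # flow drift coefficient
        b = (1 + 2 * lam * A) * ((1 + lam * A) * p * y / r + A * m0)
        x = x + dlam * (A * x + b)                   # one Euler sub-step
    return x

# toy example: prior N(0, 1), observation y = 1 with noise variance 0.25;
# the exact posterior is N(0.8, 0.2)
rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, size=2000)
post = dh_exact_flow(prior, y=1.0, p=1.0, r=0.25, m0=0.0)
```

Because every particle follows the deterministic flow, no resampling is needed and degeneracy is avoided even when the likelihood is sharply peaked relative to the prior.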

    Morphogenesis as Bayesian inference: A variational approach to pattern formation and control in complex biological systems

    Recent advances in molecular biology such as gene editing [1], bioelectric recording and manipulation [2] and live cell microscopy using fluorescent reporters [3], [4] – especially with the advent of light-controlled protein activation through optogenetics [5] – have provided the tools to measure and manipulate molecular signaling pathways with unprecedented spatiotemporal precision. This has produced ever-increasing detail about the molecular mechanisms underlying development and regeneration in biological organisms. However, an overarching concept – one that can predict the emergence of form and the robust maintenance of complex anatomy – is largely missing in the field. Classic (i.e., dynamic systems and analytical mechanics) approaches such as least action principles are difficult to use when characterizing the open, far-from-equilibrium systems that predominate in biology. Similar issues arise in neuroscience when trying to understand neuronal dynamics from first principles. In this (neurobiology) setting, a variational free energy principle has emerged based upon a formulation of self-organization in terms of (active) Bayesian inference. The free energy principle has recently been applied to biological self-organization beyond the neurosciences [6], [7]. For biological processes that underwrite development or regeneration, the Bayesian inference framework treats cells as information processing agents, where the driving force behind morphogenesis is the maximization of a cell's model evidence. This is realized by the appropriate expression of receptors and other signals that correspond to the cell's internal (i.e., generative) model of what type of receptors and other signals it should express. The emerging field of the free energy principle in pattern formation provides an essential quantitative formalism for understanding cellular decision-making in the context of embryogenesis, regeneration, and cancer suppression.
In this paper, we derive the mathematics behind Bayesian inference – as understood in this framework – and use simulations to show that the formalism can reproduce experimental, top-down manipulations of complex morphogenesis. First, we illustrate this ‘first principles’ approach to morphogenesis through simulated alterations of anterior-posterior axial polarity (i.e., the induction of two heads or two tails), as in planarian regeneration. Then, we consider aberrant signaling and functional behavior of a single cell within a cellular ensemble – as a first step toward carcinogenesis, framed as false ‘beliefs’ about what a cell should ‘sense’ and ‘do’. We further show that simple modifications of the inference process can cause – and rescue – mis-patterning of developmental and regenerative events without changing the implicit generative model of a cell as specified, for example, by its DNA. This formalism offers a new road map for understanding developmental change in evolution and for designing new interventions in regenerative medicine settings.
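The "cell as an information processing agent" idea can be illustrated in its simplest limiting case: a single cell with a discrete generative model that infers its anatomical identity from a morphogen reading by exact Bayesian inference (the case where free-energy minimization coincides with computing the posterior). All numbers, labels, and the function name below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical toy generative model for one cell with two possible
# identities ("head", "tail") and two observable signal levels.
likelihood = np.array([[0.9, 0.1],   # p(signal | type = head)
                       [0.2, 0.8]])  # p(signal | type = tail)
prior = np.array([0.5, 0.5])         # prior belief over cell type

def infer_type(signal):
    """Posterior over cell type given an observed discrete signal level."""
    post = likelihood[:, signal] * prior
    return post / post.sum()

post = infer_type(0)   # observing signal level 0 favours "head"
```

In this picture, a "false belief" as described in the abstract corresponds to a mismatched likelihood or prior, so the cell confidently infers the wrong identity and acts accordingly.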

    Using the Sharp Operator for edge detection and nonlinear diffusion

    In this paper we investigate the use of the sharp function, known from functional analysis, in image processing. The sharp function gives a measure of the variations of a function and can be used as an edge detector. We extend the classical notion of the sharp function to measure anisotropic behaviour and give a fast anisotropic edge detection variant inspired by the sharp function. We show that these edge detection results can be used to steer isotropic and anisotropic nonlinear diffusion filters for image enhancement.
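A minimal discrete sketch of the (isotropic) sharp function: at each pixel, take the maximum over several window sizes of the mean absolute deviation from the window mean. Large values flag edges. This is an assumed discretization for illustration, not the paper's fast variant.

```python
import numpy as np

def sharp_function(img, radii=(1, 2, 4)):
    """Discrete analogue of the sharp maximal function: for each pixel,
    the maximum over window radii of the mean absolute deviation of the
    image from the window mean. High response indicates an edge."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for r in radii:
        pad = np.pad(img.astype(float), r, mode="edge")
        for i in range(h):
            for j in range(w):
                win = pad[i:i + 2 * r + 1, j:j + 2 * r + 1]
                out[i, j] = max(out[i, j], np.abs(win - win.mean()).mean())
    return out

# toy example: a vertical step edge between columns 3 and 4
img = np.zeros((8, 8))
img[:, 4:] = 1.0
s = sharp_function(img)
```

The response peaks at the step and decays in the flat regions, which is the behaviour that makes it usable as a diffusivity steering term.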

    Cognitive Dynamics: From Attractors to Active Inference
