
    Inferring Latent States and Refining Force Estimates via Hierarchical Dirichlet Process Modeling in Single Particle Tracking Experiments

    Optical microscopy provides rich spatio-temporal information characterizing in vivo molecular motion. However, the effective forces and other parameters used to summarize molecular motion change over time in live cells due to latent state changes, e.g., changes induced by dynamic micro-environments, photobleaching, and other heterogeneity inherent in biological processes. This study focuses on techniques for analyzing Single Particle Tracking (SPT) data experiencing abrupt state changes. We demonstrate the approach on GFP-tagged chromatids undergoing metaphase in yeast cells and probe the effective forces resulting from dynamic interactions that reflect the sum of a number of physical phenomena. State changes are induced by factors such as microtubule dynamics exerting force through the centromere, thermal polymer fluctuations, etc. Simulations are used to demonstrate the relevance of the approach to more general SPT data analyses. Refined force estimates are obtained by adopting and modifying a nonparametric Bayesian modeling technique, the Hierarchical Dirichlet Process Switching Linear Dynamical System (HDP-SLDS), for SPT applications. The HDP-SLDS method shows promise in systematically identifying dynamical regime changes induced by unobserved state changes when the number of underlying states is unknown in advance (a common problem in SPT applications). We expand on the relevance of the HDP-SLDS approach, review the relevant background of Hierarchical Dirichlet Processes, show how to map discrete-time HDP-SLDS models to classic SPT models, and discuss limitations of the approach. In addition, we demonstrate new computational techniques for tuning hyperparameters and for checking the statistical consistency of model assumptions directly against individual experimental trajectories; the techniques circumvent the need for "ground-truth" and subjective information.
    Comment: 25 pages, 6 figures. Differs only typographically from the PLoS One publication, available freely as an open-access article at http://journals.plos.org/plosone/article?id=10.1371/journal.pone.013763
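
    To make the modeling setup concrete, the following is a minimal sketch (our illustration, not the authors' code) of the kind of discrete-time switching linear dynamical system that HDP-SLDS infers from SPT trajectories: a latent regime label selects the drift and noise scale of the particle's motion, and measurement noise is added on top. All parameter values are assumptions chosen for demonstration.

```python
import numpy as np

# Minimal sketch: simulate a two-state switching linear dynamical system of
# the kind HDP-SLDS is designed to infer from SPT data. Each hidden state
# z_t selects its own AR(1) drift a[z] and thermal noise scale sigma[z]; the
# observed position adds localization noise. Values are illustrative only.
rng = np.random.default_rng(0)

T = 2000                        # number of frames
a = np.array([0.99, 0.90])      # per-state drift (restoring-force strength)
sigma = np.array([0.02, 0.05])  # per-state thermal noise scale
r = 0.01                        # localization (measurement) noise scale
P = np.array([[0.995, 0.005],   # sticky transition matrix between states
              [0.005, 0.995]])

z = np.zeros(T, dtype=int)      # latent regime labels
x = np.zeros(T)                 # true particle position
y = np.zeros(T)                 # observed position
for t in range(1, T):
    z[t] = rng.choice(2, p=P[z[t - 1]])
    x[t] = a[z[t]] * x[t - 1] + sigma[z[t]] * rng.standard_normal()
    y[t] = x[t] + r * rng.standard_normal()
```

    The inference problem the abstract describes is the reverse of this simulation: recover the regime labels z and the per-state parameters from the noisy observations y alone, without fixing the number of states in advance.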

    Inferring hidden states in Langevin dynamics on large networks: Average case performance

    We present average performance results for dynamical inference problems in large networks, where a set of nodes is hidden while the time trajectories of the others are observed. Examples of this scenario can occur in signal transduction and gene regulation networks. We focus on the linear stochastic dynamics of continuous variables interacting via random Gaussian couplings of generic symmetry. We analyze the inference error, given by the variance of the posterior distribution over hidden paths, in the thermodynamic limit and as a function of the system parameters and the ratio α between the number of hidden and observed nodes. By applying Kalman filter recursions we find that the posterior dynamics is governed by an "effective" drift that incorporates the effect of the observations. We present two approaches for characterizing the posterior variance that allow us to tackle, respectively, equilibrium and nonequilibrium dynamics. The first appeals to Random Matrix Theory and reveals average spectral properties of the inference error and typical posterior relaxation times; the second is based on dynamical functionals and yields the inference error as the solution of an algebraic equation.
    Comment: 20 pages, 5 figures
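
    The Kalman filter recursions the abstract mentions can be sketched directly. Below is a minimal toy version (an assumed setup, not the paper's code) for linear network dynamics with random symmetric Gaussian couplings in which only the first n_obs nodes are observed; the stationary filtered covariance over the hidden block plays the role of the inference error. The small observation noise eps is a numerical regularizer we add for stability.

```python
import numpy as np

# Minimal sketch: discrete-time Kalman filter for x_{t+1} = A x_t + noise,
# observing only the first n_obs coordinates. Iterating the Riccati
# recursion gives the stationary posterior covariance; its hidden block is
# the inference error discussed in the abstract. Parameters are assumptions.
rng = np.random.default_rng(1)
n, n_obs = 20, 10                       # total and observed nodes (alpha = 1)
J = rng.standard_normal((n, n)) / np.sqrt(n)
A = 0.85 * np.eye(n) + 0.1 * (J + J.T) / 2   # stable symmetric couplings
Q = 0.01 * np.eye(n)                    # dynamical noise covariance
H = np.eye(n)[:n_obs]                   # observe first n_obs coordinates
R = 1e-6 * np.eye(n_obs)                # tiny observation noise (assumption)

P = np.eye(n)                           # prior covariance
for _ in range(200):                    # iterate to the stationary covariance
    Pp = A @ P @ A.T + Q                # predict
    S = H @ Pp @ H.T + R
    K = Pp @ H.T @ np.linalg.inv(S)     # Kalman gain
    P = Pp - K @ H @ Pp                 # update

hidden_var = np.trace(P[n_obs:, n_obs:]) / (n - n_obs)
print("mean posterior variance of hidden nodes:", hidden_var)
```

    Averaging this quantity over many draws of the couplings, as a function of α, is the finite-size analogue of the thermodynamic-limit calculation the paper carries out analytically.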

    Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks

    Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is, however, not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such "out-of-equilibrium" behavior is a consequence of highly non-isotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims, which are proven in the appendix.
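
    The low-rank gradient-noise claim can be probed numerically on any model. Here is a minimal sketch (a toy linear-regression stand-in, not the paper's deep-network experiments) of estimating the effective rank of the mini-batch gradient covariance; the model, data, and rank measure are our own illustrative assumptions.

```python
import numpy as np

# Minimal sketch: sample many mini-batch gradients at a fixed parameter
# point, form their covariance matrix, and report an effective rank. For
# deep networks the abstract reports this rank can be ~1% of the dimension.
rng = np.random.default_rng(2)
d, N, B, n_batches = 50, 1000, 32, 200
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)
w = np.zeros(d)                              # evaluate gradient noise here

grads = []
for _ in range(n_batches):
    idx = rng.choice(N, B, replace=False)
    g = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / B   # mini-batch MSE gradient
    grads.append(g)
G = np.array(grads)                          # (n_batches, d)
C = np.cov(G.T)                              # gradient covariance matrix
evals = np.linalg.eigvalsh(C)[::-1]
eff_rank = evals.sum() / evals.max()         # participation-ratio-style rank
print(f"effective rank: {eff_rank:.1f} of dimension {d}")
```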

    NySALT: Nyström-type inference-based schemes adaptive to large time-stepping

    Large time-stepping is important for efficient long-time simulations of deterministic and stochastic Hamiltonian dynamical systems. Conventional structure-preserving integrators, while successful for generic systems, have limited tolerance to time step size due to stability and accuracy constraints. We propose to use data to innovate classical integrators so that they are adaptive to large time-stepping and are tailored to each specific system. In particular, we introduce NySALT, Nyström-type inference-based schemes adaptive to large time-stepping. NySALT has optimal parameters for each time step learned from data by minimizing the one-step prediction error. Thus, it is tailored to each time step size and to the specific system, achieving optimal performance and tolerating large time-stepping in an adaptive fashion. We prove and numerically verify the convergence of the estimators as the data size increases. Furthermore, analysis and numerical tests on the deterministic and stochastic Fermi-Pasta-Ulam (FPU) models show that NySALT enlarges the maximal admissible step size of linear stability, and quadruples the time step size of Störmer–Verlet and BAOAB while maintaining similar levels of accuracy.
    Comment: 26 pages, 7 figures
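
    The fit-the-one-step-map idea can be illustrated on the simplest possible system. Below is a minimal sketch (our own toy, not the NySALT implementation) that learns the coefficients of a linear one-step map for a harmonic oscillator by least squares against reference data from a fine integrator; the system, step size, and parametrization are assumptions for demonstration.

```python
import numpy as np

# Minimal sketch: learn a one-step map by minimizing the one-step
# prediction error against a fine-grained reference flow, the same fitting
# principle NySALT uses for Nystrom-type schemes.
def fine_flow(q, p, dt, n_sub=100):
    """Reference flow of a harmonic oscillator via many small Verlet steps."""
    h = dt / n_sub
    for _ in range(n_sub):
        p = p - 0.5 * h * q          # force f(q) = -q
        q = q + h * p
        p = p - 0.5 * h * q
    return q, p

rng = np.random.default_rng(3)
dt = 0.5                              # large step the learned scheme targets
q0 = rng.standard_normal(500)
p0 = rng.standard_normal(500)
q1, p1 = fine_flow(q0, p0, dt)

# Parametrize the one-step map as q' = a q + b p, p' = c q + d p, and fit
# the coefficients by least squares on the one-step data.
Phi = np.stack([q0, p0], axis=1)
coef_q, *_ = np.linalg.lstsq(Phi, q1, rcond=None)
coef_p, *_ = np.linalg.lstsq(Phi, p1, rcond=None)
print("learned map:", coef_q, coef_p)
print("exact map:  ", [np.cos(dt), np.sin(dt)], [-np.sin(dt), np.cos(dt)])
```

    For the harmonic oscillator the exact large-step flow is a rotation by dt, so the learned coefficients should recover cos(dt) and sin(dt) up to the reference integrator's error; NySALT applies the same principle to richer Nyström-type parametrizations and to nonlinear systems like FPU.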

    Probing reaction channels via reinforcement learning

    We propose a reinforcement learning based method to identify important configurations that connect reactant and product states along chemical reaction paths. By shooting multiple trajectories from these configurations, we can generate an ensemble of configurations concentrated on the transition path ensemble. This configuration ensemble can be effectively employed in a neural network-based partial differential equation solver to obtain an approximate solution of a restricted backward Kolmogorov equation, even when the dimension of the problem is very high. The resulting solution, known as the committor function, encodes mechanistic information about the reaction and can in turn be used to evaluate reaction rates.
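
    The shooting idea and the committor are easy to illustrate in one dimension. Here is a minimal sketch (a toy stand-in, not the paper's reinforcement learning method or its neural PDE solver) that estimates the committor of a double-well potential by shooting short overdamped Langevin trajectories; the potential, temperature, and thresholds are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: estimate the committor q(x) = P(reach product B before
# reactant A | start at x) by shooting Langevin trajectories from a set of
# starting configurations, the same shooting step the abstract uses to
# build a transition-path-focused ensemble.
rng = np.random.default_rng(4)

def force(x):                 # double well V(x) = (x^2 - 1)^2
    return -4 * x * (x * x - 1)

def shoot(x0, beta=3.0, dt=1e-3, max_steps=5000):
    """Run overdamped Langevin until hitting A (x < -1) or B (x > 1)."""
    x = x0
    for _ in range(max_steps):
        x += force(x) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal()
        if x < -1.0:
            return 0.0        # committed to reactant basin A
        if x > 1.0:
            return 1.0        # committed to product basin B
    return 0.5                # undecided within the step budget

xs = np.linspace(-0.9, 0.9, 7)
q = [np.mean([shoot(x) for _ in range(40)]) for x in xs]
print(dict(zip(np.round(xs, 2), np.round(q, 2))))
```

    In high dimensions this brute-force estimate becomes infeasible, which is why the paper instead trains a neural network on the shooting-generated ensemble to solve the restricted backward Kolmogorov equation for the committor.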