
    The Dirichlet Markov Ensemble

    We equip the polytope of $n\times n$ Markov matrices with the normalized trace of the Lebesgue measure of $\mathbb{R}^{n^2}$. This probability space provides random Markov matrices, with i.i.d. rows following the Dirichlet distribution of mean $(1/n,\ldots,1/n)$. We show that if $\mathbf{M}$ is such a random matrix, then the empirical distribution built from the singular values of $\sqrt{n}\,\mathbf{M}$ tends as $n\to\infty$ to a Wigner quarter-circle distribution. Some computer simulations reveal striking asymptotic spectral properties of such random matrices, still waiting for a rigorous mathematical analysis. In particular, we believe that with probability one, the empirical distribution of the complex spectrum of $\sqrt{n}\,\mathbf{M}$ tends as $n\to\infty$ to the uniform distribution on the unit disc of the complex plane, and that moreover, the spectral gap of $\mathbf{M}$ is of order $1-1/\sqrt{n}$ when $n$ is large. Comment: Improved version. Accepted for publication in JMV
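
    The quarter-circle statement is easy to probe numerically: sampling from the uniform measure on the Markov polytope amounts to drawing i.i.d. flat Dirichlet rows. Below is a minimal NumPy sketch of this check (an illustration of the stated result, not the authors' code; the matrix size and binning are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000

        # Uniform (Lebesgue) measure on the polytope of n x n Markov matrices:
        # i.i.d. rows drawn from the flat Dirichlet(1, ..., 1), whose mean is (1/n, ..., 1/n).
        M = rng.dirichlet(np.ones(n), size=n)

        # Empirical singular values of sqrt(n) * M.
        s = np.linalg.svd(np.sqrt(n) * M, compute_uv=False)

        # Compare with the quarter-circle density x -> sqrt(4 - x^2) / pi on [0, 2].
        hist, edges = np.histogram(s, bins=40, range=(0.0, 2.2), density=True)
        mid = 0.5 * (edges[:-1] + edges[1:])
        quarter_circle = np.sqrt(np.clip(4.0 - mid**2, 0.0, None)) / np.pi
        print(np.abs(hist - quarter_circle).max())  # discrepancy shrinks as n grows

    The largest singular value (roughly of order sqrt(n), coming from the Perron direction) falls outside this range and does not affect the bulk comparison.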

    A Nonparametric Bayesian Approach to Uncovering Rat Hippocampal Population Codes During Spatial Navigation

    Rodent hippocampal population codes represent important spatial information about the environment during navigation. Several computational methods have been developed to uncover the neural representation of spatial topology embedded in rodent hippocampal ensemble spike activity. Here we extend our previous work and propose a nonparametric Bayesian approach to infer rat hippocampal population codes during spatial navigation. To tackle the model selection problem (the number of latent states is not known a priori), we leverage a nonparametric Bayesian model. Specifically, to analyze rat hippocampal ensemble spiking activity, we apply a hierarchical Dirichlet process-hidden Markov model (HDP-HMM) using two Bayesian inference methods, one based on Markov chain Monte Carlo (MCMC) and the other based on variational Bayes (VB). We demonstrate the effectiveness of our Bayesian approaches on recordings from a freely behaving rat navigating in an open field environment. We find that MCMC-based inference with Hamiltonian Monte Carlo (HMC) hyperparameter sampling is flexible and efficient, and outperforms VB and MCMC approaches with hyperparameters set by empirical Bayes.
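
    For readers unfamiliar with the HDP-HMM, its prior couples all transition rows of the HMM through a shared global weight vector; in the common weak-limit (finite truncation) approximation this takes only a few lines to write down. The sketch below illustrates that generative prior with simulated Poisson spike counts; it is not the authors' inference code, and the truncation level L, concentrations gamma and alpha, cell count C, and firing-rate prior are placeholder choices.

        import numpy as np

        rng = np.random.default_rng(1)
        L, gamma, alpha = 25, 3.0, 5.0   # truncation level and concentration hyperparameters (placeholders)

        # Weak-limit approximation of the HDP prior over HMM transitions:
        # a global weight vector beta, then each transition row drawn around beta.
        beta = rng.dirichlet(np.full(L, gamma / L))
        pi = np.vstack([rng.dirichlet(alpha * beta) for _ in range(L)])   # L x L transition matrix

        # Per-state Poisson firing rates for C cells, and a simulated latent state path.
        C, T = 30, 500
        rates = rng.gamma(shape=2.0, scale=2.0, size=(L, C))
        z = np.empty(T, dtype=int)
        z[0] = rng.choice(L, p=beta)
        for t in range(1, T):
            z[t] = rng.choice(L, p=pi[z[t - 1]])
        spikes = rng.poisson(rates[z])   # T x C matrix of spike counts per time bin

        print(spikes.shape, len(np.unique(z)), "states visited")

    MCMC or variational inference then inverts this generative process, recovering the latent states and the effective number of states from recorded spike counts.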

    Hierarchically-coupled hidden Markov models for learning kinetic rates from single-molecule data

    We address the problem of analyzing sets of noisy time-varying signals that all report on the same process but confound straightforward analyses due to complex inter-signal heterogeneities and measurement artifacts. In particular, we consider single-molecule experiments which indirectly measure the distinct steps in a biomolecular process via observations of noisy time-dependent signals such as a fluorescence intensity or bead position. Straightforward hidden Markov model (HMM) analyses attempt to characterize such processes in terms of a set of conformational states, the transitions that can occur between these states, and the associated rates at which those transitions occur, but they require ad hoc post-processing steps to combine multiple signals. Here we develop a hierarchically coupled HMM that allows experimentalists to deal with inter-signal variability in a principled and automatic way. Our approach is a generalized expectation-maximization hyperparameter point-estimation procedure, with variational Bayes at the level of individual time series, that learns a single interpretable representation of the overall data-generating process. Comment: 9 pages, 5 figures
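
    The per-trace building block of such an analysis is an HMM likelihood evaluated on a single noisy signal; the hierarchical coupling then shares priors over transition and emission parameters across traces. Below is a small sketch of the standard scaled forward recursion for a Gaussian-emission HMM (the Gaussian emission model is an assumption for illustration and may differ from the paper's):

        import numpy as np
        from scipy.stats import norm

        def hmm_loglik(y, pi0, A, mus, sigmas):
            """Log-likelihood of one noisy trace y under a Gaussian-emission HMM
            (forward recursion with per-step rescaling for numerical stability)."""
            logB = norm.logpdf(y[:, None], loc=mus[None, :], scale=sigmas[None, :])  # T x K
            alpha = pi0 * np.exp(logB[0] - logB[0].max())
            ll = logB[0].max() + np.log(alpha.sum())
            alpha /= alpha.sum()
            for t in range(1, len(y)):
                alpha = (alpha @ A) * np.exp(logB[t] - logB[t].max())
                ll += logB[t].max() + np.log(alpha.sum())
                alpha /= alpha.sum()
            return ll

        # Example: a two-state HMM evaluated on a synthetic noisy trace.
        rng = np.random.default_rng(4)
        y = np.r_[rng.normal(0.0, 0.3, 200), rng.normal(1.0, 0.3, 200)]
        A = np.array([[0.99, 0.01], [0.01, 0.99]])
        print(hmm_loglik(y, np.array([0.5, 0.5]), A, np.array([0.0, 1.0]), np.array([0.3, 0.3])))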

    The ensemble of random Markov matrices

    The ensemble of random Markov matrices is introduced as a set of Markov or stochastic matrices with the maximal Shannon entropy. The statistical properties of the stationary distribution pi, the average entropy growth rate h, and the second largest eigenvalue nu across the ensemble are studied. It is shown and heuristically proven that the entropy growth rate and the second largest eigenvalue of Markov matrices scale on average with the matrix dimension d as h ~ log(O(d)) and nu ~ d^(-1/2), respectively, yielding the asymptotic relation h tau_c ~ 1/2 between the entropy h and the correlation decay time tau_c = -1/log|nu|. Additionally, the correlation between h and tau_c is analysed and found to decrease with increasing dimension d. Comment: 12 pages, 6 figures
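
    Since the maximal-entropy ensemble corresponds to drawing each row from the flat Dirichlet distribution, the quoted scalings can be checked directly by simulation. A rough NumPy sketch (single-sample estimates; the dimensions are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(2)

        def markov_stats(d):
            # Maximum-entropy random stochastic matrix: rows i.i.d. flat Dirichlet(1, ..., 1).
            M = rng.dirichlet(np.ones(d), size=d)
            w, v = np.linalg.eig(M.T)
            order = np.argsort(-np.abs(w))
            pi = np.real(v[:, order[0]])
            pi = pi / pi.sum()                                 # stationary distribution
            nu = np.abs(w[order[1]])                           # second largest eigenvalue (modulus)
            h = -np.sum(pi[:, None] * M * np.log(M))           # entropy growth rate
            tau_c = -1.0 / np.log(nu)                          # correlation decay time
            return h, nu, tau_c

        for d in (50, 200, 800):
            h, nu, tau_c = markov_stats(d)
            print(f"d={d:4d}  h={h:.3f}  log(d)={np.log(d):.3f}  nu*sqrt(d)={nu * np.sqrt(d):.3f}  h*tau_c={h * tau_c:.3f}")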

    The bead process for beta ensembles

    The bead process introduced by Boutillier is a countable interlacing of the determinantal sine-kernel point processes. We construct the bead process for general sine beta processes as an infinite-dimensional Markov chain whose transition mechanism is explicitly described. We show that this process is the microscopic scaling limit in the bulk of the Hermite beta corner process introduced by Gorin and Shkolnikov, generalizing the process of the minors of the Gaussian unitary and orthogonal ensembles. In order to prove our results, we use bounds on the variance of the point counting of the circular and the Gaussian beta ensembles, proven in a companion paper.
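
    The Gaussian beta ensembles mentioned above can be sampled for any beta > 0 via the Dumitriu-Edelman tridiagonal model, which is convenient for numerically exploring such bulk statements. The sketch below is a generic sampler of that ensemble, not the bead-process construction itself:

        import numpy as np

        def gaussian_beta_ensemble(n, beta, rng):
            """Eigenvalues of the n x n Gaussian (Hermite) beta ensemble via the
            Dumitriu-Edelman tridiagonal model, valid for any beta > 0."""
            diag = rng.standard_normal(n)                                          # N(0, 2) / sqrt(2)
            off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1))) / np.sqrt(2.0)
            return np.linalg.eigvalsh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))

        rng = np.random.default_rng(3)
        lam = gaussian_beta_ensemble(1000, beta=2.0, rng=rng)
        # Rescaled by sqrt(n), the spectrum approaches the semicircle law on [-2, 2].
        print(lam.min() / np.sqrt(1000), lam.max() / np.sqrt(1000))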

    Cruising The Simplex: Hamiltonian Monte Carlo and the Dirichlet Distribution

    Due to its constrained support, the Dirichlet distribution is uniquely suited to many applications. The constraints that make it powerful, however, can also hinder practical implementations, particularly those utilizing Markov chain Monte Carlo (MCMC) techniques such as Hamiltonian Monte Carlo. I demonstrate a series of transformations that reshape the canonical Dirichlet distribution into a form much more amenable to MCMC algorithms. Comment: 5 pages, 0 figures
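
    One standard family of such transformations maps the simplex to unconstrained space by stick-breaking, folding the Jacobian into the target log density (this is, for example, the transform used by Stan; it is shown here as a representative reparameterization, not necessarily the exact sequence of transformations in the paper):

        import numpy as np
        from scipy.special import expit, gammaln

        def stickbreak(y):
            """Map unconstrained y in R^(K-1) to the K-simplex via stick-breaking,
            returning the point x and the log-Jacobian of the transform."""
            K = y.size + 1
            x = np.empty(K)
            stick, log_jac = 1.0, 0.0
            for k in range(K - 1):
                z = expit(y[k] - np.log(K - k - 1))   # fraction of the remaining stick to break off
                x[k] = stick * z
                log_jac += np.log(z) + np.log1p(-z) + np.log(stick)
                stick -= x[k]
            x[-1] = stick
            return x, log_jac

        def log_target(y, alpha):
            """Unconstrained log density: Dirichlet(alpha) log-pdf plus the Jacobian term.
            An HMC sampler can explore y in R^(K-1) with no simplex constraint to enforce."""
            x, log_jac = stickbreak(y)
            log_dir = gammaln(alpha.sum()) - gammaln(alpha).sum() + np.sum((alpha - 1.0) * np.log(x))
            return log_dir + log_jac

        # Example: evaluate the unconstrained target for a Dirichlet(2, 2, 2); y = 0 maps to the simplex center.
        print(log_target(np.zeros(2), np.full(3, 2.0)))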