Statistical Machine Learning Methods for High-dimensional Neural Population Data Analysis
Advances in recording techniques have been producing increasingly complex neural recordings, posing significant challenges for data analysis. This thesis develops novel statistical methods for analyzing high-dimensional neural data. Part one discusses two extensions of state space models tailored to neural data analysis. First, we propose using a flexible count-data distribution family in the observation model to faithfully capture over-dispersion and under-dispersion of the neural observations. Second, we incorporate nonlinear observation models into state space models to improve the flexibility of the model and obtain a more concise representation of the data. For both extensions, novel variational inference techniques are developed for model fitting, and simulated and real experiments demonstrate the advantages of our extensions. Part two discusses a fast region-of-interest (ROI) detection method for large-scale calcium imaging data based on structured matrix factorization. Part three discusses a method for sampling from a maximum entropy distribution with complicated constraints, which is useful for hypothesis testing in neural data analysis and many other applications related to the maximum entropy formulation. We conclude the thesis with a discussion and future work.
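The over- and under-dispersion that the first extension targets can be illustrated with standard count distributions (the abstract does not name the specific flexible family used, so these distributions are stand-ins); the dispersion index is variance divided by mean, which a Poisson model pins to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Poisson: variance equals mean (dispersion index = 1).
pois = rng.poisson(lam=5.0, size=n)
# Negative binomial: over-dispersed (mean 5, variance 10, index = 2).
negbin = rng.negative_binomial(n=5, p=0.5, size=n)
# Binomial: under-dispersed (mean 5, variance 2.5, index = 0.5).
binom = rng.binomial(n=10, p=0.5, size=n)

for name, x in [("Poisson", pois), ("NegBin", negbin), ("Binomial", binom)]:
    print(f"{name}: dispersion index = {x.var() / x.mean():.2f}")
```

A Poisson observation model cannot represent the second or third case, which is why a family with a free dispersion parameter gives a more faithful fit to spike counts.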
Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains
Latent Gaussian process (GP) models are widely used in neuroscience to uncover hidden state evolutions from sequential observations, mainly in neural activity recordings. While latent GP models provide a principled and powerful solution in theory, the intractable posterior in non-conjugate settings necessitates approximate inference schemes, which may lack scalability. In this work, we propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate computation variational inference (CVI). With cvHM, we are able to perform variational inference of latent neural trajectories with linear time complexity for arbitrary likelihoods. The reparameterization of stationary kernels using Hida-Matérn GPs helps us connect the latent variable models that encode prior assumptions through dynamical systems to those that encode trajectory assumptions through GPs. In contrast to previous work, we use bidirectional information filtering, leading to a more concise implementation. Furthermore, we employ the Whittle approximate likelihood to achieve highly efficient hyperparameter learning.

Comment: Published at ICML 202
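The linear-time claim rests on the state-space view of stationary kernels. As a minimal illustration (not the paper's cvHM, which handles arbitrary likelihoods via CVI; the function name and parameters here are ours), Kalman filtering for the Matérn-1/2 (Ornstein–Uhlenbeck) kernel with a Gaussian likelihood computes the exact GP marginal likelihood in O(T) rather than the O(T³) of a dense solve:

```python
import numpy as np

def ou_kalman_loglik(y, dt, lengthscale, sigma2, noise_var):
    """O(T) marginal log-likelihood of y under a GP with Matern-1/2 (OU)
    kernel k(t) = sigma2 * exp(-|t|/lengthscale) plus Gaussian noise.

    The OU kernel has an exact one-dimensional linear state-space form,
    so Kalman filtering replaces the dense O(T^3) GP computation.
    """
    a = np.exp(-dt / lengthscale)       # one-step state transition
    q = sigma2 * (1.0 - a ** 2)         # process noise keeping stationary var = sigma2
    m, p = 0.0, sigma2                  # stationary prior on the first state
    ll = 0.0
    for yt in y:
        s = p + noise_var               # predictive (innovation) variance
        r = yt - m                      # innovation
        ll += -0.5 * (np.log(2.0 * np.pi * s) + r ** 2 / s)
        k = p / s                       # Kalman gain
        m, p = m + k * r, (1.0 - k) * p # filtered posterior
        m, p = a * m, a ** 2 * p + q    # predict the next state
    return ll

# Usage: score a sequence of observations on a regular time grid.
rng = np.random.default_rng(0)
T, dt, ell, s2, nv = 100, 0.1, 1.0, 2.0, 0.5
y = rng.normal(size=T)
ll = ou_kalman_loglik(y, dt, ell, s2, nv)
```

Hida-Matérn kernels extend this idea to a broad class of stationary covariances (each admitting a finite-dimensional state-space embedding), which is what lets cvHM keep the linear-time filtering structure while swapping in non-Gaussian likelihoods.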