
    On Tensors, Sparsity, and Nonnegative Factorizations

    Tensors have found application in fields ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To that end, we propose that the random variation is best described by a Poisson distribution, which captures the zeros observed in the data better than the typical Gaussian assumption. Under the Poisson assumption, we fit the model to observed data by minimizing the negative log-likelihood. We present a new algorithm for Poisson tensor factorization, CANDECOMP-PARAFAC Alternating Poisson Regression (CP-APR), based on a majorization-minimization approach; CP-APR generalizes the Lee-Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions. We also explain how to implement CP-APR for large-scale sparse tensors and present results on several data sets, both real and simulated.
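    The core computational step this abstract describes is a multiplicative update derived from majorization-minimization of the Poisson negative log-likelihood. The sketch below illustrates that idea in the simplest setting, a nonnegative matrix (two-way tensor) factorization with the Lee-Seung-style updates that CP-APR generalizes; it is not the paper's implementation, and the function name poisson_nmf and all parameter choices are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's CP-APR code): Lee-Seung-style multiplicative
# updates minimizing the Poisson negative log-likelihood for X ~ Poisson(W @ H),
# the matrix special case of the objective CP-APR optimizes for CP tensor models.
def poisson_nmf(X, rank, n_iter=200, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((X / WH) @ H.T) / (H.sum(axis=1) + eps)            # update W with H fixed
        WH = W @ H + eps
        H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)   # update H with W fixed
    return W, H

# Usage: factor a small sparse count matrix, then evaluate the Poisson negative
# log-likelihood (up to a constant) of the fitted model.
X = np.random.default_rng(1).poisson(lam=0.5, size=(50, 40)).astype(float)
W, H = poisson_nmf(X, rank=3)
nll = ((W @ H) - X * np.log(W @ H + 1e-10)).sum()
```

    Each multiplicative update is guaranteed not to increase the objective; CP-APR contributes the tensor generalization plus the safeguards, described in the abstract, that keep the iterates away from non-KKT points.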

    Decomposition and classification of electroencephalography data


    On consciousness, resting state fMRI, and neurodynamics


    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One


    A latent variable modeling framework for analyzing neural population activity

    Neuroscience is entering the age of big data, due to technological advances in electrical and optical recording techniques. Where historically neuroscientists could record activity from only single neurons at a time, recent advances allow the measurement of activity from many neurons simultaneously. In fact, this advancement follows a Moore's-Law-style trend, in which the number of simultaneously recorded neurons more than doubles every seven years, and it is now common to see simultaneous recordings from hundreds and even thousands of neurons. The consequences of this data revolution for our understanding of brain structure and function cannot be overstated. Not only is there opportunity to address old questions in new ways, but, more importantly, these experimental techniques will allow neuroscientists to address entirely new questions. Addressing these questions successfully, however, requires the development of a wide range of new data analysis tools. Many of these tools will draw on recent advances in machine learning and statistics; in particular, there has been a push to develop methods that can accurately model the statistical structure of high-dimensional neural activity. In this dissertation I develop a latent variable modeling framework for analyzing such high-dimensional neural data. First, I demonstrate how this framework can be used in an unsupervised fashion as an exploratory tool for large datasets. Next, I extend this framework to incorporate nonlinearities in two distinct ways, and show that the resulting models far outperform standard linear models at capturing the structure of neural activity. Finally, I use this framework to develop a new algorithm for decoding neural activity, and use this as a tool to address questions about how information is represented in populations of neurons.
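    To make the idea of latent variable modeling of neural populations concrete, here is a toy sketch of an unsupervised linear-Gaussian latent variable model (factor analysis) applied to simulated population activity. This is a generic illustration of the kind of standard linear baseline the abstract refers to, not the dissertation's models; every name and parameter below is invented for the example.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate population activity: a few latent trajectories Z drive many neurons
# through a loading matrix C, plus independent per-neuron observation noise.
rng = np.random.default_rng(0)
n_neurons, n_timepoints, n_latents = 100, 2000, 5
Z = rng.standard_normal((n_timepoints, n_latents))      # latent states over time
C = rng.standard_normal((n_latents, n_neurons)) * 0.5   # per-neuron loadings
activity = Z @ C + 0.3 * rng.standard_normal((n_timepoints, n_neurons))

# Fit an unsupervised latent variable model to recover low-dimensional structure.
fa = FactorAnalysis(n_components=n_latents)
latents = fa.fit_transform(activity)                    # shape: (time, latents)
print(latents.shape)                                    # (2000, 5)
```

    The nonlinear extensions the abstract mentions go beyond a fixed linear mapping like this one; the sketch only shows the linear baseline such models are compared against.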

    29th Annual Computational Neuroscience Meeting: CNS*2020

    Meeting abstracts. This publication was funded by OCNS. The Supplement Editors declare that they have no competing interests. Virtual | 18-22 July 2020