20 research outputs found

    Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition

    Get PDF
    The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanisms of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF)-based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex of a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantages of the sparse NTF-based feature extraction approach lie in its capability to yield components that are common across the space, time, and frequency domains yet discriminative across different conditions, without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ a support vector machine (SVM) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also carry some discriminative information, the gamma-band feature is the most discriminative for bistable perception, and that imposing sparseness constraints on the nonnegative tensor factorization improves extraction of this feature.
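The pipeline described above — factorize a trials × frequency × time tensor, then classify trials from their component loadings — can be sketched in miniature. This is a hedged illustration, not the paper's method: it uses synthetic data, a plain Frobenius-norm nonnegative CP decomposition with multiplicative updates (the paper's sparseness constraints are omitted), and a linear SVM on the trial-mode factor.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def unfold(X, mode):
    """Mode-n matricization of a 3-way tensor (C-order row indexing)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def ntf(X, rank, n_iter=200, eps=1e-9):
    """Nonnegative CP decomposition via multiplicative updates (no sparseness term)."""
    factors = [rng.random((s, rank)) for s in X.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            Z = khatri_rao(others[0], others[1])
            num = unfold(X, mode) @ Z
            den = factors[mode] @ (Z.T @ Z) + eps
            factors[mode] *= num / den
    return factors

# Synthetic trials x frequency x time tensor: two classes, each loading
# on a different latent (frequency pattern x time pattern) component.
n_trials, n_freq, n_time = 60, 20, 30
freq_pat = rng.random((n_freq, 2))
time_pat = rng.random((n_time, 2))
labels = np.repeat([0, 1], n_trials // 2)
loadings = np.zeros((n_trials, 2))
loadings[labels == 0, 0] = 1.0 + 0.1 * rng.random(30)
loadings[labels == 1, 1] = 1.0 + 0.1 * rng.random(30)
X = np.einsum('ir,jr,kr->ijk', loadings, freq_pat, time_pat)
X += 0.01 * rng.random(X.shape)

A, B, C = ntf(X, rank=2)            # A holds per-trial component loadings
clf = SVC(kernel='linear').fit(A, labels)
acc = clf.score(A, labels)          # single-trial decoding from NTF features
```

On this clean synthetic tensor the trial-mode loadings separate the two classes almost perfectly; the real LFP setting adds a channel mode and the sparseness penalty on the factors.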

    On Tensors, Sparsity, and Nonnegative Factorizations

    Full text link
    Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data than the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP-PARAFAC Alternating Poisson Regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee-Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions. We also explain how to implement CP-APR for large-scale sparse tensors and present results on several data sets, both real and simulated.
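The Lee-Seung multiplicative updates that CP-APR generalizes can be shown in their simplest (matrix) form: minimizing the generalized KL divergence D(V || WH), which equals the Poisson negative log-likelihood up to a term constant in W and H. The sketch below is the two-way NMF special case on synthetic count data, not the tensor CP-APR algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_div(V, WH):
    """Generalized KL divergence D(V || WH); Poisson NLL up to a constant in W, H."""
    mask = V > 0
    return (V[mask] * np.log(V[mask] / WH[mask])).sum() - V.sum() + WH.sum()

# Synthetic sparse count data drawn from a low-rank Poisson model:
# small means produce many zeros, matching the sparse-count setting.
Wtrue = rng.random((40, 3))
Htrue = rng.random((3, 30))
V = rng.poisson(Wtrue @ Htrue).astype(float)

rank, eps = 3, 1e-10
W = rng.random((40, rank)) + 0.1
H = rng.random((rank, 30)) + 0.1
kl0 = kl_div(V, W @ H + eps)        # divergence at initialization

for _ in range(300):
    WH = W @ H + eps
    W *= (V / WH) @ H.T / (H.sum(axis=1) + eps)           # Lee-Seung KL update for W
    WH = W @ H + eps
    H *= W.T @ (V / WH) / (W.sum(axis=0)[:, None] + eps)  # Lee-Seung KL update for H

kl1 = kl_div(V, W @ H + eps)        # divergence after fitting
```

These updates are a majorization-minimization scheme, so each sweep is guaranteed not to increase the divergence; CP-APR extends the same idea to the tensor CP model and adds safeguards against convergence to non-KKT points.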

    On consciousness, resting state fMRI, and neurodynamics

    Get PDF

    A latent variable modeling framework for analyzing neural population activity

    Get PDF
    Neuroscience is entering the age of big data, due to technological advances in electrical and optical recording techniques. Where historically neuroscientists have only been able to record activity from single neurons at a time, recent advances allow the measurement of activity from multiple neurons simultaneously. In fact, this advancement follows a Moore’s Law-style trend, where the number of simultaneously recorded neurons more than doubles every seven years, and it is now common to see simultaneous recordings from hundreds and even thousands of neurons. The consequences of this data revolution for our understanding of brain structure and function cannot be overstated. Not only is there opportunity to address old questions in new ways, but more importantly these experimental techniques will allow neuroscientists to address entirely new questions. However, addressing these questions successfully requires the development of a wide range of new data analysis tools. Many of these tools will draw on recent advances in machine learning and statistics, and in particular there has been a push to develop methods that can accurately model the statistical structure of high-dimensional neural activity. In this dissertation I develop a latent variable modeling framework for analyzing such high-dimensional neural data. First, I demonstrate how this framework can be used in an unsupervised fashion as an exploratory tool for large datasets. Next, I extend this framework to incorporate nonlinearities in two distinct ways, and show that the resulting models far outperform standard linear models at capturing the structure of neural activity. Finally, I use this framework to develop a new algorithm for decoding neural activity, and use this as a tool to address questions about how information is represented in populations of neurons.
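The core idea of a linear latent variable model for population activity — high-dimensional recordings explained by a few shared latent factors plus per-neuron noise — can be illustrated with factor analysis. This is a generic sketch on synthetic data using scikit-learn's `FactorAnalysis`, not the dissertation's specific framework; the variable names and dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_trials, n_neurons, n_latents = 500, 30, 3

# Ground truth: a low-dimensional latent state drives all neurons
# through a fixed loading matrix, plus independent per-neuron noise.
loading = rng.normal(size=(n_latents, n_neurons))
latents = rng.normal(size=(n_trials, n_latents))
noise = 0.5 * rng.normal(size=(n_trials, n_neurons))
Y = latents @ loading + noise       # trials x neurons "population activity"

# Fit factor analysis models of different latent dimensionality and
# compare average per-trial log-likelihood.
fa1 = FactorAnalysis(n_components=1, random_state=0).fit(Y)
fa3 = FactorAnalysis(n_components=3, random_state=0).fit(Y)
score1, score3 = fa1.score(Y), fa3.score(Y)

Z = fa3.transform(Y)                # inferred latent trajectories per trial
```

Since the data were generated from three latent factors, the three-factor model attains a higher log-likelihood than the one-factor model; the nonlinear extensions described in the abstract replace the linear mapping `latents @ loading` with learned nonlinear functions.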

    Decomposition and classification of electroencephalography data

    Get PDF

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One

    Get PDF

    29th Annual Computational Neuroscience Meeting: CNS*2020

    Get PDF
    Meeting abstracts. This publication was funded by OCNS. The Supplement Editors declare that they have no competing interests. Virtual | 18-22 July 202