Identifying Functional Bases for Multidimensional Neural Computations
Current dimensionality reduction methods can identify relevant subspaces for neural computations, but they do not favor one basis over another within the relevant subspace. Finding the appropriate basis can further simplify the description of the nonlinear computation with respect to the relevant variables, making it easier to elucidate the underlying neural computation and to form hypotheses about the neural circuitry giving rise to the observed responses. Part of the problem is that, although some dimensionality reduction methods can identify many of the relevant dimensions, it is usually difficult to map out or interpret the nonlinear transformation with respect to more than a few relevant dimensions simultaneously without simplifying assumptions. While recent approaches make it possible to build predictive models based on many relevant dimensions simultaneously, there remains a need to relate such predictive models to mechanistic descriptions of the underlying neural circuitry. Here we demonstrate that transforming to a basis within the relevant subspace in which the neural computation is best described by a given nonlinear function often makes the computation easier to interpret and to describe with a small number of parameters. We refer to the corresponding basis as the functional basis, and we illustrate the utility of such a transformation in the context of logical OR and logical AND functions. We show that although dimensionality reduction methods such as spike-triggered covariance can find a relevant subspace, they often produce dimensions that are difficult to interpret and do not correspond to a functional basis. The functional features can instead be found using a maximum likelihood approach. The results are illustrated using simulated neurons and recordings from retinal ganglion cells. The resulting features are uniquely defined, non-orthogonal, and make it easier to relate computational and mechanistic models to each other.
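To make the contrast the abstract draws concrete, here is a minimal sketch (not code from the paper; the simulated neuron, `stimuli`, and `spikes` below are all hypothetical) of a spike-triggered covariance analysis on a simulated logical-OR neuron. The eigenvectors recovered by STC span the relevant subspace, but because they are orthogonal by construction they need not align with the non-orthogonal features the neuron actually combines, which is the gap the maximum-likelihood functional-basis approach is meant to close.

```python
import numpy as np

# Illustrative sketch only: a simulated logical-OR neuron and a
# spike-triggered covariance (STC) analysis. The simulation and all
# variable names are assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n_samples, n_dims = 50_000, 20
stimuli = rng.standard_normal((n_samples, n_dims))

# Two deliberately non-orthogonal features; the neuron fires when
# EITHER feature is strongly driven (a logical OR).
w1 = np.eye(n_dims)[0]
w2 = (np.eye(n_dims)[0] + np.eye(n_dims)[1]) / np.sqrt(2.0)
spikes = ((stimuli @ w1 > 1.5) | (stimuli @ w2 > 1.5)).astype(float)

# STC: compare the covariance of spike-triggered stimuli with the
# covariance of the raw stimulus ensemble.
sta = spikes @ stimuli / spikes.sum()
triggered = stimuli[spikes > 0] - sta
delta_cov = triggered.T @ triggered / spikes.sum() - np.cov(stimuli.T)
eigvals, eigvecs = np.linalg.eigh(delta_cov)

# The directions with the largest |eigenvalue| span the relevant 2-D
# subspace, but they can be any orthogonal rotation within it; they
# generally do not coincide with w1 and w2, the non-orthogonal
# functional basis the simulated neuron actually uses.
subspace = eigvecs[:, np.argsort(np.abs(eigvals))[-2:]]
```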
Machine Learning As Tool And Theory For Computational Neuroscience
Computational neuroscience is in the midst of constructing a new framework for understanding the brain based on the ideas and methods of machine learning. This effort has been encouraged, in part, by recent advances in neural network models, and it is also driven by a recognition of the complexity of neural computation and the challenges this poses for neuroscience's methods. In this dissertation, I first describe the problems of complexity that have prompted this shift in focus. In particular, I develop machine learning tools for neurophysiology that help test whether tuning curves and other statistical models in fact capture the meaning of neural activity. Then, taking up a machine learning framework for understanding, I consider theories of how neural computation emerges from experience. Specifically, I develop hypotheses about the potential learning objectives of sensory plasticity, the potential learning algorithms in the brain, and the consequences for sensory representations of learning with such algorithms. These hypotheses draw on advances in several areas of machine learning, including optimization, representation learning, and deep learning theory. Each of these subfields offers insights for neuroscience, supplying links in a chain of knowledge about how we learn and think. Together, this dissertation furthers an understanding of the brain through the lens of machine learning.
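One way to read "testing whether tuning curves capture neural activity" is as a held-out model comparison; the sketch below illustrates that idea only. Everything in it (the synthetic orientation data, the cosine tuning model, the Poisson likelihood) is an assumption chosen for illustration, not the dissertation's actual method.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged illustration: fit a cosine tuning curve as a Poisson GLM and
# score it on held-out trials. Synthetic data; names are hypothetical.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 2_000)   # stimulus orientation
true_rate = np.exp(1.0 - 1.5 * np.cos(theta))  # ground-truth tuning
counts = rng.poisson(true_rate)                # spike counts per trial

def design(th):
    # Intercept plus first-order cosine/sine basis.
    return np.column_stack([np.ones_like(th), np.cos(th), np.sin(th)])

def neg_loglik(w, X, y):
    # Negative Poisson log-likelihood (up to the y! constant).
    eta = X @ w
    return np.sum(np.exp(eta) - y * eta)

train, test = slice(0, 1_500), slice(1_500, None)
fit = minimize(neg_loglik, np.zeros(3),
               args=(design(theta[train]), counts[train]))

# Held-out log-likelihood: a yardstick for asking whether the tuning
# curve captures the activity, e.g. relative to a more flexible model.
mu = np.exp(design(theta[test]) @ fit.x)
heldout_ll = np.sum(counts[test] * np.log(mu) - mu)
print(f"held-out Poisson log-likelihood: {heldout_ll:.1f}")
```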