2,834 research outputs found

    An Information-Theoretic Approach for Evaluating Probabilistic Tuning Functions of Single Neurons

    Neuronal tuning functions can be expressed by the conditional probability of observing a spike given any combination of explanatory variables. However, accurately determining such probabilistic tuning functions from experimental data poses several challenges, such as finding the right combination of explanatory variables and determining their proper neuronal latencies. Here we present a novel approach for estimating and evaluating such probabilistic tuning functions, which offers a solution to these problems. By maximizing the mutual information between the probability distributions of spike occurrence and the variables, their neuronal latency can be estimated, and the dependence of neuronal activity on different combinations of variables can be measured. This method was used to analyze neuronal activity in cortical area MSTd in terms of dependence on signals related to eye and retinal image movement. Comparison with conventional feature detection and regression analysis techniques shows that our method offers distinct advantages when the dependence does not match the regression model.
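
    The latency-estimation step described above can be sketched in code. The snippet below is a minimal illustration, not the authors' actual estimator: it assumes a binned binary spike train and a single continuous explanatory variable, uses a simple plug-in histogram estimate of the mutual information, and scans candidate latencies for the one that maximizes it. Function names, bin counts, and the lag range are illustrative choices.

```python
import numpy as np

def mutual_information(spikes, variable, n_bins=20):
    """Plug-in estimate of I(spike; variable) from a binary spike train
    (NumPy array of 0/1) and a co-registered continuous variable."""
    edges = np.linspace(variable.min(), variable.max(), n_bins + 1)
    var_bin = np.digitize(variable, edges[1:-1])        # bins 0 .. n_bins-1
    p_spike = np.array([1.0 - spikes.mean(), spikes.mean()])
    mi = 0.0
    for s in (0, 1):
        for b in range(n_bins):
            p_joint = np.mean((spikes == s) & (var_bin == b))
            p_b = np.mean(var_bin == b)
            if p_joint > 0.0:
                mi += p_joint * np.log2(p_joint / (p_spike[s] * p_b))
    return mi

def estimate_latency(spikes, variable, max_lag=50):
    """Shift the variable by candidate latencies (in bins) and return the
    lag that maximizes the mutual information with the spike train."""
    lags = np.arange(max_lag + 1)
    mi = [mutual_information(spikes[lag:], variable[:len(variable) - lag])
          for lag in lags]
    return lags[int(np.argmax(mi))], mi
```

    The same scan can be repeated for different candidate variables or combinations of variables, with the combination yielding the highest mutual information taken as the one the neuron's activity depends on most strongly, which is the comparison the abstract describes.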

    Eye velocity gain fields for visuo-motor coordinate transformations

    'Gain-field-like' tuning behavior is characterized by a modulation of the neuronal response depending on a certain variable, without changing the actual receptive field characteristics in relation to another variable. Eye position gain fields were first observed in area 7a of the posterior parietal cortex (PPC), where visually responsive neurons are modulated by ocular position. Analysis of artificial neural networks has shown that this type of tuning function might comprise the neuronal substrate for coordinate transformations. In this work, neuronal activity in the dorsal medial superior temporal area (MSTd) was analyzed with a focus on its involvement in oculomotor control. MSTd is part of the extrastriate visual cortex and located in the PPC. Lesion studies have suggested a participation of this cortical area in the control of eye movements. Inactivation of MSTd severely impairs the optokinetic response (OKR), a reflex-like eye movement that compensates for motion of the whole visual scene. Using a novel, information-theory-based approach for neuronal data analysis, we were able to identify those visual and eye-movement-related signals that were most strongly correlated with the mean rate of spiking activity in MSTd neurons during optokinetic stimulation. In a majority of neurons, the firing rate was non-linearly related to a combination of retinal image velocity and eye velocity. The observed neuronal latency relative to these signals is in line with a system-level model of OKR, in which an efference copy of the motor command signal is used to generate an internal estimate of the head-centered stimulus velocity signal. Tuning functions were obtained using a probabilistic approach. In most MSTd neurons these functions exhibited gain-field-like shapes, with eye velocity modulating the visual response in a multiplicative manner. Population analysis revealed a large diversity of tuning forms, including asymmetric and non-separable functions. The distribution of gain fields was almost identical to the predictions from a neural network model trained to perform the summation of image and eye velocity. These findings therefore strongly support the hypothesis that MSTd participates in the OKR control system by implementing the transformation from retinal image velocity to an estimate of stimulus velocity. In this sense, eye velocity gain fields constitute an intermediate step in transforming the eye-centered visual motion signal into a head-centered one.
    Another aspect addressed in this work was the comparison of the irregularity of MSTd spiking activity during the optokinetic response with that during pure visual stimulation. The goal of this analysis was to evaluate potential neuronal mechanisms underlying the observed gain field behavior. We found that both inter- and intra-trial variability decreased with increasing retinal image velocity but increased with eye velocity. This observation argues against a symmetrical integration of driving and modulating inputs. Instead, we propose an architecture in which multiplicative gain modulation is achieved by a simultaneous increase of excitatory and inhibitory background synaptic input. A conductance-based single-compartment model neuron was able to simultaneously reproduce realistic gain modulation and the observed stimulus dependence of neural variability.
In summary, this work improves our understanding of MSTd's role in visuomotor transformation by analyzing functional and mechanistic aspects of eye velocity gain fields at the systems, network, and neuronal levels.
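
    The proposed mechanism can be made concrete with a toy simulation. The following is a minimal sketch, assuming a conductance-based integrate-and-fire neuron driven by a constant current plus excitatory and inhibitory background conductances that are scaled together; the parameter values, the per-time-step Gaussian fluctuations, and the function name are illustrative assumptions, not the single-compartment model actually used in this work.

```python
import numpy as np

def mean_rate(I_drive, bg_scale, t_max=5.0, dt=0.1e-3, seed=0,
              C=200e-12, g_L=10e-9, E_L=-70e-3, E_E=0.0, E_I=-80e-3,
              V_th=-50e-3, V_reset=-60e-3):
    """Conductance-based integrate-and-fire neuron driven by a constant
    current plus fluctuating excitatory/inhibitory background conductances.
    `bg_scale` scales both background conductances together (mean and
    fluctuations), mimicking a simultaneous increase of excitation and
    inhibition. Returns the mean firing rate in Hz."""
    rng = np.random.default_rng(seed)
    g_E0, g_I0 = 5e-9 * bg_scale, 20e-9 * bg_scale   # mean background conductances (illustrative)
    sig_E, sig_I = 0.5 * g_E0, 0.5 * g_I0            # fluctuation amplitudes (illustrative)
    V, n_spikes = E_L, 0
    for _ in range(int(t_max / dt)):
        g_E = max(g_E0 + sig_E * rng.standard_normal(), 0.0)
        g_I = max(g_I0 + sig_I * rng.standard_normal(), 0.0)
        dV = (-g_L * (V - E_L) - g_E * (V - E_E) - g_I * (V - E_I) + I_drive) / C
        V += dt * dV
        if V >= V_th:                                # threshold crossing: spike and reset
            V, n_spikes = V_reset, n_spikes + 1
    return n_spikes / t_max
```

    Sweeping `I_drive` for a few values of `bg_scale` traces out rate-current curves whose shape and variability can then be compared across background levels; in the work summarized above, a conductance-based model of this general kind reproduced both the gain modulation and the stimulus dependence of spiking variability.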

    Computational Cognitive Neuroscience

    This chapter provides an overview of the basic research strategies and analytic techniques deployed in computational cognitive neuroscience. On the one hand, “top-down” (or reverse-engineering) strategies are used to infer, from formal characterizations of behavior and cognition, the computational properties of underlying neural mechanisms. On the other hand, “bottom-up” research strategies are used to identify neural mechanisms and to reconstruct their computational capacities. Both of these strategies rely on experimental techniques familiar from other branches of neuroscience, including functional magnetic resonance imaging, single-cell recording, and electroencephalography. What sets computational cognitive neuroscience apart, however, is the explanatory role of analytic techniques from disciplines as varied as computer science, statistics, machine learning, and mathematical physics. These techniques serve to describe neural mechanisms computationally, but also to drive the process of scientific discovery by influencing which kinds of mechanisms are most likely to be identified. For this reason, understanding the nature and unique appeal of computational cognitive neuroscience requires not just an understanding of the basic research strategies that are involved, but also of the formal methods and tools that are being deployed, including those of probability theory, dynamical systems theory, and graph theory.
