
    The iso-response method

    Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient in understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual systems, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments.
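    As a toy illustration of the closed-loop search strategy the abstract describes, the sketch below bisects on stimulus amplitude along a fixed direction in stimulus space until a model neuron reaches a predefined target rate. The LN-model neuron, its parameters, and all function names are illustrative assumptions, not taken from the reviewed studies.

```python
import numpy as np

def ln_neuron(stim, w, threshold=1.0, gain=5.0):
    """Toy LN model: linear filter followed by a rectifying nonlinearity."""
    drive = float(np.dot(w, stim))
    return gain * max(drive - threshold, 0.0)   # firing rate in Hz

def iso_response_amplitude(direction, target_rate, w, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect on stimulus amplitude along `direction` until the model neuron
    fires at `target_rate` (assumes the rate is monotone in amplitude)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        rate = ln_neuron(mid * direction, w)
        if abs(rate - target_rate) < tol:
            break
        if rate < target_rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w = np.array([0.6, 0.8])    # the neuron's linear filter (illustrative)
target = 4.0                # predefined iso-response rate (Hz)
a1 = iso_response_amplitude(np.array([1.0, 0.0]), target, w)
a2 = iso_response_amplitude(np.array([0.0, 1.0]), target, w)
# (a1, 0) and (0, a2) lie on the same iso-response contour of this model.
```

    Repeating the search along many directions traces out the iso-response contour, whose shape reveals the integration nonlinearity.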

    Passive stabilization for large space systems

    The optimal tuning of multiple tuned-mass dampers for the transient vibration damping of large space structures is investigated. A multidisciplinary approach is used. Structural dynamic techniques are applied to gain physical insight into absorber/structure interaction and to optimize specific cases. Modern control theory and parameter optimization techniques are applied to the general optimization problem. A design procedure for multi-absorber, multi-DOF vibration damping problems is presented. Classical dynamic models are extended to investigate the effects of absorber placement, existing structural damping, and absorber cross-coupling on the optimal design synthesis. The control design process for the general optimization problem is formulated as a linear output feedback control problem via the development of a feedback control canonical form. The techniques are applied to sample micro-g and pointing problems on the NASA dual keel space station.
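    The multi-absorber optimization described above is not reproduced here, but the classical single-absorber starting point can be sketched with Den Hartog's tuning rules for an undamped primary structure under harmonic forcing; the mass ratio and function name below are illustrative.

```python
import math

def den_hartog_tuning(mass_ratio):
    """Den Hartog's classical tuning of a single tuned-mass damper attached
    to an undamped primary structure under harmonic forcing."""
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)                 # absorber/structure frequency ratio
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # absorber damping ratio
    return f_opt, zeta_opt

f_ratio, zeta = den_hartog_tuning(0.05)      # 5% absorber mass
```

    Multi-absorber, multi-mode designs replace these closed-form rules with the numerical parameter optimization the abstract describes.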

    Neurons in Primate Visual Cortex Alternate between Responses to Multiple Stimuli in Their Receptive Field

    A fundamental question concerning representation of the visual world in our brain is how a cortical cell responds when presented with more than a single stimulus. Traditionally, the firing rate is assumed to be a weighted average of the firing rates to the individual stimuli (response-averaging model) (Bundesen et al., 2005). Here, we also evaluate a probability-mixing model (Bundesen et al., 2005), where neurons temporally multiplex the responses to the individual stimuli. We find supportive evidence that most cells presented with a pair of stimuli respond predominantly to one stimulus at a time, rather than with a weighted-average response. This provides a mechanism by which the representational identity of multiple stimuli in complex visual scenes can be maintained despite the large receptive fields in higher extrastriate visual cortex in primates. We compare the two models through analysis of data from single cells in the middle temporal visual area (MT) of rhesus monkeys when presented with two separate stimuli inside their receptive field with attention directed to one of the two stimuli or outside the receptive field. The spike trains were modeled by stochastic point processes, including memory effects of past spikes and attentional effects, and statistical model selection between the two models was performed by information theoretic measures as well as the predictive accuracy of the models. As an auxiliary measure, we also tested for uni- or multimodality in interspike interval distributions, and performed a correlation analysis of simultaneously recorded pairs of neurons, to evaluate population behavior.
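    A minimal simulation contrasting the two models might look as follows; the rates, attention weight, and plain Poisson spiking are illustrative assumptions, not the paper's fitted point-process models.

```python
import numpy as np

rng = np.random.default_rng(0)
r1, r2 = 40.0, 10.0    # firing rates (Hz) to each stimulus presented alone
alpha = 0.6            # mixing/attention weight on stimulus 1
T = 1.0                # trial duration (s)
n_trials = 20000

# Response-averaging model: every trial fires at the weighted-average rate.
avg_counts = rng.poisson((alpha * r1 + (1.0 - alpha) * r2) * T, n_trials)

# Probability-mixing model: on each trial the neuron responds to one stimulus.
pick_first = rng.random(n_trials) < alpha
mix_counts = rng.poisson(np.where(pick_first, r1, r2) * T)

# Same mean rate, but mixing yields bimodal, over-dispersed spike counts --
# the kind of signature the paper's model selection and multimodality tests probe.
```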

    Approximations of Shannon Mutual Information for Discrete Variables with Applications to Neural Population Coding

    Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may potentially bring convenience to the applications of information theory to many practical and theoretical problems. Comment: 31 pages, 6 figures
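    For orientation, the quantity being approximated can be computed exactly for a small discrete stimulus-response table; the toy joint distributions below are illustrative assumptions.

```python
import numpy as np

def mutual_information(p_joint):
    """Exact Shannon mutual information (in bits) of a discrete joint
    distribution p(s, r) -- the quantity the approximations target."""
    p = np.asarray(p_joint, dtype=float)
    ps = p.sum(axis=1, keepdims=True)    # marginal over stimuli
    pr = p.sum(axis=0, keepdims=True)    # marginal over responses
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (ps @ pr)[mask])))

# Deterministic channel: the response identifies the stimulus (1 bit).
mi_perfect = mutual_information([[0.5, 0.0], [0.0, 0.5]])
# Independent: the response carries no stimulus information (0 bits).
mi_indep = mutual_information([[0.25, 0.25], [0.25, 0.25]])
```

    The difficulty the paper addresses is that for a large neural population the response alphabet is astronomically large, so this exact sum is intractable and divergence-based approximations are needed.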

    Random Walks Along the Streets and Canals in Compact Cities: Spectral analysis, Dynamical Modularity, Information, and Statistical Mechanics

    Different models of random walks on the dual graphs of compact urban structures are considered. Analysis of access times between streets helps to detect the city modularity. The statistical mechanics approach to the ensembles of lazy random walkers is developed. The complexity of city modularity can be measured by an information-like parameter which plays the role of an individual fingerprint of Genius loci. Global structural properties of a city can be characterized by the thermodynamical parameters calculated in the random walks problem. Comment: 44 pages, 22 figures, 2 tables
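    The access times mentioned above are mean first-passage times of the walk on the dual graph; for a toy four-street graph they can be obtained by solving a small linear system. The graph and function name are illustrative assumptions.

```python
import numpy as np

# Toy "dual graph": nodes are streets, edges join streets that intersect.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)    # simple random-walk transition matrix

def access_times(P, target):
    """Mean first-passage (access) time from every node to `target`,
    from the linear system (I - Q) h = 1 on the non-target nodes."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[keep] = h
    return out

h = access_times(P, target=0)   # e.g. street 0 as a city "center"
```

    Clustering streets by such access times is one way the modular structure of a city can be detected.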

    Tuning Curves, Neuronal Variability, and Sensory Coding

    Tuning curves are widely used to characterize the responses of sensory neurons to external stimuli, but there is an ongoing debate as to their role in sensory processing. Commonly, it is assumed that a neuron's role is to encode the stimulus at the tuning curve peak, because high firing rates are the neuron's most distinct responses. In contrast, many theoretical and empirical studies have noted that nearby stimuli are most easily discriminated in high-slope regions of the tuning curve. Here, we demonstrate that both intuitions are correct, but that their relative importance depends on the experimental context and the level of variability in the neuronal response. Using three different information-based measures of encoding applied to experimentally measured sensory neurons, we show how the best-encoded stimulus can transition from high-slope to high-firing-rate regions of the tuning curve with increasing noise level. We further show that our results are consistent with recent experimental findings that correlate neuronal sensitivities with perception and behavior. This study illustrates the importance of the noise level in determining the encoding properties of sensory neurons and provides a unified framework for interpreting how the tuning curve and neuronal variability relate to the overall role of the neuron in sensory encoding.
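    The slope-versus-peak intuition can be made concrete with the standard Poisson Fisher information J(theta) = f'(theta)^2 / f(theta) for a bell-shaped tuning curve; all parameter values below are illustrative assumptions, and the noise-dependent transition to the peak that the paper reports is not reproduced here.

```python
import numpy as np

def gaussian_tuning(theta, pref=0.0, width=20.0, peak=30.0, base=2.0):
    """Bell-shaped tuning curve: mean firing rate (Hz) versus stimulus value."""
    return base + peak * np.exp(-0.5 * ((theta - pref) / width) ** 2)

def fisher_information(theta, f, dtheta=1e-4):
    """Poisson Fisher information J(theta) = f'(theta)^2 / f(theta)."""
    fp = (f(theta + dtheta) - f(theta - dtheta)) / (2 * dtheta)
    return fp ** 2 / f(theta)

thetas = np.linspace(-60.0, 60.0, 1201)
J = fisher_information(thetas, gaussian_tuning)
best = thetas[np.argmax(J)]   # most discriminable stimulus: on the flank, not at the peak
```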

    Optimal anticipatory control as a theory of motor preparation

    Supported by a decade of primate electrophysiological experiments, the prevailing theory of neural motor control holds that movement generation is accomplished by a preparatory process that progressively steers the state of the motor cortex into a movement-specific optimal subspace prior to movement onset. The state of the cortex then evolves from these optimal subspaces, producing patterns of neural activity that serve as control inputs to the musculature. This theory, however, does not address the following questions: what characterizes the optimal subspace, and what are the neural mechanisms that underlie the preparatory process? We address these questions with a circuit model of movement preparation and control. Specifically, we propose that preparation can be achieved by optimal feedback control (OFC) of the cortical state via a thalamo-cortical loop. Under OFC, the state of the cortex is selectively controlled along state-space directions that have future motor consequences, but not along inconsequential ones. We show that OFC enables fast movement preparation and explains the observed orthogonality between preparatory and movement-related monkey motor cortex activity. This illustrates the importance of constraining new theories of neural function with experimental data. However, as recording technologies continue to improve, a key challenge is to extract meaningful insights from increasingly large-scale neural recordings. Latent variable models (LVMs) are powerful tools for addressing this challenge due to their ability to identify the low-dimensional latent variables that best explain these large data sets. One shortcoming of most LVMs, however, is that they assume a Euclidean latent space, while many kinematic variables, such as head rotations and the configuration of an arm, are naturally described by variables that live on non-Euclidean latent spaces (e.g., SO(3) and tori).
To address this shortcoming, we propose the Manifold Gaussian Process Latent Variable Model, a method for simultaneously inferring nonparametric tuning curves and latent variables on non-Euclidean latent spaces. We show that our method is able to correctly infer the latent ring topology of the fly and mouse head direction circuits. This work was supported by a Trinity-Henry Barlow scholarship and a scholarship from the Ministry of Education, ROC Taiwan.
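    The optimal-feedback-control ingredient can be illustrated with a generic finite-horizon discrete-time LQR solved by backward Riccati recursion; the toy dynamics, cost weights, and the "cortical state" reading are illustrative assumptions, not the thesis's thalamo-cortical circuit model.

```python
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR via backward Riccati recursion:
    returns time-varying gains K_t minimizing the sum of x'Qx + u'Ru."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# Toy 2-D "cortical state" with a single control input (illustrative).
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # penalize distance from the target (optimal) state
R = np.array([[1.0]])    # penalize control effort
Ks = lqr_gains(A, B, Q, R, horizon=100)

x = np.array([1.0, 1.0])        # initial state
for K in Ks:
    x = (A - B @ K) @ x         # closed loop: steered toward the origin
```

    Under such a controller, only state directions that matter for the cost are actively driven, which is the intuition behind controlling consequential directions while leaving inconsequential ones alone.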

    Eye velocity gain fields for visuo-motor coordinate transformations

    ‘Gain-field-like’ tuning behavior is characterized by a modulation of the neuronal response depending on a certain variable, without changing the actual receptive field characteristics in relation to another variable. Eye position gain fields were first observed in area 7a of the posterior parietal cortex (PPC), where visually responsive neurons are modulated by ocular position. Analysis of artificial neural networks has shown that this type of tuning function might comprise the neuronal substrate for coordinate transformations. In this work, neuronal activity in the dorsal medial superior temporal area (MSTd) has been analyzed with a focus on its involvement in oculomotor control. MSTd is part of the extrastriate visual cortex and located in the PPC. Lesion studies suggested a participation of this cortical area in the control of eye movements. Inactivation of MSTd severely impairs the optokinetic response (OKR), a reflex-like eye movement that compensates for motion of the whole visual scene. Using a novel, information-theory-based approach to neuronal data analysis, we were able to identify those visual and eye-movement-related signals that were most correlated with the mean rate of spiking activity in MSTd neurons during optokinetic stimulation. In a majority of neurons, the firing rate was non-linearly related to a combination of retinal image velocity and eye velocity. The observed neuronal latency relative to these signals is in line with a system-level model of OKR, in which an efference copy of the motor command signal is used to generate an internal estimate of the head-centered stimulus velocity. Tuning functions were obtained using a probabilistic approach. In most MSTd neurons these functions exhibited gain-field-like shapes, with eye velocity modulating the visual response in a multiplicative manner. Population analysis revealed a large diversity of tuning forms, including asymmetric and non-separable functions.
The distribution of gain fields was almost identical to the predictions of a neural network model trained to perform the summation of image and eye velocity. These findings therefore strongly support the hypothesis that MSTd participates in the OKR control system by implementing the transformation from retinal image velocity to an estimate of stimulus velocity. In this sense, eye velocity gain fields constitute an intermediate step in transforming the eye-centered to a head-centered visual motion signal. Another aspect addressed in this work was the comparison of the irregularity of MSTd spiking activity during the optokinetic response with the behavior during pure visual stimulation. The goal of this study was an evaluation of potential neuronal mechanisms underlying the observed gain field behavior. We found that both inter- and intra-trial variability decreased with increasing retinal image velocity, but increased with eye velocity. This observation argues against a symmetrical integration of driving and modulating inputs. Instead, we propose an architecture in which multiplicative gain modulation is achieved by a simultaneous increase of excitatory and inhibitory background synaptic input. A conductance-based single-compartment model neuron was able to simultaneously reproduce realistic gain modulation and the observed stimulus dependence of neural variability. In summary, this work improves our knowledge of MSTd’s role in visuomotor transformation by analyzing functional and mechanistic aspects of eye velocity gain fields at the systems, network, and neuronal levels.
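    A minimal sketch of the multiplicative gain-field idea, assuming a Gaussian speed tuning and a linear eye-velocity gain; all parameter values and function names are illustrative, not fitted MSTd tuning functions.

```python
import numpy as np

def visual_response(image_vel, sigma=10.0):
    """Toy Gaussian tuning for retinal image velocity (deg/s)."""
    return np.exp(-0.5 * (image_vel / sigma) ** 2)

def eye_gain(eye_vel, slope=0.05):
    """Eye-velocity gain: scales the response without shifting the tuning."""
    return 1.0 + slope * eye_vel

def mstd_rate(image_vel, eye_vel, base=5.0, peak=40.0):
    """Gain-field model: eye velocity multiplicatively modulates visual tuning."""
    return base + peak * eye_gain(eye_vel) * visual_response(image_vel)

iv = np.linspace(-30.0, 30.0, 601)
r_still = mstd_rate(iv, eye_vel=0.0)
r_moving = mstd_rate(iv, eye_vel=10.0)
# The preferred image velocity is unchanged; only the amplitude scales,
# which is the defining signature of a gain field.
```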

    Persistent Homology in Sparse Regression and its Application to Brain Morphometry

    Sparse systems are usually parameterized by a tuning parameter that determines the sparsity of the system. How to choose the right tuning parameter is a fundamental and difficult problem in learning the sparse system. In this paper, by treating the tuning parameter as an additional dimension, persistent homological structures over the parameter space are introduced and explored. The structures are then exploited to speed up the computation using the proposed soft-thresholding technique. The topological structures are further used as multivariate features in tensor-based morphometry (TBM) to characterize white matter alterations in children who have experienced severe early life stress and maltreatment. These analyses reveal that stress-exposed children exhibit more diffuse anatomical organization across the whole white matter region. Comment: submitted to IEEE Transactions on Medical Imaging
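    The role of the tuning parameter can be illustrated in the orthonormal-design special case, where the sparse solution is plain soft-thresholding and each coefficient has a barcode-like lifetime over lambda; this is a sketch of the general idea, not the paper's persistent-homology algorithm.

```python
import numpy as np

def soft_threshold(b, lam):
    """Lasso solution under an orthonormal design: shrink and clip to zero."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

beta_ls = np.array([3.0, -1.5, 0.4])   # least-squares coefficients (illustrative)
lambdas = np.linspace(0.0, 3.5, 8)     # sparsity path over the tuning parameter
path = np.array([soft_threshold(beta_ls, lam) for lam in lambdas])

# "Death" of each coefficient: the first lambda at which it is shrunk to zero,
# giving a barcode-like persistence over the tuning-parameter dimension.
death = [lambdas[np.argmax(np.abs(path[:, j]) == 0)] for j in range(3)]
```

    Tracking when features are born and die across the whole path, rather than at one hand-picked lambda, is the persistence idea the paper builds on.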

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
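    The de-facto standard formulation mentioned above is maximum-a-posteriori estimation on a factor graph; in a toy 1-D world with Gaussian noise it reduces to linear least squares. The poses, odometry, and loop-closure measurements below are illustrative assumptions.

```python
import numpy as np

# Toy 1-D pose graph: three poses, two odometry edges, one loop closure.
# Each edge (i, j, z) contributes a residual (x_j - x_i - z).
edges = [(0, 1, 1.0), (1, 2, 1.1), (0, 2, 2.0)]   # loop closure disagrees slightly

n = 3
rows, rhs = [], []
for i, j, z in edges:
    row = np.zeros(n)
    row[i], row[j] = -1.0, 1.0
    rows.append(row)
    rhs.append(z)
# Anchor the first pose to remove the gauge freedom.
anchor = np.zeros(n)
anchor[0] = 1.0
rows.append(anchor)
rhs.append(0.0)

A = np.array(rows)
b = np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)   # MAP estimate under Gaussian noise
```

    Real SLAM back-ends solve the same kind of problem with nonlinear measurement models, robust losses, and sparse solvers, but the factor-graph structure is the same.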