77 research outputs found
A spike-based head-movement and echolocation model of the bat superior colliculus
Echolocating bats use sonar to sense their environment and hunt for food in darkness. To understand this unusual sensory system from a computational perspective, with an eye towards high-performance electronic implementations, we study the bat brain. The midbrain superior colliculus (SC) has been shown, in many species, to support multisensory integration and orientation behaviors, namely eye saccades and head turns. Previous computational models of the SC have emphasized behavior typical of monkeys, barn owls, and cats. Using unique neurobiological data for the bat and incorporating knowledge from other species, we developed a computational spiking model that produces both head movements and sonar vocalizations. The model accomplishes this with simple neuron equations and synapses, which is promising for implementation on a VLSI chip. It can serve as a foundation for further development using new data from bat experiments, and can easily be connected to spiking motor and vocalization systems.
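The "simple neuron equations" this abstract refers to are typically of the leaky integrate-and-fire (LIF) family. As a minimal sketch of that class of model (the abstract does not specify the actual equations, so all names and constants below are illustrative assumptions, not the published model):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a hedged illustration of the
# kind of "simple neuron equation" used in spiking SC models. Parameters are
# illustrative, not taken from the bat model described above.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns the membrane trace and spike times (ms)."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset the membrane after spiking
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold drive produces regular spiking.
trace, spikes = simulate_lif(np.full(200, 1.5))
```

Simplicity of this form (one state variable, threshold-and-reset) is what makes such models attractive for VLSI implementation.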
Genetic dissection of circuits underlying the modular structure of the Superior Colliculus
In order to successfully interact with the environment, animals need to produce accurate movements towards specific positions in space. A crucial region of the brain that guides such goal-oriented movements is the superior colliculus (SC), an evolutionarily conserved structure of the midbrain. While several lines of research in different model organisms have confirmed that the SC contributes to the initiation of orienting movements, how functionally distinct neuronal groups within the SC are organized to support the production of such motor outputs remains poorly understood.
One of the reasons the intrinsic circuit organization of the SC remains elusive is the lack of genetic characterization of the neuronal populations of the motor SC. Here, we performed RNAseq to screen for genetic markers of neuronal subpopulations in the motor SC. We identified a transcription factor, Pitx2, which is exclusively expressed in a subpopulation of glutamatergic neurons in the motor domain of the SC. Strikingly, this population of neurons displays a non-homogeneous distribution within the motor layer of the SC, being organised in clusters along the mediolateral and anteroposterior axes. We mapped the pre-synaptic network and the post-synaptic targets of Pitx2ON neurons, revealing that this modular population receives direct inputs from motor and sensory cortical regions, as well as from several midbrain nuclei involved in movement control, and sends projections along the cephalomotor pathway. We then asked whether these modules may act as functional units, each integrating multimodal sensory information and encoding a specific feature of head movement, the main ethologically relevant orienting behaviour in rodents. Optogenetic activation of this modular population in freely moving animals produced a stereotyped, robust head motion characterised by a pronounced quantal nature; furthermore, the amplitude of the elicited head movement varied depending on which modular unit was activated. Our results suggest that distinct clusters of genetically defined neurons produce head displacement along a characteristic vector.
In conclusion, we found that a population of premotor neurons in the SC is organised in a modular conformation, and we suggest that such modularity may represent a physical implementation of a discontinuous motor map for orienting movements encoded in the mouse SC. Our work complements previous observations of periodicity in SC circuitry, as well as in its afferent and efferent systems. Exploiting the genetic toolkit available in the mouse, our work begins to address the functional relevance of this modularity and paves the way for future experiments investigating principles of sensorimotor integration in SC circuits.
On the Role of Sensory Cancellation and Corollary Discharge in Neural Coding and Behavior
Studies of cerebellum-like circuits in fish have demonstrated that synaptic plasticity shapes the motor corollary discharge responses of granule cells into highly specific predictions of self-generated sensory input. However, the functional significance of such predictions, known as negative images, has not been directly tested. Here we provide evidence for improvements in neural coding and behavioral detection of prey-like stimuli due to negative images. In addition, we find that manipulating synaptic plasticity leads to specific changes in circuit output that disrupt neural coding and detection of prey-like stimuli. These results link synaptic plasticity, neural coding, and behavior, and provide a circuit-level account of how combining external sensory input with internally generated predictions enhances sensory processing. In addition, the mammalian dorsal cochlear nucleus (DCN) integrates auditory nerve input with a diverse array of sensory and motor signals processed within circuitry similar to that of the cerebellum. Yet how the DCN contributes to early auditory processing has been a longstanding puzzle. Using electrophysiological recordings in mice during licking behavior, we show that DCN neurons are largely unaffected by self-generated sounds while remaining sensitive to external acoustic stimuli. Recordings in deafened mice, together with neural activity manipulations, indicate that self-generated sounds are cancelled by non-auditory signals conveyed by mossy fibers. In addition, DCN neurons exhibit gradual reductions in their responses to acoustic stimuli that are temporally correlated with licking. Together, these findings suggest that the DCN may act as an adaptive filter for cancelling self-generated sounds. Adaptive filtering has been established previously for cerebellum-like sensory structures in fish, suggesting a conserved function for such structures across vertebrates.
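The "negative image" idea can be captured by a standard adaptive-filter sketch: a copy of the motor command (corollary discharge) drives a learned filter whose output is subtracted from the sensory stream, leaving external stimuli behind. The LMS rule, signals, and parameters below are generic illustrative assumptions, not the circuits or data reported in this work:

```python
# Sketch of adaptive cancellation of a self-generated signal ("negative image").
# The motor corollary discharge is the reference input; an LMS rule learns to
# predict and subtract the self-generated sensory component, sparing an
# external "prey-like" stimulus. All signals and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_taps, lr = 5000, 8, 0.01

motor = rng.standard_normal(n_steps)                  # corollary discharge
kernel = np.array([0.0, 0.5, 0.3, 0.1])               # unknown motor-to-sensor path
self_generated = np.convolve(motor, kernel)[:n_steps]
external = 0.2 * np.sin(0.05 * np.arange(n_steps))    # external stimulus to preserve
sensory = self_generated + external

w = np.zeros(n_taps)                                  # adaptive weights (negative image)
out = np.zeros(n_steps)
for t in range(n_taps, n_steps):
    ref = motor[t - n_taps:t][::-1]                   # recent motor history
    prediction = w @ ref                              # predicted self-generated input
    out[t] = sensory[t] - prediction                  # cancelled output
    w += lr * out[t] * ref                            # LMS weight update
```

After learning, the residual `out` tracks the external stimulus far better than the raw sensory signal does, which is the sense in which negative images improve detection of external events.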
The Eye of a Mathematical Physicist
In this essay we search for neural correlates of ‘doing mathematical physics’. We introduce a toy model of a mathematical physicist: a brain connected with the outside world only by vision and saccadic eye movements, interacting with a computer screen. First, we describe the neuroanatomy of the visuo-saccadic system and Listing's law, which links saccades to the optics of the eye. Then we explain space-time transformations in the superior colliculus, the performance of a canonical cortical circuit in the frontal eye field, and finally the recurrent interaction of both areas, which leads to a coherent percept of space in spite of saccades. This sets the stage in the brain for doing mathematical physics, which is analyzed in a simple example.
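Listing's law, mentioned above, has a compact standard formulation in rotation-vector form (the notation below is the textbook convention, not necessarily the essay's own): with the primary gaze direction along the x-axis, every physiological eye orientation is reached from primary position by a rotation about an axis lying in Listing's plane, so the torsional component of the rotation vector vanishes.

```latex
% Listing's law in rotation-vector form (standard formulation; notation
% assumed, not taken from the essay). With primary gaze along x, every
% physiological eye orientation is a rotation about an axis n in
% Listing's plane:
\mathbf{r} \;=\; \tan\!\left(\frac{\theta}{2}\right)\mathbf{n},
\qquad \mathbf{n} \perp \hat{\mathbf{x}}
\;\Longleftrightarrow\; r_x = 0 .
```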
Contribution of the Primate Frontal Cortex to Eye Movements and Neuronal Activity in the Superior Colliculus
Humans and non-human primates must precisely align the eyes on an object to view it with high visual acuity. An important role of the oculomotor system is to generate accurate eye movements, such as saccades, toward a target. Given that each eye has only six muscles that rotate it in three degrees of freedom, this relatively simple volitional movement has allowed researchers to characterize in detail the brain areas involved in its generation. In particular, the midbrain Superior Colliculus (SC) is recognized as having a primary role in the generation of visually-guided saccades via the integration of sensory and cognitive information.
One important source of sensory and cognitive information to the SC is the Frontal Eye Fields (FEF). The roles of the FEF and SC in visually-guided saccades have been well studied using anatomical and functional techniques, but only a handful of studies have investigated how these areas work together to produce saccades. While it is assumed that the FEF exerts its influence on saccade generation through the SC, it remains unknown what happens in the SC when the FEF is suddenly inactivated. To address this question, I used the combined approach of FEF cryogenic inactivation and SC neuronal recordings, which also provides a valuable opportunity to understand how FEF inputs to the SC govern saccade preparation. First, however, it was necessary to characterize the eye movement deficits following FEF inactivation, as it was unknown how a large and reversible FEF inactivation would influence saccade behaviour, or whether cortical areas influence fixational eye movements (e.g. microsaccades).
Four major results emerged from this thesis. First, FEF inactivation delayed saccade reaction times (SRT) in both directions. Second, FEF inactivation impaired microsaccade generation and also selectively reduced microsaccades following peripheral cues. Third, FEF inactivation decreased visual, cognitive, and saccade-related activity in the ipsilesional SC. Fourth, the delayed onset of saccade-related SC activity best explained SRT increases during FEF inactivation, implicating one mechanism for how FEF inputs govern saccade preparation. Together, these results provide new insights into the FEF's role in saccade and microsaccade behaviour, and into how the oculomotor system commits to a saccade.
How Laminar Frontal Cortex and Basal Ganglia Circuits Interact to Control Planned and Reactive Saccades
The basal ganglia and frontal cortex together allow animals to learn adaptive responses that acquire rewards when prepotent reflexive responses are insufficient. Anatomical studies show a rich pattern of interactions between the basal ganglia and distinct frontal cortical layers. Analysis of the laminar circuitry of the frontal cortex, together with its interactions with the basal ganglia, motor thalamus, superior colliculus, and inferotemporal and parietal cortices, provides new insight into how these brain regions interact to learn and perform complexly conditioned behaviors. A neural model whose cortical component represents the frontal eye fields captures these interacting circuits. Simulations of the neural model illustrate how it provides a functional explanation of the dynamics of 17 physiologically identified cell types found in these areas. The model predicts how action planning or priming (in cortical layers III and VI) is dissociated from execution (in layer V), how a cue may serve either as a movement target or as a discriminative cue to move elsewhere, and how the basal ganglia help choose among competing actions. The model simulates neurophysiological, anatomical, and behavioral data about how monkeys perform saccadic eye movement tasks, including fixation; single saccade, overlap, gap, and memory-guided saccades; anti-saccades; and parallel search among distractors.
Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-l-0409, N00014-92-J-1309, N00014-95-1-0657); National Science Foundation (IRI-97-20333)
The computational neurology of active vision
In this thesis, we appeal to recent developments in theoretical neurobiology – namely, active inference – to understand the active visual system and its disorders. Chapter 1 reviews the neurobiology of active vision. This introduces some of the key conceptual themes around attention and inference that recur through subsequent chapters. Chapter 2 provides a technical overview of active inference, and its interpretation in terms of message passing between populations of neurons. Chapter 3 applies the material in Chapter 2 to provide a computational characterisation of the oculomotor system. This deals with two key challenges in active vision: deciding where to look, and working out how to look there. The homology between this message passing and the brain networks solving these inference problems provides a basis for in silico lesion experiments, and an account of the aberrant neural computations that give rise to clinical oculomotor signs (including internuclear ophthalmoplegia). Chapter 4 picks up on the role of uncertainty resolution in deciding where to look, and examines the role of beliefs about the quality (or precision) of data in perceptual inference. We illustrate how abnormal prior beliefs influence inferences about uncertainty and give rise to neuromodulatory changes and visual hallucinatory phenomena (of the sort associated with synucleinopathies). We then demonstrate how synthetic pharmacological perturbations that alter these neuromodulatory systems give rise to the oculomotor changes associated with drugs acting upon these systems. Chapter 5 develops a model of visual neglect, using an oculomotor version of a line cancellation task. We then test a prediction of this model using magnetoencephalography and dynamic causal modelling. Chapter 6 concludes by situating the work in this thesis in the context of computational neurology.
This illustrates how the variational principles used here to characterise the active visual system may be generalised to other sensorimotor systems and their disorders.
Computational Study of Multisensory Gaze-Shift Planning
In response to the appearance of multimodal events in the environment, we often make a gaze-shift in order to focus attention and gather more information. Planning such a gaze-shift involves three stages: 1) determining the spatial location of the gaze-shift, 2) deciding when to initiate the gaze-shift, and 3) working out a coordinated eye-head motion to execute it. A large number of experimental investigations have examined the nature of multisensory and oculomotor information processing at each of these three levels separately. In this thesis, we approach the problem as a single executive program and propose computational models for all three stages in a unified framework.
The first, spatial problem is viewed as inferring the cause of cross-modal stimuli: whether or not they originate from a common source (chapter 2). We propose an evidence-accumulation decision-making framework and introduce a spatiotemporal similarity measure as the criterion for choosing whether to integrate the multimodal information. The variability in reports of sameness observed in experiments is replicated as a function of the spatial and temporal patterns of target presentations. To solve the second, temporal problem, a model is built upon the first decision-making structure (chapter 3). We introduce an accumulative measure of confidence in the chosen causal structure as the criterion for initiating action, and propose that the gaze-shift is implemented when this confidence measure reaches a threshold. The experimentally observed variability in reaction time is simulated as a function of the spatiotemporal and reliability features of the cross-modal stimuli. The third, motor problem is considered to be solved downstream of the first two networks (chapter 4). We propose a kinematic strategy that coordinates eye-in-head and head-on-shoulder movements, in both spatial and temporal dimensions, in order to shift the line of sight towards the inferred position of the goal. The variability in the contributions of eye and head movements to the gaze-shift is modeled as a function of the retinal error and the initial orientations of the eyes and head. The three models should be viewed as parts of a single executive program that integrates perceptual and motor processing across time and space.
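The first two stages described above can be sketched as a single loop: noisy evidence about a spatiotemporal similarity measure accumulates toward a common-source decision, while an accumulated confidence measure triggers the gaze-shift once it crosses a threshold. The similarity measure, noise level, and thresholds below are illustrative assumptions, not the thesis's actual model or parameters:

```python
# Sketch of the two decision stages: evidence accumulation decides whether two
# cues share a common source; accumulated confidence in that causal structure
# triggers the gaze-shift. All quantities are illustrative assumptions.
import numpy as np

def gaze_shift_decision(spatial_gap, temporal_gap, rng,
                        sigma=1.0, conf_threshold=20.0, max_steps=2000):
    """Return (common_source_decision, reaction_time_in_steps)."""
    # Momentary evidence favours a common source when the cues are close
    # in space and time; noise makes both the decision and the RT variable.
    drift = 1.0 - spatial_gap - temporal_gap
    evidence, confidence = 0.0, 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + sigma * rng.standard_normal()
        confidence += abs(evidence)       # confidence in the current causal belief
        if confidence >= conf_threshold:  # initiate the gaze-shift
            return evidence > 0, step
    return evidence > 0, max_steps

rng = np.random.default_rng(1)
# Nearby, near-simultaneous cues: mostly judged "same source", with fast responses.
decisions = [gaze_shift_decision(0.1, 0.1, rng) for _ in range(200)]
```

In this sketch, ambiguous cue pairings (larger gaps) produce weaker drift, so confidence grows more slowly and simulated reaction times lengthen, mirroring the qualitative dependence of RT on spatiotemporal stimulus features.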
Spatial Transformations in Frontal Cortex During Memory-Guided Head-Unrestrained Gaze Shifts
We constantly orient our line of sight (i.e., gaze) toward external objects in our environment. One of the central questions in sensorimotor neuroscience concerns how visual input (registered on the retina) is transformed into appropriate signals that drive gaze shifts, comprised of coordinated movements of the eyes and the head. In this dissertation I investigated the function of a node in the frontal cortex known as the frontal eye field (FEF), by examining the spatial transformations that occur within this structure. The FEF is implicated as a key node in gaze control and as part of the working memory network. I recorded the activity of single FEF neurons in head-unrestrained monkeys as they performed a simple memory-guided gaze task that required delayed gaze shifts (by a few hundred milliseconds) towards remembered visual stimuli. By utilizing an analysis method that fits spatial models to neuronal response fields, I identified the spatial code embedded in neuronal activity related to vision (visual response), memory (delay response), and gaze shift (movement response). First (Chapter 2), spatial transformations that occur within the FEF were identified by comparing spatial codes in visual and movement responses. I showed eye-centered dominance in both neuronal responses (and excluded head- and space-centered coding); however, whereas the visual response encoded target position, the movement response encoded the position of the imminent gaze shift (and not its independent eye and head components), and this was observed even within single neurons. In Chapter 3, I characterized the time course of this target-to-gaze transition by identifying the spatial code during the intervening delay period.
The results from this study highlighted two major transitions within the FEF: a gradual transition during the visual-delay-movement extent of delay-responsive neurons, followed by a discrete transition between delay-responsive neurons and pre-saccadic neurons that fire exclusively around the time of the gaze movement. These results show that the FEF is involved in memory-based transformations for gaze control; rather than encoding specific movement parameters (eye and head), it encodes the desired gaze endpoint. The representations of the movement goal are subject to noise, and this noise accumulates at different stages related to different mechanisms.
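The response-field model-comparison logic used in this kind of analysis can be sketched simply: the same neural responses are fit against candidate spatial models (here, target position vs. final gaze position), and the model with the smaller residual error is taken as the embedded spatial code. The simulated data, Gaussian response-field form, and grid fit below are illustrative assumptions, not the dissertation's actual methods or parameters:

```python
# Sketch of response-field model comparison: fit the same firing rates against
# two candidate spatial codes and keep the better-fitting one. Simulated data
# and the Gaussian response-field form are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def gaussian_rf(pos, center, width=10.0, gain=50.0):
    """Gaussian response field evaluated at 2-D positions (n x 2 array)."""
    return gain * np.exp(-np.sum((pos - center) ** 2, axis=1) / (2 * width ** 2))

n_trials = 300
target = rng.uniform(-30, 30, size=(n_trials, 2))          # target in eye coordinates
gaze_end = target + rng.normal(0, 6.0, size=(n_trials, 2)) # gaze endpoints (with error)

# Simulate a "movement" neuron whose firing follows the gaze endpoint.
rate = gaussian_rf(gaze_end, center=np.array([10.0, 5.0]))
rate += rng.normal(0, 2.0, size=n_trials)                  # measurement noise

def fit_error(positions, rate):
    """Residual error of the best-fitting RF center on a coarse grid."""
    centers = [(x, y) for x in range(-30, 31, 5) for y in range(-30, 31, 5)]
    return min(np.mean((rate - gaussian_rf(positions, np.array(c))) ** 2)
               for c in centers)

err_target = fit_error(target, rate)   # target-position model
err_gaze = fit_error(gaze_end, rate)   # gaze-endpoint model
best_model = "gaze" if err_gaze < err_target else "target"
```

Because the simulated neuron's firing is tied to the gaze endpoint, the gaze model fits better; with real neurons, the same comparison distinguishes target coding (visual responses) from gaze coding (movement responses).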