
    A spike-based head-movement and echolocation model of the bat superior colliculus

    Echolocating bats use sonar to sense their environment and hunt for food in darkness. To understand this unusual sensory system from a computational perspective, with aspirations towards developing high-performance electronic implementations, we study the bat brain. The midbrain superior colliculus (SC) has been shown (in many species) to support multisensory integration and orientation behaviors, namely eye saccades and head turns. Previous computational models of the SC have emphasized behavior typical of monkeys, barn owls, and cats. Using unique neurobiological data for the bat and incorporating knowledge from other species, a computational spiking model has been developed that produces both head movement and sonar vocalization. The model accomplishes this with simple neuron equations and synapses, which is promising for implementation on a VLSI chip. This model can serve as a foundation for further development, using new data from bat experiments, and can easily be connected to spiking motor and vocalization systems.
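    The abstract does not spell out the "simple neuron equations" it refers to, but models of this kind are typically built from leaky integrate-and-fire units. The sketch below is a generic illustration of such a unit; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def lif_spike_train(input_current, dt=1e-3, tau=0.02,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron driven by input_current.

    Returns a binary spike train the same length as the input.
    """
    v = v_rest
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes[t] = 1
            v = v_reset            # hard reset after the spike
    return spikes

# A constant supra-threshold drive produces regular spiking
train = lif_spike_train(np.full(1000, 2.0))
```

    In a full SC model, units like this would be wired into topographic maps driving head-movement and vocalization outputs; the per-unit dynamics stay this simple, which is what makes VLSI implementation attractive.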

    The Eye of a Mathematical Physicist

    In this essay we search for neural correlates of ‘doing mathematical physics’. We introduce a toy model of a mathematical physicist: a brain connected to the outside world only by vision and saccadic eye movements, interacting with a computer screen. First, we describe the neuroanatomy of the visuo-saccadic system and Listing's law, which binds saccades to the optics of the eye. Then we explain space-time transformations in the superior colliculus, the performance of a canonical cortical circuit in the frontal eye field, and finally the recurrent interaction of both areas, which leads to a coherent percept of space in spite of saccades. This sets the stage in the brain for doing mathematical physics, which is analyzed in a simple example.
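    Listing's law, mentioned above, states that any eye orientation is reachable from the primary position by a single rotation about an axis lying in a head-fixed plane (Listing's plane) perpendicular to the primary gaze direction, so the torsional component of the rotation vector is zero. A minimal numerical illustration (the coordinate conventions and primary-position choice here are illustrative):

```python
import numpy as np

def listing_rotation_vector(gaze_dir, primary=(0.0, 0.0, 1.0)):
    """Rotation vector taking the primary gaze direction to gaze_dir.

    Under Listing's law the rotation axis is perpendicular to the
    primary direction, so the rotation vector has zero component
    along it (zero torsion).
    """
    d = np.asarray(gaze_dir, float)
    d = d / np.linalg.norm(d)
    p = np.asarray(primary, float)
    axis = np.cross(p, d)                 # lies in Listing's plane
    norm = np.linalg.norm(axis)
    if norm < 1e-12:                      # gaze already along primary
        return np.zeros(3)
    angle = np.arccos(np.clip(np.dot(p, d), -1.0, 1.0))
    # Half-angle (rotation-vector) parameterization of the rotation
    return np.tan(angle / 2.0) * axis / norm

r = listing_rotation_vector([0.3, -0.2, 0.93])
# The component of r along the primary direction (torsion) is zero
```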

    Contribution of the Primate Frontal Cortex to Eye Movements and Neuronal Activity in the Superior Colliculus

    Humans and non-human primates must precisely align the eyes on an object to view it with high visual acuity. An important role of the oculomotor system is to generate accurate eye movements, such as saccades, toward a target. Given that each eye has only six muscles that rotate the eye in three degrees of freedom, this relatively simple volitional movement has allowed researchers to thoroughly characterize the brain areas involved in its generation. In particular, the midbrain Superior Colliculus (SC) is recognized as having a primary role in the generation of visually-guided saccades via the integration of sensory and cognitive information. One important source of sensory and cognitive information to the SC is the Frontal Eye Fields (FEF). The role of the FEF and SC in visually-guided saccades has been well studied using anatomical and functional techniques, but only a handful of studies have investigated how these areas work together to produce saccades. While it is assumed that the FEF exerts its influence on saccade generation through the SC, it remains unknown what happens in the SC when the FEF is suddenly inactivated. To address this question, I used the combined approach of FEF cryogenic inactivation and SC neuronal recordings, which also provides a valuable opportunity to understand how FEF inputs to the SC govern saccade preparation. It was first necessary to characterize the eye movement deficits following FEF inactivation, as it was unknown how a large and reversible FEF inactivation would influence saccade behaviour, or whether cortical areas influence fixational eye movements (e.g. microsaccades). Four major results emerged from this thesis. First, FEF inactivation delayed saccade reaction times (SRT) in both directions. Second, FEF inactivation impaired microsaccade generation and also selectively reduced microsaccades following peripheral cues.
Third, FEF inactivation decreased visual, cognitive, and saccade-related activity in the ipsilesional SC. Fourth, the delayed onset of saccade-related SC activity best explained SRT increases during FEF inactivation, implicating one mechanism for how FEF inputs govern saccade preparation. Together, these results provide new insights into the FEF's role in saccade and microsaccade behaviour, and how the oculomotor system commits to a saccade.

    How Laminar Frontal Cortex and Basal Ganglia Circuits Interact to Control Planned and Reactive Saccades

    The basal ganglia and frontal cortex together allow animals to learn adaptive responses that acquire rewards when prepotent reflexive responses are insufficient. Anatomical studies show a rich pattern of interactions between the basal ganglia and distinct frontal cortical layers. Analysis of the laminar circuitry of the frontal cortex, together with its interactions with the basal ganglia, motor thalamus, superior colliculus, and inferotemporal and parietal cortices, provides new insight into how these brain regions interact to learn and perform complexly conditioned behaviors. A neural model whose cortical component represents the frontal eye fields captures these interacting circuits. Simulations of the neural model illustrate how it provides a functional explanation of the dynamics of 17 physiologically identified cell types found in these areas. The model predicts how action planning or priming (in cortical layers III and VI) is dissociated from execution (in layer V), how a cue may serve either as a movement target or as a discriminative cue to move elsewhere, and how the basal ganglia help choose among competing actions. The model simulates neurophysiological, anatomical, and behavioral data about how monkeys perform saccadic eye movement tasks, including fixation; single saccade, overlap, gap, and memory-guided saccades; anti-saccades; and parallel search among distractors.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-l-0409, N00014-92-J-1309, N00014-95-1-0657); National Science Foundation (IRI-97-20333)
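    The planning/execution dissociation described above can be caricatured in a few lines: plans compete in a planning layer, and a basal-ganglia "gate" determines whether the winning plan is released for execution. The function below is an illustrative sketch of that gating logic only, not the published laminar model; all names and thresholds are made up:

```python
def gated_execution(plan_activity, gate_open, execution_threshold=0.5):
    """Release a planned action only when the basal-ganglia gate opens.

    plan_activity maps an action name to its planning-layer activity
    (layers III/VI in the model's terms). When the gate is open, the
    most active plan above threshold is executed (layer V's role);
    otherwise nothing is released and plans merely compete.
    """
    if not gate_open:
        return None
    best = max(plan_activity, key=plan_activity.get)
    return best if plan_activity[best] >= execution_threshold else None

# Gate closed: no action is released, however strong the plan
gated_execution({"saccade_left": 0.7, "saccade_right": 0.6}, gate_open=False)
# Gate open: the strongest supra-threshold plan wins the competition
gated_execution({"saccade_left": 0.7, "saccade_right": 0.6}, gate_open=True)
```

    Separating the competition (planning) from the release decision (gating) is what lets the same cue act either as a movement target or as a discriminative cue for a different action.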

    The computational neurology of active vision

    In this thesis, we appeal to recent developments in theoretical neurobiology – namely, active inference – to understand the active visual system and its disorders. Chapter 1 reviews the neurobiology of active vision. This introduces some of the key conceptual themes around attention and inference that recur through subsequent chapters. Chapter 2 provides a technical overview of active inference and its interpretation in terms of message passing between populations of neurons. Chapter 3 applies the material in Chapter 2 to provide a computational characterisation of the oculomotor system. This deals with two key challenges in active vision: deciding where to look, and working out how to look there. The homology between this message passing and the brain networks solving these inference problems provides a basis for in silico lesion experiments, and an account of the aberrant neural computations that give rise to clinical oculomotor signs (including internuclear ophthalmoplegia). Chapter 4 picks up on the role of uncertainty resolution in deciding where to look, and examines the role of beliefs about the quality (or precision) of data in perceptual inference. We illustrate how abnormal prior beliefs influence inferences about uncertainty and give rise to neuromodulatory changes and visual hallucinatory phenomena (of the sort associated with synucleinopathies). We then demonstrate how synthetic pharmacological perturbations of these neuromodulatory systems give rise to the oculomotor changes associated with drugs acting upon them. Chapter 5 develops a model of visual neglect, using an oculomotor version of a line cancellation task. We then test a prediction of this model using magnetoencephalography and dynamic causal modelling. Chapter 6 concludes by situating the work in this thesis in the context of computational neurology.
This illustrates how the variational principles used here to characterise the active visual system may be generalised to other sensorimotor systems and their disorders.
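    The notion of precision at work in Chapter 4 can be illustrated with the textbook Gaussian case: a posterior belief is a precision-weighted average of the prior and the data, so beliefs about sensory precision directly control how much the data move perception. This is a generic illustration, not the thesis's generative model:

```python
def precision_weighted_update(prior_mean, prior_precision,
                              observation, sensory_precision):
    """Combine a Gaussian prior with a Gaussian likelihood.

    The posterior mean is a precision-weighted average: when sensory
    precision is (believed to be) low, the observation is discounted
    and the posterior stays near the prior -- one way aberrant
    precision beliefs can distort perceptual inference.
    """
    post_precision = prior_precision + sensory_precision
    post_mean = (prior_precision * prior_mean
                 + sensory_precision * observation) / post_precision
    return post_mean, post_precision

# High believed sensory precision: the posterior tracks the data
m_hi, _ = precision_weighted_update(0.0, 1.0, 2.0, 9.0)   # -> 1.8
# Low (aberrant) believed sensory precision: it clings to the prior
m_lo, _ = precision_weighted_update(0.0, 1.0, 2.0, 0.25)  # -> 0.4
```

    In active inference, neuromodulators are read as encoding such precision estimates, which is why pharmacological perturbations of those systems are predicted to shift both perception and oculomotor behaviour.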

    Computational Study of Multisensory Gaze-Shift Planning

    In response to the appearance of multimodal events in the environment, we often make a gaze-shift in order to focus attention and gather more information. Planning such a gaze-shift involves three stages: 1) determining the spatial location for the gaze-shift, 2) determining when to initiate the gaze-shift, and 3) working out a coordinated eye-head motion to execute it. A large number of experimental investigations have examined the nature of multisensory and oculomotor information processing at each of these three levels separately. In this thesis, we approach this problem as a single executive program and propose computational models for all three stages in a unified framework. The first, spatial problem is viewed as inferring the cause of cross-modal stimuli: whether or not they originate from a common source (chapter 2). We propose an evidence-accumulation decision-making framework, and introduce a spatiotemporal similarity measure as the criterion for choosing whether to integrate the multimodal information. The variability in reports of sameness observed in experiments is replicated as a function of the spatial and temporal patterns of target presentations. To solve the second, temporal problem, a model is built upon the first decision-making structure (chapter 3). We introduce an accumulative measure of confidence in the chosen causal structure as the criterion for initiation of action. We propose that the gaze-shift is implemented when this confidence measure reaches a threshold. The experimentally observed variability of reaction time is simulated as a function of the spatiotemporal and reliability features of the cross-modal stimuli. The third, motor problem is considered to be solved downstream of the first two networks (chapter 4). We propose a kinematic strategy that coordinates eye-in-head and head-on-shoulder movements, in both spatial and temporal dimensions, in order to shift the line of sight towards the inferred position of the goal.
The variability in the contributions of eye and head movements to the gaze-shift is modeled as a function of the retinal error and the initial orientations of the eyes and head. The three models should be viewed as parts of a single executive program that integrates perceptual and motor processing across time and space.
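    The evidence-accumulation idea shared by the first two stages can be sketched as a standard drift-diffusion accumulator: noisy evidence integrates toward a bound, and both the choice (integrate vs. segregate) and the reaction time fall out of the bound crossing. This is a minimal generic sketch, not the thesis's actual model; the mapping from spatiotemporal disparity to drift is an assumption for illustration:

```python
import random

def accumulate_to_bound(drift, threshold=1.0, noise_sd=0.1,
                        dt=0.01, max_steps=10_000, rng=None):
    """Accumulate noisy evidence until a symmetric bound is reached.

    Returns (choice, reaction_time): choice is +1 or -1 for the bound
    crossed, and reaction_time is steps * dt. In the thesis's setting,
    high spatiotemporal similarity would map to a positive drift
    (common source) and large disparity to a negative one.
    """
    rng = rng or random.Random(0)
    x = 0.0
    for step in range(1, max_steps + 1):
        # Deterministic drift plus diffusion noise scaled by sqrt(dt)
        x += drift * dt + rng.gauss(0.0, noise_sd) * dt ** 0.5
        if abs(x) >= threshold:
            return (1 if x > 0 else -1), step * dt
    return 0, max_steps * dt   # no decision within the deadline

choice, rt = accumulate_to_bound(drift=0.8)
```

    The noise term is what reproduces trial-to-trial variability in both the sameness report and the reaction time from a single mechanism.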

    Spatial Transformations in Frontal Cortex During Memory-Guided Head-Unrestrained Gaze Shifts

    We constantly orient our line of sight (i.e., gaze) to external objects in our environment. One of the central questions in sensorimotor neuroscience concerns how visual input (registered on the retina) is transformed into appropriate signals that drive gaze shifts, comprising coordinated movements of the eyes and the head. In this dissertation I investigated the function of a node in the frontal cortex, known as the frontal eye field (FEF), by examining the spatial transformations that occur within this structure. The FEF is implicated as a key node in gaze control and is part of the working memory network. I recorded the activity of single FEF neurons in head-unrestrained monkeys as they performed a simple memory-guided gaze task which required gaze shifts, delayed by a few hundred milliseconds, towards remembered visual stimuli. By fitting spatial models to neuronal response fields, I identified the spatial code embedded in neuronal activity related to vision (visual response), memory (delay response), and gaze shifts (movement response). First (Chapter 2), spatial transformations that occur within the FEF were identified by comparing the spatial codes of the visual and movement responses. I showed eye-centered dominance in both neuronal responses (excluding head- and space-centered coding); however, whereas the visual response encoded target position, the movement response encoded the position of the imminent gaze shift (and not its independent eye and head components), and this was observed even within single neurons. In Chapter 3, I characterized the time course of this target-to-gaze transition by identifying the spatial code during the intervening delay period.
The results highlighted two major transitions within the FEF: a gradual transition across the visual-delay-movement span of delay-responsive neurons, followed by a discrete transition between delay-responsive neurons and pre-saccadic neurons that fire exclusively around the time of the gaze movement. These results show that the FEF is involved in memory-based transformations for gaze control; rather than encoding specific movement parameters (eye and head), it encodes the desired gaze endpoint. The representations of the movement goal are subject to noise, and this noise accumulates at different stages through different mechanisms.
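    The logic of fitting spatial models to response fields (target-centered vs. gaze-endpoint codes) amounts to asking which spatial variable best predicts firing. The sketch below uses a simple linear fit scored by residuals; the real analysis uses nonparametric response-field fits, and all names and numbers here are illustrative:

```python
import numpy as np

def best_spatial_model(firing_rates, candidates):
    """Pick the candidate spatial variable that best predicts firing.

    candidates maps a model name (e.g. 'target', 'gaze') to a predictor
    array; each model is scored by the residual sum of squares of a
    linear fit, and the lowest-residual model wins.
    """
    scores = {}
    for name, x in candidates.items():
        a, b = np.polyfit(x, firing_rates, 1)   # fit rate = a*x + b
        scores[name] = float(np.sum((firing_rates - (a * x + b)) ** 2))
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
target = rng.uniform(-20, 20, 100)              # remembered target position
gaze = target + rng.normal(0, 3, 100)           # gaze lands near the target
rates = 5.0 + 0.4 * gaze                        # neuron encodes gaze endpoint
best, _ = best_spatial_model(rates, {"target": target, "gaze": gaze})
```

    Because gaze endpoints scatter around the target, the two predictors decorrelate just enough for the fit to distinguish a target code from a gaze code, which is the core of the dissertation's analysis strategy.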