
    System Level Assessment of Motor Control through Patterned Microstimulation in the Superior Colliculus

    We are immersed in an environment full of sensory information, and without much thought or effort we produce orienting responses to react appropriately to different stimuli. This seemingly simple, reflexive behavior is accomplished by a complicated set of neural operations in which motor systems in the brain must control behavior based on populations of sensory information. The oculomotor, or saccadic, system is particularly well studied in this regard. Within a visual environment containing many potential stimuli, we control our gaze with rapid eye movements, or saccades, in order to foveate visual targets of interest. A key subcortical structure involved in this process is the superior colliculus (SC), a midbrain structure that receives visual input and in turn projects to lower-level areas in the brainstem that produce saccades. Notably, microstimulation of the SC produces eye movements that match the metrics and kinematics of naturally evoked saccades. We therefore explore the role of the SC in saccadic motor control by artificially introducing distributions of activity through neural stimulation. Systematic manipulations of microstimulation patterns were used to characterize how ensemble activity in the SC is decoded to generate eye movements. Specifically, we focus on three facets of saccadic motor control. In the first study, we examine the influence of microstimulation parameters on behavior to reveal characteristics of the neural mechanisms underlying saccade generation. In the second study, we experimentally verify the predictions of computational algorithms used to describe neural mechanisms for saccade generation. And in the third study, we assess where decoding occurs within the oculomotor network in order to establish the order of operations necessary for saccade generation. Together, these experiments assess different aspects of saccadic motor control and reveal properties and mechanisms that contribute to a comprehensive understanding of signal processing in the oculomotor system.

    A parsimonious computational model of visual target position encoding in the superior colliculus

    The superior colliculus (SC) is a brainstem structure at the crossroads of multiple functional pathways. Several neurophysiological studies suggest that the population of active neurons in the SC encodes the location of a visual target to foveate, pursue, or attend to. Although extensive research has been carried out on computational modeling, most of the reported models are based on complex mechanisms and explain a limited number of experimental results. This suggests that a key aspect may have been overlooked in the design of previous computational models. After a careful study of the literature, we hypothesized that the representation of the whole retinal stimulus (not only its center) might play an important role in the dynamics of SC activity. To test this hypothesis, we designed a model of the SC built upon three well-accepted principles: the log-polar representation of the visual field onto the SC, the interplay between center excitation and surround inhibition, and simple neuronal dynamics such as those proposed by dynamic neural field theory. Results show that the retinotopic organization of collicular activity conveys an implicit computation that deeply impacts the target selection process.
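The log-polar mapping from the visual field onto the SC motor map mentioned in this abstract can be sketched as follows. The formulation is the commonly used one for the monkey SC (eccentricity and direction mapped to millimetres of collicular surface); the parameter values below are illustrative assumptions, not the paper's fitted values.

```python
import math

# Illustrative parameters for the log-polar retina-to-SC mapping
# (assumed values, in the style commonly cited for the monkey SC).
A = 3.0    # deg: foveal magnification parameter
BU = 1.4   # mm: scale along the rostro-caudal (eccentricity) axis
BV = 1.8   # mm/rad: scale along the medio-lateral (direction) axis

def retina_to_sc(R, phi_deg):
    """Map a target at eccentricity R (deg) and direction phi (deg)
    to (u, v) coordinates (mm) on the SC motor map."""
    phi = math.radians(phi_deg)
    u = BU * math.log(math.sqrt(R**2 + 2 * A * R * math.cos(phi) + A**2) / A)
    v = BV * math.atan2(R * math.sin(phi), R * math.cos(phi) + A)
    return u, v

# The logarithm compresses eccentric space: a tenfold increase in
# eccentricity moves the active site only a couple of millimetres.
print(retina_to_sc(2.0, 0.0))
print(retina_to_sc(20.0, 0.0))
```

Because of this compression, a fixed-size population of active neurons on the map represents a small movement near the fovea but a large one in the periphery.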

    Computational Study of Multisensory Gaze-Shift Planning

    In response to the appearance of multimodal events in the environment, we often make a gaze shift in order to focus attention and gather more information. Planning such a gaze shift involves three stages: 1) determining the spatial location for the gaze shift, 2) determining when to initiate the gaze shift, and 3) working out a coordinated eye-head motion to execute the gaze shift. A large number of experimental investigations have examined the nature of multisensory and oculomotor information processing at each of these three stages separately. In this thesis, we approach the problem as a single executive program and propose computational models for all three stages in a unified framework. The first, spatial problem is viewed as inferring the cause of cross-modal stimuli: whether or not they originate from a common source (chapter 2). We propose an evidence-accumulation decision-making framework and introduce a spatiotemporal similarity measure as the criterion for choosing whether to integrate the multimodal information. The variability of reports of sameness observed in experiments is replicated as a function of the spatial and temporal patterns of target presentations. To solve the second, temporal problem, a model is built upon the first decision-making structure (chapter 3). We introduce an accumulative measure of confidence in the chosen causal structure as the criterion for initiation of action, and propose that the gaze shift is implemented when this confidence measure reaches a threshold. The experimentally observed variability of reaction time is simulated as a function of the spatiotemporal and reliability features of the cross-modal stimuli. The third, motor problem is considered to be solved downstream of the first two networks (chapter 4). We propose a kinematic strategy that coordinates eye-in-head and head-on-shoulder movements, in both spatial and temporal dimensions, in order to shift the line of sight toward the inferred position of the goal. The variability in the contributions of eye and head movements to the gaze shift is modeled as a function of the retinal error and the initial orientations of the eyes and head. The three models should be viewed as parts of a single executive program that integrates perceptual and motor processing across time and space.
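The evidence-accumulation idea in chapters 2-3 can be sketched as follows. This is a hedged illustration, not the thesis' actual model: each noisy audio-visual disparity sample updates a log-likelihood ratio comparing a common-source against a separate-sources generative model, and the decision is triggered when confidence reaches a bound. All distributions and parameter values are assumptions made for the sketch.

```python
import math, random

def common_source_decision(samples, s_common=3.0, s_sep=15.0, bound=5.0):
    """Accumulate a log-likelihood ratio (common vs separate source)
    over disparity samples; commit when |LLR| reaches the bound.
    Returns (decision, number of samples used)."""
    k = 1 / (2 * s_sep**2) - 1 / (2 * s_common**2)  # < 0: big disparities favour "separate"
    c = math.log(s_sep / s_common)                   # > 0: small disparities favour "common"
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += c + k * x * x
        if abs(llr) >= bound:
            return ("integrate" if llr > 0 else "segregate", n)
    return ("undecided", len(samples))

rng = random.Random(0)
near = [rng.gauss(0.0, 3.0) for _ in range(50)]   # cues nearly aligned (deg)
far = [rng.gauss(25.0, 3.0) for _ in range(50)]   # cues far apart (deg)
print(common_source_decision(near))
print(common_source_decision(far))
```

In this reading, reaction-time variability falls out naturally as the number of samples needed to reach the bound, which depends on the spatial disparity and reliability of the cues.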

    Spatial Transformations in Frontal Cortex During Memory-Guided Head-Unrestrained Gaze Shifts

    We constantly orient our line of sight (i.e., gaze) to external objects in our environment. One of the central questions in sensorimotor neuroscience concerns how visual input (registered on the retina) is transformed into appropriate signals that drive gaze shifts, which comprise coordinated movements of the eyes and the head. In this dissertation I investigated the function of a node in the frontal cortex known as the frontal eye field (FEF) by characterizing the spatial transformations that occur within this structure. The FEF is implicated as a key node in gaze control and as part of the working-memory network. I recorded the activity of single FEF neurons in head-unrestrained monkeys as they performed a simple memory-guided gaze task which required delayed gaze shifts (by a few hundred milliseconds) toward remembered visual stimuli. Using an analysis method that fits spatial models to neuronal response fields, I identified the spatial code embedded in neuronal activity related to vision (visual response), memory (delay response), and gaze shifts (movement response). First (Chapter 2), spatial transformations within the FEF were identified by comparing the spatial codes of visual and movement responses. I showed eye-centered dominance in both neuronal responses (and excluded head- and space-centered coding); however, whereas the visual response encoded target position, the movement response encoded the position of the imminent gaze shift (and not its independent eye and head components), and this was observed even within single neurons. In Chapter 3, I characterized the time course of this target-to-gaze transition by identifying the spatial code during the intervening delay period. The results highlighted two major transitions within the FEF: a gradual transition across the visual-delay-movement span of delay-responsive neurons, followed by a discrete transition between delay-responsive neurons and pre-saccadic neurons that fire exclusively around the time of the gaze movement. These results show that the FEF is involved in memory-based transformations for gaze control; but instead of encoding specific movement parameters (eye and head), it encodes the desired gaze endpoint. The representations of the movement goal are subject to noise, and this noise accumulates at different stages related to different mechanisms.
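The model-comparison logic behind fitting spatial models to response fields can be sketched as follows. This is a hedged toy version, not the dissertation's fitting code: we simulate a neuron tuned to target position in eye-centred coordinates, then ask whether an eye-centred or a space-centred Gaussian response-field model predicts its firing better. All tuning parameters and trial distributions are illustrative assumptions.

```python
import math, random

rng = random.Random(42)

def tuning(x, center, sigma=10.0):
    # Gaussian response field over a 1-D position variable (deg).
    return math.exp(-(x - center)**2 / (2 * sigma**2))

# Simulated trials: target position in space, variable initial eye
# position; the "neuron" is tuned in eye-centred coordinates.
trials = []
for _ in range(200):
    t_space = rng.uniform(-30, 30)
    eye = rng.uniform(-15, 15)
    t_eye = t_space - eye                     # eye-centred target position
    rate = tuning(t_eye, 5.0) + rng.gauss(0, 0.05)
    trials.append((t_space, t_eye, rate))

def best_fit_error(coord_index):
    """Grid-search a response-field centre in the given coordinate
    frame (0 = space-centred, 1 = eye-centred); return the best
    residual sum of squares."""
    best = float("inf")
    for center in range(-40, 41):
        sse = 0.0
        for tr in trials:
            sse += (tr[2] - tuning(tr[coord_index], center))**2
        best = min(best, sse)
    return best

err_space, err_eye = best_fit_error(0), best_fit_error(1)
print(err_eye < err_space)   # the eye-centred model should fit better
```

The same residual-comparison idea extends to intermediate frames and to movement responses, which is what lets the analysis distinguish a target code from a gaze code.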

    Overlapping Structures in Sensory-Motor Mappings

    This paper examines a biologically inspired representation technique designed to support sensory-motor learning in developmental robotics. An interesting feature of the many topographic neural sheets in the brain is that closely packed receptive fields must overlap in order to fully cover a spatial region. This raises interesting scientific questions with engineering implications: e.g., is overlap detrimental? Does it have any benefits? This paper examines the effects and properties of overlap between elements arranged in arrays or maps. In particular, we investigate how overlap affects the representation and transmission of spatial location information on and between topographic maps. Through a series of experiments we determine the conditions under which overlap offers advantages and identify useful ranges of overlap for building mappings in cognitive robotic systems. Our motivation is to understand the phenomenon of overlap in order to provide guidance for its application in sensory-motor learning robots.
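One benefit of overlap can be sketched directly. In this illustrative (assumed) setup, a 1-D topographic map of Gaussian receptive fields encodes a stimulus position, and a centre-of-mass readout decodes it: with overlapping fields, positions between the discrete element centres are recovered accurately, while nearly non-overlapping fields snap the estimate to the closest centre.

```python
import math

N = 21
SPACING = 1.0
CENTERS = [i * SPACING for i in range(N)]   # preferred positions 0..20

def responses(x, overlap):
    """Population response to a stimulus at x; `overlap` is the
    receptive-field width expressed in units of the element spacing."""
    sigma = overlap * SPACING
    return [math.exp(-(x - c)**2 / (2 * sigma**2)) for c in CENTERS]

def decode(resp):
    # Centre-of-mass (population vector) readout of position.
    total = sum(resp)
    return sum(r * c for r, c in zip(resp, CENTERS)) / total

# A stimulus between two centres: broad overlap recovers it almost
# exactly, narrow overlap collapses onto the nearest element.
for overlap in (0.2, 1.0):
    print(overlap, round(decode(responses(7.3, overlap)), 3))
```

This captures, in miniature, why a useful range of overlap exists: too little overlap quantizes position, while (at the other extreme) very wide fields flatten the population response and make it noise-sensitive.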

    The Eye of a Mathematical Physicist

    In this essay we search for neural correlates of ‘doing mathematical physics’. We introduce a toy model of a mathematical physicist: a brain connected with the outside world only by vision and saccadic eye movements, interacting with a computer screen. First, we describe the neuroanatomy of the visuo-saccadic system and Listing's law, which binds saccades to the optics of the eye. Then we explain space-time transformations in the superior colliculus, the performance of a canonical cortical circuit in the frontal eye field, and finally the recurrent interaction of both areas, which leads to a coherent percept of space in spite of saccades. This sets the stage in the brain for doing mathematical physics, which is analyzed in a simple example.

    Intentional maps in posterior parietal cortex

    The posterior parietal cortex (PPC), historically believed to be a sensory structure, is now viewed as an area important for sensory-motor integration. Among its functions is the forming of intentions, that is, high-level cognitive plans for movement. There is a map of intentions within the PPC, with different subregions dedicated to the planning of eye movements, reaching movements, and grasping movements. These areas appear to be specialized for the multisensory integration and coordinate transformations required to convert sensory input to motor output. In several subregions of the PPC, these operations are facilitated by the use of a common distributed space representation that is independent of both sensory input and motor output. Attention and learning effects are also evident in the PPC. However, these effects may be general to cortex and operate in the PPC in the context of sensory-motor transformations.

    The Peri-Saccadic Perception of Objects and Space

    Eye movements affect object localization and object recognition. Around saccade onset, briefly flashed stimuli appear compressed towards the saccade target, receptive fields dynamically change position, and the recognition of objects near the saccade target is improved. These effects have been attributed to different mechanisms. We provide a unifying account of peri-saccadic perception explaining all three phenomena by a quantitative computational approach simulating cortical cell responses at the population level. Contrary to the common view of spatial attention as a spotlight, our model suggests that oculomotor feedback alters the receptive field structure in multiple visual areas at an intermediate level of the cortical hierarchy to dynamically recruit cells for processing a relevant part of the visual field. The compression of visual space occurs at the expense of this locally enhanced processing capacity.
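The compression phenomenon can be illustrated with a toy population-level sketch. This is a hedged gain-field reading of the abstract's "recruitment" idea, not the paper's full model: around saccade onset, cells near the saccade target are gain-enhanced, and a centre-of-mass readout over the unchanged position labels then pulls the decoded location of flashed stimuli toward the target. All parameter values are assumptions.

```python
import math

CENTERS = list(range(-20, 41))   # preferred positions (deg)
SIGMA = 3.0                      # receptive-field width (deg)
TARGET = 10                      # saccade target (deg)

def encode(flash, gain_amp, gain_sigma=5.0):
    """Population response to a flash; gain_amp > 0 models the
    peri-saccadic recruitment of cells near the target."""
    out = []
    for c in CENTERS:
        base = math.exp(-(flash - c)**2 / (2 * SIGMA**2))
        gain = 1.0 + gain_amp * math.exp(-(c - TARGET)**2 / (2 * gain_sigma**2))
        out.append(base * gain)
    return out

def decode(resp):
    # Centre-of-mass readout that still assumes the original map.
    total = sum(resp)
    return sum(r * c for r, c in zip(resp, CENTERS)) / total

# During fixation (no gain) localisation is veridical; near saccade
# onset, flashes on both sides are displaced toward the target.
for flash in (0, 15):
    print(flash,
          round(decode(encode(flash, 0.0)), 2),
          round(decode(encode(flash, 2.0)), 2))
```

The sketch shows only the mislocalisation side of the story; in the full account the same recruitment is what improves recognition near the target, with compression as its cost.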

    A dynamic neural field approach to the covert and overt deployment of spatial attention

    The visual exploration of a scene involves the interplay of several competing processes (for example, to select the next saccade or to keep fixation) and the integration of bottom-up (e.g., contrast) and top-down information (the target of a visual search task). Identifying the neural mechanisms involved in these processes and in the integration of this information remains a challenging question. Visual attention refers to all these processes, both when the eyes remain fixed (covert attention) and when they are moving (overt attention). Popular computational models of visual attention assume that the visual information remains fixed while attention is deployed, whereas primates execute around three saccadic eye movements per second, each abruptly changing that information. We present in this paper a model relying on neural fields, a paradigm for distributed, asynchronous and numerical computations, and show that covert and overt attention can emerge from such a substratum. We identify and propose a possible interaction of four elementary mechanisms for selecting the next locus of attention, memorizing the previously attended locations, anticipating the consequences of eye movements, and integrating bottom-up and top-down information in order to perform a visual search task with saccadic eye movements.
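The neural-field substratum these models build on can be sketched minimally. The following is an illustrative Amari-style 1-D field (all parameter values are assumptions, and the paper's model is far richer): local excitation plus global inhibition yields a self-stabilising bump of activity that persists after the stimulus is removed, i.e. a memory of a previously attended location.

```python
import math

N = 100
DT, TAU, H = 1.0, 10.0, -2.0   # Euler step, time constant, resting level

def gauss_input(center, amp, sigma=4.0):
    return [amp * math.exp(-(i - center)**2 / (2 * sigma**2)) for i in range(N)]

def step(u, stim, c_exc=1.2, sigma_exc=3.0, c_inh=0.4):
    f = [1.0 if ui > 0 else 0.0 for ui in u]       # Heaviside firing rate
    active = [j for j in range(N) if f[j] > 0]
    inh = c_inh * len(active)                      # global inhibition
    new = []
    for i in range(N):
        exc = sum(c_exc * math.exp(-(i - j)**2 / (2 * sigma_exc**2))
                  for j in active)                 # local lateral excitation
        new.append(u[i] + DT / TAU * (-u[i] + H + stim[i] + exc - inh))
    return new

u = [H] * N
stim = gauss_input(30, 5.0)
for _ in range(100):           # stimulus on: a bump forms near position 30
    u = step(u, stim)
for _ in range(150):           # stimulus off: the bump self-sustains
    u = step(u, [0.0] * N)
peak = max(range(N), key=lambda i: u[i])
print(peak, round(max(u), 2))
```

Mechanisms like the paper's selection, memory and anticipation components can all be expressed as variations on this same field dynamics with different inputs and couplings.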