
    Integrating Brain and Biomechanical Models—A New Paradigm for Understanding Neuro-muscular Control

    To date, realistic models of how the central nervous system governs behavior have been restricted in scope to the brain, brainstem or spinal cord, as if these existed as disembodied organs. Further, the model is often exercised in relation to an in vivo physiological experiment with input comprising an impulse, a periodic signal or constant activation, and output as a pattern of neural activity in one or more neural populations. Any link to behavior is inferred only indirectly via these activity patterns. We argue that to discover the principles of operation of neural systems, it is necessary to express their behavior in terms of physical movements of a realistic motor system, and to supply inputs that mimic sensory experience. To do this with confidence, we must connect our brain models to neuro-muscular models and provide relevant visual and proprioceptive feedback signals, thereby closing the loop of the simulation. This paper describes an effort to develop just such an integrated brain and biomechanical system using a number of pre-existing models. It describes a model of the saccadic oculomotor system incorporating a neuromuscular model of the eye and its six extraocular muscles. The position of the eye determines how illumination of a retinotopic input population projects information about the location of a saccade target into the system. A pre-existing saccadic burst generator model was incorporated into the system, which generated motoneuron activity patterns suitable for driving the biomechanical eye. The model was demonstrated to make accurate saccades to a target luminance under a set of environmental constraints. Challenges encountered in the development of this model showed the importance of this integrated modeling approach. Thus, we exposed shortcomings in individual model components which were only apparent when these were supplied with the more plausible inputs available in a closed-loop design. 
Consequently, we were able to suggest missing functionality that the system would require to reproduce more realistic behavior. The construction of such closed-loop animal models constitutes a new paradigm of computational neurobehavior and promises a more thoroughgoing approach to understanding the brain's function as a controller for movement and behavior.
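The closed-loop principle the abstract describes can be sketched in a few lines of code. This is an illustrative toy, not the paper's actual model: the burst gain, time step, and first-order eye plant are all assumptions made for the sketch.

```python
import numpy as np

def simulate_saccade(target_deg, n_steps=300, dt=0.001, burst_gain=60.0):
    """Toy closed-loop saccade: the retinal error (target position minus
    eye position) drives a burst signal, which moves a simple eye plant;
    the new eye position in turn updates the retinal error."""
    eye = 0.0              # eye position (deg)
    trajectory = [eye]
    for _ in range(n_steps):
        retinal_error = target_deg - eye       # feedback via the retinotopic input
        velocity = burst_gain * retinal_error  # burst-generator drive (deg/s)
        eye += velocity * dt                   # integrate the eye plant
        trajectory.append(eye)
    return np.array(trajectory)

traj = simulate_saccade(10.0)  # saccade to a 10-degree target
```

Because the error is fed back on every step, the eye approaches the target and stops there without an explicit end-of-saccade command, which is the essential property a closed-loop design adds over open-loop stimulation of an isolated brain model.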

    Contribution of the Primate Frontal Cortex to Eye Movements and Neuronal Activity in the Superior Colliculus

    Humans and non-human primates must precisely align the eyes on an object to view it with high visual acuity. An important role of the oculomotor system is to generate accurate eye movements, such as saccades, toward a target. Given that each eye has only six muscles that rotate the eye in three degrees of freedom, this relatively simple volitional movement has allowed researchers to thoroughly characterize the brain areas involved in its generation. In particular, the midbrain Superior Colliculus (SC) is recognized as having a primary role in the generation of visually-guided saccades via the integration of sensory and cognitive information. One important source of sensory and cognitive information to the SC is the Frontal Eye Fields (FEF). The role of the FEF and SC in visually-guided saccades has been well studied using anatomical and functional techniques, but only a handful of studies have investigated how these areas work together to produce saccades. While it is assumed that the FEF exerts its influence on saccade generation through the SC, it remains unknown what happens in the SC when the FEF is suddenly inactivated. To address this question, I used the combined approach of FEF cryogenic inactivation and SC neuronal recordings, which also provides a valuable opportunity to understand how FEF inputs to the SC govern saccade preparation. First, however, it was necessary to characterize the eye movement deficits following FEF inactivation, as it was unknown how a large and reversible FEF inactivation would influence saccade behaviour, or whether cortical areas influence fixational eye movements (e.g. microsaccades). Four major results emerged from this thesis. First, FEF inactivation delayed saccade reaction times (SRT) in both directions. Second, FEF inactivation impaired microsaccade generation and also selectively reduced microsaccades following peripheral cues. 
Third, FEF inactivation decreased visual, cognitive, and saccade-related activity in the ipsilesional SC. Fourth, the delayed onset of saccade-related SC activity best explained SRT increases during FEF inactivation, implicating one mechanism for how FEF inputs govern saccade preparation. Together, these results provide new insights into the FEF's role in saccade and microsaccade behaviour, and how the oculomotor system commits to a saccade.

    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that are used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. 
Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control. National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499).
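The core DIRECT transformation, a spatial direction vector mapped into a motor direction vector for a redundant arm, can be illustrated with a standard Jacobian-pseudoinverse controller. This sketch substitutes an analytically computed pseudoinverse for DIRECT's learned mapping, so it demonstrates the motor-equivalence idea rather than the model itself; the link lengths, step size, and iteration count are arbitrary choices for the example.

```python
import numpy as np

def forward_kinematics(angles, lengths):
    """End-effector (x, y) of a planar arm with relative joint angles."""
    cum = np.cumsum(angles)
    return np.array([np.sum(lengths * np.cos(cum)),
                     np.sum(lengths * np.sin(cum))])

def jacobian(angles, lengths):
    """2 x n Jacobian of end-effector position w.r.t. joint angles."""
    cum = np.cumsum(angles)
    J = np.zeros((2, len(angles)))
    for j in range(len(angles)):
        # rotating joint j moves every link from j outward
        J[0, j] = -np.sum(lengths[j:] * np.sin(cum[j:]))
        J[1, j] = np.sum(lengths[j:] * np.cos(cum[j:]))
    return J

def reach(target, angles, lengths, step=0.1, iters=300):
    """Repeatedly map the desired spatial direction into joint rotations,
    using the pseudoinverse as a stand-in for DIRECT's learned transform."""
    for _ in range(iters):
        spatial_dir = target - forward_kinematics(angles, lengths)
        motor_dir = np.linalg.pinv(jacobian(angles, lengths)) @ spatial_dir
        angles = angles + step * motor_dir
    return angles

lengths = np.array([1.0, 1.0, 1.0])
target = np.array([1.5, 1.5])
# Motor equivalence: two different starting postures reach the same target
# through different final joint configurations.
final_angles = reach(target, np.array([0.1, 0.2, 0.3]), lengths)
alt_angles = reach(target, np.array([0.5, -0.4, 0.9]), lengths)
```

Because the three-joint arm has more degrees of freedom than the two-dimensional task space, many joint configurations place the fingertip on the target, which is exactly the redundancy the model exploits for tool use and clamped-joint reaches.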

    Reaching for the light: The prioritization of conspicuous visual stimuli for reflexive target-directed reaching

    The degree to which something stands out against the background of its environment communicates important information. The phenomenon of camouflage is a testament to the degree to which visual salience and probability of survival overlap. Salient stimuli often elicit fast, reflexive movements in order to catch prey or avoid a predator. The overarching goal of the work presented in this thesis is to investigate how the physical salience of visual stimuli influences the programming and execution of reaching movements. I approached this question by recording kinematics and muscle responses during reaching movements. Broadly, this thesis investigates the effect of the physical salience of targets on the magnitude and latency of involuntary, spatially tuned muscle responses toward those targets. In Chapters 2 and 3, subjects reached toward an array of potential targets on a touchscreen. The final target was cued only after the reaching movement was initiated. From trial to trial, targets differed in their numerosity (i.e., how many on the left versus the right) and in their salience (i.e., their relative contrast with the background). Different amounts of delay were introduced between the appearance of the targets and the cue to move. The results from these two studies demonstrate that the physical salience of (i.e., the luminance contrast differences between) targets influences the timing and the magnitude of involuntary deviations toward the most salient target(s) during reaching movements. At the level of individual subjects, the degree to which someone involuntarily reached toward the salient stimulus was predicted by the relationship between processing speeds for the different target contrasts. In Chapter 4, subjects reached toward individual targets that varied in luminance contrast. Muscle activity in the right pectoralis major was recorded with intramuscular electrodes. 
Consistent with past studies, there was a muscle response time-locked to the appearance of the target, regardless of the reaction time of the ensuing reaching movement. The same processing-speed differences and magnitude modulations observed in Chapters 2 and 3 (due to the different luminance contrast values of the targets) were observed in these stimulus-locked muscle responses. Further testing revealed that stimulus-locked responses were also elicited by a delayed, spatially uninformative go-cue.

    Building Bridges between Perceptual and Economic Decision-Making: Neural and Computational Mechanisms

    Investigation into the neural and computational bases of decision-making has proceeded in two parallel but distinct streams. Perceptual decision-making (PDM) is concerned with how observers detect, discriminate, and categorize noisy sensory information. Economic decision-making (EDM) explores how options are selected on the basis of their reinforcement history. Traditionally, the sub-fields of PDM and EDM have employed different paradigms, proposed different mechanistic models, explored different brain regions, and disagreed about whether decisions approach optimality. Nevertheless, we argue that there is a common framework for understanding decisions made in both tasks, under which an agent has to combine sensory information (what is the stimulus?) with value information (what is it worth?). We review computational models of the decision process typically used in PDM, based around the idea that decisions involve a serial integration of evidence, and assess their applicability to decisions between goods and gambles. Subsequently, we consider the contribution of three key brain regions – the parietal cortex, the basal ganglia, and the orbitofrontal cortex (OFC) – to perceptual and economic decision-making, with a focus on the mechanisms by which sensory and reward information are integrated during choice. We find that although the parietal cortex is often implicated in the integration of sensory evidence, there is also evidence for its role in encoding the expected value of a decision. Similarly, although much research has emphasized the role of the striatum and OFC in value-guided choices, they may also play an important role in the categorization of perceptual information. In conclusion, we consider how findings from the two fields might be brought together, in order to move toward a general framework for understanding decision-making in humans and other primates.
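The "serial integration of evidence" models reviewed here are typified by the drift-diffusion model. The sketch below is a generic textbook version, not any specific model from the review; folding value into the starting point is just one illustrative way of combining sensory and reward information, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def ddm_trial(drift, start_bias=0.0, threshold=1.0, dt=0.001, noise=1.0):
    """One trial: noisy sensory evidence accumulates serially until it
    crosses +threshold (choose A) or -threshold (choose B). `start_bias`
    shifts the starting point toward the more valuable option, one
    illustrative way to mix value information into the sensory race."""
    x, t = start_bias, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else -1), t

# With a positive drift (the stimulus favours option A), most trials
# terminate at the upper bound; the crossing time is the decision time.
trials = [ddm_trial(drift=1.5) for _ in range(500)]
p_choose_a = np.mean([choice == 1 for choice, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
```

Setting `start_bias` above zero, as reward history might, raises the A-choice rate and shortens A decision times without touching the sensory drift, which is the kind of sensory-value separation the common framework above makes explicit.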

    A Unified Cognitive Model of Visual Filling-In Based on an Emergic Network Architecture

    The Emergic Cognitive Model (ECM) is a unified computational model of visual filling-in based on the Emergic Network architecture. The Emergic Network was designed to help realize systems undergoing continuous change. In this thesis, eight different filling-in phenomena are demonstrated under a regime of continuous eye movement (and under static eye conditions as well). ECM indirectly demonstrates the power of unification inherent in Emergic Networks when cognition is decomposed into finer-grained functions supporting change. These can interact to give rise to additional emergent behaviours via cognitive re-use, hence the Emergic prefix throughout. Nevertheless, the model is robust and parameter-free. Differential re-use occurs in the way the model interacts with a particular testing paradigm. ECM has a novel decomposition owing to the requirements of handling motion and of supporting unified modelling via finer functional grains. The breadth of phenomenal behaviour covered is largely intended to lend credence to this novel decomposition. The Emergic Network architecture is a hybrid between classical connectionism and classical computationalism that facilitates the construction of unified cognitive models. It helps cut functionalism into finer grains distributed over space (by harnessing massive recurrence) and over time (by harnessing continuous change), yet simplifies by using standard computer code to focus on the interaction of information flows. Thus, while the structure of the network looks neurocentric, the dynamics are best understood in flow-centric terms. Surprisingly, dynamical systems analysis (as usually understood) is not involved. An Emergic Network is engineered much like straightforward software or hardware systems that deal with continuously varying inputs. Ultimately, this thesis addresses the problem of reduction and induction over complex systems, and the Emergic Network architecture is merely a tool to assist in this epistemic endeavour. 
ECM is strictly a sensory model, apart from perception, yet it is informed by phenomenology. It addresses the attribution problem of how much of a phenomenon is best explained at a sensory level of analysis rather than at a perceptual one. As the causal information flows are stable under eye movement, we hypothesize that they are the locus of consciousness, howsoever it is ultimately realized.

    Cerebellar Codings for Control of Compensatory Eye Movements

    This thesis focuses on the cerebellum's control of motor behaviour, and more specifically on the role of cerebellar Purkinje cells in exerting this control. As the cerebellum is an online control system, we look at both motor performance and learning, trying to identify the components involved at the molecular, cellular and network levels. To study the cerebellum we used the vestibulocerebellum, with visual and vestibular stimulation as input and eye movements as recorded output. The advantage of the vestibulocerebellum over other parts of the cerebellum is that the input given is highly controllable, while the output can be reliably measured, and performance and learning can be easily studied. In addition, we conducted electrophysiological recordings from the vestibulocerebellum, in particular of Purkinje cells in the flocculus. Combining the spiking behaviour of Purkinje cells with visual input and eye movement output allowed us to study how the cerebellum functions, and using genetically modified animals we could determine the role of different elements in this system. To provide some insight into the techniques used and the theory behind them, we discuss the following topics in this introduction: compensatory eye movements; the anatomy of pathways to, within and out of the flocculus; the cellular physiology of Purkinje cells in relation to performance; and the plasticity mechanisms related to motor learning.

    Dynamic and Integrative Properties of the Primary Visual Cortex

    The ability to derive meaning from complex, ambiguous sensory input requires the integration of information over both space and time, as well as cognitive mechanisms to dynamically shape that integration. We have studied these processes in the primary visual cortex (V1), where neurons have been proposed to integrate visual inputs along a geometric pattern known as the association field (AF). We first used cortical reorganization as a model to investigate the role that a specific network of V1 connections, the long-range horizontal connections, might play in temporal and spatial integration across the AF. When retinal lesions ablate sensory information from portions of the visual field, V1 undergoes a process of reorganization mediated by compensatory changes in the network of horizontal collaterals. The reorganization accompanies the brain’s amazing ability to perceptually “fill in”, or “see”, the lost visual input. We developed a computational model to simulate cortical reorganization and perceptual fill-in mediated by a plexus of horizontal connections that encode the AF. The model reproduces the major features of the perceptual fill-in reported by human subjects with retinal lesions, and it suggests that V1 neurons, empowered by their horizontal connections, underlie both perceptual fill-in and normal integrative mechanisms that are crucial to our visual perception. These results motivated the second prong of our work, which was to experimentally study the normal integration of information in V1. Since psychophysical and physiological studies suggest that spatial interactions in V1 may be under cognitive control, we investigated the integrative properties of V1 neurons under different cognitive states. We performed extracellular recordings from single V1 neurons in macaques that were trained to perform a delayed-match-to-sample contour detection task. 
We found that the ability of V1 neurons to summate visual inputs from beyond the classical receptive field (cRF) imbues them with selectivity for complex contour shapes, and that neuronal shape selectivity in V1 changed dynamically according to the shapes monkeys were cued to detect. Over the population, V1 encoded subsets of the AF, predicted by the computational model, that shifted as a function of the monkeys’ expectations. These results support the major conclusions of the theoretical work; moreover, they reveal a sophisticated mode of form processing, whereby the selectivity of the whole network in V1 is reshaped by cognitive state.

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017

    This work was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0, S. Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).