649 research outputs found

    Neural Network Modeling of Sensory-Motor Control in Animals

    National Science Foundation (IRI 90-24877, IRI 87-16960); Air Force Office of Scientific Research (F49620-92-J-0499); Office of Naval Research (N00014-92-J-1309)

    A Self-Organizing Neural Network Model for Redundant Sensory-Motor Control, Motor Equivalence, and Tool Use

    A neural network is introduced which provides a solution of the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space. To do this, the network self-organizes a mapping from motion directions in 3-D space to velocity commands in joint space. Computer simulations demonstrate that, without any additional learning, the network can generate accurate movement commands that compensate for variable tool lengths, clamping of joints, distortions of visual input by a prism, and unexpected limb perturbations. Blind reaches have also been simulated.
    National Science Foundation (IRI-87-16960, IRI-90-24877)

    Neural Representations for Sensory-Motor Control, III: Learning a Body-Centered Representation of 3-D Target Position

    A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation--otherwise known as a parcellated distributed representation--of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
    Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
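The sum/difference opponent scheme in the abstract above can be illustrated numerically. This is a minimal planar sketch, not the model's neural circuit: the angle values and the `cyclopean_coordinates` helper are invented for illustration; only the sum-gives-direction, difference-gives-vergence relationship comes from the text.

```python
import math

def cyclopean_coordinates(theta_left, theta_right):
    """Combine the two eyes' horizontal gaze angles (radians, measured from
    straight ahead in a head-centered frame, positive toward the nose for the
    left eye and away for the right) into a vergence/azimuth pair. The
    difference of the opponent signals carries vergence (distance-related);
    the sum carries the angular direction of cyclopean gaze."""
    vergence = theta_left - theta_right
    azimuth = 0.5 * (theta_left + theta_right)
    return vergence, azimuth

# Eyes converged symmetrically on a target straight ahead at finite distance:
# nonzero vergence, zero azimuth.
v, a = cyclopean_coordinates(math.radians(5.0), math.radians(-5.0))
```

Shifting both angles by the same amount changes only the azimuth (sum) and leaves the vergence (difference) fixed, which is why the two coordinates can be read out independently.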

    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that are used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
    National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499)
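The spatial-direction-to-motor-direction idea above can be sketched with a standard tool. This is not the DIRECT model's learned transform; it is a conventional Jacobian-pseudoinverse resolved-rate controller for a hypothetical redundant planar 3-link arm (link lengths and gains invented), shown only to make the "many joint solutions realize one spatial direction" point concrete.

```python
import numpy as np

LINKS = np.array([1.0, 0.8, 0.6])  # hypothetical link lengths (redundant: 3 joints, 2-D task)

def end_effector(joints):
    """Forward kinematics of a planar 3-link arm; joint angles accumulate."""
    angles = np.cumsum(joints)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def jacobian(joints, eps=1e-6):
    """Numerical Jacobian of end-effector position w.r.t. joint angles."""
    J = np.zeros((2, len(joints)))
    for i in range(len(joints)):
        dq = np.zeros(len(joints)); dq[i] = eps
        J[:, i] = (end_effector(joints + dq) - end_effector(joints - dq)) / (2 * eps)
    return J

def step_toward(joints, target, gain=0.1):
    """One resolved-rate step: the spatial direction vector (target minus end
    effector) is mapped into a motor direction vector (joint rotations) by the
    Jacobian pseudoinverse. Infinitely many joint velocities produce the same
    spatial motion; the pseudoinverse picks the minimum-norm one."""
    direction = target - end_effector(joints)
    dq = np.linalg.pinv(jacobian(joints)) @ (gain * direction)
    return joints + dq

joints = np.array([0.3, 0.4, 0.2])
target = np.array([1.2, 1.0])
for _ in range(200):
    joints = step_toward(joints, target)
```

Because the null space of the Jacobian is free, clamping one joint (dropping its column) still yields joint rotations that move the end effector along the same spatial direction, which is the motor-equivalence property the abstract describes.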

    Neural Representations for Sensory-Motor Control I: Head-Centered 3-D Target Positions from Opponent Eye Commands

    This article describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate the binocular vergence and spherical horizontal and vertical angles with respect to the observer's head. Various psychophysical data concerning binocular distance perception and reaching behavior are clarified by this representation. The representation provides a foundation for learning head-centered and body-centered invariant representations of both foveated and non-foveated 3-D target positions. It also enables a solution to be developed of the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space.
    Air Force Office of Scientific Research (URI 90-0175); Defense Advanced Research Projects Agency (AFOSR-90-0083); National Science Foundation (IRI-87-16960, IRI-90-24877)

    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.
    National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
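The Difference-Vector learning idea above reduces, in its simplest linear form, to an error-correcting delta rule: adapt a map until the DV between the teaching vector and the map's output is zero. The sketch below is that reduction only, not the VAM circuit; the dimensions, learning rate, and the `true_map` stand-in for the environment are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
true_map = rng.normal(size=(3, 6))   # hypothetical fixed environment mapping to be discovered
W = np.zeros((3, 6))                 # adaptive weights, initially untrained

for _ in range(2000):
    x = rng.normal(size=6)           # e.g. a combined eye-position/retinal activity pattern
    teach = true_map @ x             # teaching vector (eyes foveating the target)
    dv = teach - W @ x               # Difference Vector: the internally computed error
    W += 0.05 * np.outer(dv, x)      # learning step that drives the DV toward zero
```

After training, fresh inputs produce a near-zero DV, which is the sense in which the learning process "zeroes" the error signals.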

    A Self-Organizing Neural Network for Learning a Body-Centered Invariant Representation of 3-D Target Position

    This paper describes a self-organizing neural network that rapidly learns a body-centered representation of 3-D target positions. This representation remains invariant under head and eye movements, and is a key component of sensory-motor systems for producing motor equivalent reaches to targets (Bullock, Grossberg, and Guenther, 1993).
    National Science Foundation (IRI-87-16960, IRI-90-24877); Air Force Office of Scientific Research (F49620-92-J-0499)

    Seasonal and interannual variability of North American isoprene emissions as determined by formaldehyde column measurements from space

    Formaldehyde (HCHO) columns measured from space by solar UV backscatter allow mapping of reactive hydrocarbon emissions. The principal contributor to these emissions during the growing season is the biogenic hydrocarbon isoprene, which is of great importance for driving regional and global tropospheric chemistry. We present seven years (1995-2001) of HCHO column data for North America from the Global Ozone Monitoring Experiment (GOME), and show that the general seasonal and interannual variability of these data is consistent with knowledge of isoprene emission. There are some significant regional discrepancies with the seasonal patterns predicted from current isoprene emission models, and we suggest that these may reflect flaws in the models. The interannual variability of HCHO columns observed by GOME appears to follow the interannual variability of surface temperature, as expected from current isoprene emission models.

    RFID Controlled Door Lock

    This device is an automated door lock attachment for an existing deadbolt-locked door. The device takes an exterior RFID input and can both lock and unlock the door. The device also implements an exterior doorbell switch with which a guest can turn on a buzzer, and an interior switch for the operator to unlock the door from the inside. The front interface of the device is pictured in Figure 1; the working mechanism that turns the lock is pictured in Figure 2; and the housing for the processing and wiring is pictured in Figure 3. The logic is implemented using a PIC16F88 and an Arduino Nano. The code implemented on the PIC is located in Appendix A1.1; the code for the Arduino is in A1.2; and the wiring diagram that ties the entire design together is in A2.
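The control behavior described above can be sketched as a small state machine. This is an illustrative model only, not the firmware from the appendices: the tag IDs, class name, and method names are all invented, and the real device drives a deadbolt motor rather than a boolean.

```python
# Hypothetical model of the lock controller's decision logic.
AUTHORIZED_TAGS = {"04A1B2C3", "04D5E6F7"}  # invented example tag IDs

class DoorLockController:
    def __init__(self):
        self.locked = True       # door starts deadbolt-locked
        self.buzzer_on = False

    def on_rfid_scan(self, tag_id):
        """Exterior RFID reader: an authorized tag toggles the deadbolt;
        an unknown tag leaves the state unchanged."""
        if tag_id in AUTHORIZED_TAGS:
            self.locked = not self.locked
        return self.locked

    def on_interior_switch(self):
        """Interior switch: the operator can always unlock from inside."""
        self.locked = False

    def on_doorbell(self, pressed):
        """Exterior doorbell switch drives the buzzer directly."""
        self.buzzer_on = pressed

lock = DoorLockController()
lock.on_rfid_scan("04A1B2C3")   # authorized tag: door unlocks
lock.on_rfid_scan("DEADBEEF")   # unknown tag: no change
```

Splitting the logic this way mirrors the described hardware partition: the RFID path and the two switches are independent inputs that all resolve to lock, unlock, or buzzer actions.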