    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
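
    The goal-as-attractor, obstacle-as-repeller interaction described above can be illustrated with a minimal behavioral-dynamics sketch in Python. This is not the paper's neural circuit; it follows the general form of steering-dynamics models in which attraction grows with the angular error to the goal and repulsion decays with an obstacle's angular offset and distance, and every gain below (k_goal, k_obs, the decay constants) is an illustrative assumption.

        import math

        def heading_rate(phi, goal_angle, obstacle_angles, obstacle_dists,
                         k_goal=3.0, k_obs=10.0, c=2.0):
            """Rate of change of heading phi (radians): the goal attracts,
            obstacles repel. All gains are illustrative assumptions."""
            dphi = -k_goal * (phi - goal_angle)  # attraction toward the goal
            for theta, d in zip(obstacle_angles, obstacle_dists):
                err = phi - theta
                # Repulsion pushes heading away from the obstacle direction,
                # fading exponentially with angular offset and with distance.
                dphi += k_obs * err * math.exp(-c * abs(err)) * math.exp(-0.2 * d)
            return dphi

        # Example: goal straight ahead, one obstacle 0.2 rad to the left and near.
        phi = 0.0
        for _ in range(500):  # Euler integration, dt = 0.01
            phi += 0.01 * heading_rate(phi, goal_angle=0.0,
                                       obstacle_angles=[-0.2], obstacle_dists=[2.0])
        print(f"settled heading: {phi:.3f} rad (veers right, around the obstacle)")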

    Summation of visual and mechanosensory feedback in Drosophila flight control

    The fruit fly Drosophila melanogaster relies on feedback from multiple sensory modalities to control flight maneuvers. Two sensory organs, the compound eyes and mechanosensory hindwings called halteres, are capable of encoding angular velocity of the body during flight. Although motor reflexes driven by the two modalities have been studied individually, little is known about how the two sensory feedback channels are integrated during flight. Using a specialized flight simulator, we presented tethered flies with simultaneous visual and mechanosensory oscillations while measuring compensatory changes in stroke kinematics. By varying the relative amplitude, phase, and axis of rotation of the visual and mechanical stimuli, we were able to determine the contribution of each sensory modality to the compensatory motor reflex. Our results show that over a wide range of experimental conditions, sensory inputs from halteres and the visual system are combined in a weighted sum. Furthermore, the weighting structure places greater influence on feedback from the halteres than from the visual system.
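
    The weighted-sum integration reported here is easy to state concretely. The sketch below is a linear-fusion illustration, not the paper's fitted model; the specific weights are placeholder assumptions whose only constraint, per the abstract, is that the haltere weight exceeds the visual weight.

        import numpy as np

        def fused_response(haltere_signal, visual_signal,
                           w_haltere=0.7, w_visual=0.3):
            """Compensatory reflex modeled as a weighted sum of the two
            feedback channels. Weights are placeholders (w_haltere > w_visual
            is the abstract's finding; 0.7/0.3 is an assumption)."""
            return (w_haltere * np.asarray(haltere_signal)
                    + w_visual * np.asarray(visual_signal))

        # Two 2 Hz angular-velocity stimuli differing in phase:
        t = np.linspace(0.0, 1.0, 200)
        haltere = np.sin(2 * np.pi * 2 * t)             # mechanical oscillation
        visual = np.sin(2 * np.pi * 2 * t + np.pi / 4)  # phase-shifted visual
        reflex = fused_response(haltere, visual)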

    Force reflecting hand controller

    A universal input device for interfacing a human operator with a slave machine such as a robot or the like includes a plurality of serially connected mechanical links extending from a base. A handgrip is connected to the mechanical links distal from the base such that a human operator may grasp the handgrip and control its position relative to the base through the mechanical links. A plurality of rotary joints is arranged to connect the mechanical links together to provide at least three translational degrees of freedom and at least three rotational degrees of freedom of motion of the handgrip relative to the base. A cable and pulley assembly for each joint is connected to a corresponding motor for transmitting forces from the slave machine to the handgrip to provide kinesthetic feedback to the operator, and for producing control signals that may be transmitted from the handgrip to the slave machine. The device provides excellent kinesthetic feedback and high-fidelity force/torque feedback, with a kinematically simple structure, mechanically decoupled motion in all six degrees of freedom, and zero backlash. The device also has a much larger work envelope, greater stiffness and responsiveness, smaller stowage volume, and better overlap of the human operator's range of motion than previous designs.
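
    The forward command path and the force-reflection path described above form a standard bilateral teleoperation loop. The sketch below shows one control cycle of such a loop; the function, its gains, and the 6-vector pose convention are hypothetical illustrations, not the device's actual control law.

        def teleop_step(master_pose, slave_pose, slave_wrench, k_p=50.0, k_f=0.8):
            """One bilateral control cycle: the handgrip pose commands the
            slave, and the slave's measured contact wrench is scaled and
            reflected back to the handgrip motors as kinesthetic feedback.
            Gains and the 6-DOF pose layout (3 translations + 3 rotations)
            are illustrative assumptions."""
            slave_command = [k_p * (m - s) for m, s in zip(master_pose, slave_pose)]
            handgrip_torques = [k_f * w for w in slave_wrench]
            return slave_command, handgrip_torques

        cmd, feedback = teleop_step(
            master_pose=[0.1, 0.0, 0.2, 0.0, 0.0, 0.1],
            slave_pose=[0.0] * 6,
            slave_wrench=[0.0, 0.0, 5.0, 0.0, 0.0, 0.0])  # 5 N contact along z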

    A comparison of visual and haltere-mediated equilibrium reflexes in the fruit fly Drosophila melanogaster

    Flies exhibit extraordinary maneuverability, relying on feedback from multiple sensory organs to control flight. Both the compound eyes and the mechanosensory halteres encode angular motion as the fly rotates about the three body axes during flight. Since these two sensory modalities differ in their mechanisms of transduction, they are likely to differ in their temporal responses. We recorded changes in stroke kinematics in response to mechanical and visual rotations delivered within a flight simulator. Our results show that the visual system is tuned to relatively slow rotation, whereas the haltere-mediated response to mechanical rotation increases with rising angular velocity. The integration of feedback from these two modalities may enhance aerodynamic performance by enabling the fly to sense a wide range of angular velocities during flight.
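
    The division of labor reported here (visual system tuned to slow rotation, halteres increasingly responsive at high angular velocity) is analogous to a complementary filter in engineering, where a low-passed slow sensor and a high-passed fast sensor are summed to cover the full bandwidth. The sketch below is that engineering analogy only, not a model of the fly's circuitry; the filter constant is an arbitrary assumption.

        import numpy as np

        def complementary_fusion(visual, haltere, alpha=0.9):
            """Sum a low-passed 'visual' channel (keeps slow content) and a
            high-passed 'haltere' channel (keeps fast content). alpha is an
            illustrative constant, not a measured quantity."""
            fused = np.zeros(len(visual))
            lp, hp, prev_h = 0.0, 0.0, 0.0
            for i, (v, h) in enumerate(zip(visual, haltere)):
                lp = alpha * lp + (1.0 - alpha) * v   # low-pass: slow rotations
                hp = alpha * (hp + h - prev_h)        # high-pass: fast rotations
                prev_h = h
                fused[i] = lp + hp
            return fused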

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
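
    The paper's model is a multi-stage neural circuit, but the geometric fact it exploits is that under pure translation all optic-flow vectors radiate from a single focus of expansion (FoE), whose image position gives the heading. The sketch below is only that textbook least-squares baseline, not the neural model; the synthetic flow field is invented for the example.

        import numpy as np

        def focus_of_expansion(points, flows):
            """Least-squares FoE estimate for a pure-translation flow field.
            Each flow (u, v) at (x, y) is radial from the FoE (x0, y0), so
            v*(x - x0) - u*(y - y0) = 0, which is linear in (x0, y0)."""
            pts = np.asarray(points, dtype=float)
            uv = np.asarray(flows, dtype=float)
            A = np.column_stack([uv[:, 1], -uv[:, 0]])        # rows [v, -u]
            b = uv[:, 1] * pts[:, 0] - uv[:, 0] * pts[:, 1]   # v*x - u*y
            foe, *_ = np.linalg.lstsq(A, b, rcond=None)
            return foe  # heading direction in image coordinates

        # Synthetic radial field expanding from (10, -5):
        rng = np.random.default_rng(0)
        pts = rng.uniform(-100, 100, size=(200, 2))
        flows = 0.05 * (pts - np.array([10.0, -5.0]))
        print(focus_of_expansion(pts, flows))  # approximately [10. -5.]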

    A spatial impedance controller for robotic manipulation

    Mechanical impedance is the dynamic generalization of stiffness, and by definition determines interactive behavior. Although the argument for explicitly controlling impedance is strong, impedance control has had only a modest impact on robotic manipulator control practice. This is due in part to the fact that it is difficult to select suitable impedances for a given task. A spatial impedance controller is presented that simplifies impedance selection. Impedance is characterized using "spatially affine" families of compliance and damping, which are parameterized by nonspatial and spatial parameters. Nonspatial parameters are selected independently of the configuration of the object with which the robot must interact. Spatial parameters depend on object configurations, but transform in an intuitive, well-defined way. Control laws corresponding to these compliance and damping families are derived assuming a commonly used robot model. While the compliance control law was implemented in simulation and on a real robot, this paper emphasizes the underlying theory.
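
    The generic law underlying any Cartesian impedance controller is a spring-damper relation about a desired trajectory; the paper's contribution is how the stiffness and damping are parameterized, not this law itself. The sketch below is the textbook form with plain diagonal matrices, not the spatially affine families the paper derives; all numbers are illustrative.

        import numpy as np

        def impedance_wrench(x, x_dot, x_des, x_des_dot, K, B):
            """Textbook Cartesian impedance law: command a wrench so the end
            effector behaves like a spring-damper about (x_des, x_des_dot).
            K and B are plain 3x3 matrices, an illustrative simplification."""
            return K @ (x_des - x) + B @ (x_des_dot - x_dot)

        # Stiff along z, compliant in x/y (N/m and N*s/m, illustrative):
        K = np.diag([100.0, 100.0, 1000.0])
        B = np.diag([20.0, 20.0, 60.0])
        f = impedance_wrench(x=np.array([0.0, 0.0, 0.01]), x_dot=np.zeros(3),
                             x_des=np.zeros(3), x_des_dot=np.zeros(3), K=K, B=B)
        print(f)  # [0. 0. -10.]: pushes back against the 1 cm intrusion in z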

    Encapsulating and representing the knowledge on the evolution of an engineering system

    This paper proposes a cross-disciplinary methodology for a fundamental question in product development: how can the innovation patterns during the evolution of an engineering system (ES) be encapsulated so that they can later be mined with data-mining methods? Reverse engineering answers the question of which components a developed engineering system consists of, and how the components interact to make the working product. TRIZ answers the question of which problem-solving principles can be, or have been, employed in developing that system, in comparison to its earlier versions or with respect to similar systems. While these two methodologies have been very popular, to the best of our knowledge there does not yet exist a methodology that reverse-engineers, encapsulates, and represents the information regarding the complete product development process in abstract terms. This paper suggests such a methodology, consisting of mathematical formalism, graph visualization, and database representation. The proposed approach is demonstrated by analyzing the design and development process for a prototype wrist-rehabilitation robot.
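
    One natural way to encapsulate such design-evolution knowledge is as a directed, labeled graph of versions, components, and relations, which can then be stored in a database and mined. The sketch below illustrates only that generic representation; every node, edge, and annotation is invented for the example and is not the paper's formalism or its wrist-robot data.

        # All labels below are invented examples (hypothetical), not the
        # paper's formalism: nodes are system versions or components, edges
        # carry relations such as "evolved_into" or "interacts_with".
        design_graph = {
            "nodes": {
                "wrist_robot_v1": {"type": "system"},
                "wrist_robot_v2": {"type": "system"},
                "dc_motor":       {"type": "component"},
                "cable_drive":    {"type": "component"},
            },
            "edges": [
                ("wrist_robot_v1", "wrist_robot_v2", "evolved_into",
                 {"triz_principle": "segmentation"}),  # hypothetical annotation
                ("wrist_robot_v2", "cable_drive", "contains", {}),
                ("dc_motor", "cable_drive", "interacts_with", {}),
            ],
        }

        def successors(graph, node, relation):
            """All nodes reached from `node` via edges of the given relation."""
            return [dst for src, dst, rel, _ in graph["edges"]
                    if src == node and rel == relation]

        print(successors(design_graph, "wrist_robot_v1", "evolved_into"))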

    Navigating large-scale virtual environments: what differences occur between helmet-mounted and desk-top displays?

    Participants used a helmet-mounted display (HMD) and a desk-top (monitor) display to learn the layouts of two large-scale virtual environments (VEs) through repeated, direct navigational experience. Both VEs were "virtual buildings" containing more than seventy rooms. Participants using the HMD navigated the buildings significantly more quickly and developed a significantly more accurate sense of relative straight-line distance. There was no significant difference between the two types of display in terms of the distance that participants traveled or the mean accuracy of their direction estimates. Behavioral analyses showed that participants took advantage of the natural, head-tracked interface provided by the HMD in ways that included "looking around" more often while traveling through the VEs, and spending less time stationary in the VEs while choosing a direction in which to travel.