
    Multi-level manual and autonomous control superposition for intelligent telerobot

    Space telerobots are recognized as requiring cooperation with human operators in various ways. Multi-level manual and autonomous control superposition in telerobot task execution is described. The object model, the structured master-slave manipulation system, and the motion understanding system are proposed to realize this concept. The object model offers interfaces for task-level and object-level human intervention. The structured master-slave manipulation system offers interfaces for motion-level human intervention. The motion understanding system maintains the consistency of knowledge across all levels, which supports robot autonomy while accepting human intervention. Superposing execution of the teleoperation task at multiple levels yields intuitive and robust task execution for a wide variety of objects and in changing environments. The performance of several examples of operating chemical apparatus is shown.
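
    To make the superposition concrete, below is a minimal sketch of motion-level command blending, assuming a simple per-axis weighted sum of the operator's master-arm command and the autonomous planner's command. The function name, weights, and velocities are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def superpose_motion_command(manual_cmd, autonomous_cmd, alpha):
    """Blend operator and autonomous velocity commands per axis.

    alpha is a per-axis weight in [0, 1]: 1.0 means pure manual
    intervention on that axis, 0.0 means fully autonomous motion.
    """
    alpha = np.clip(alpha, 0.0, 1.0)
    return alpha * manual_cmd + (1.0 - alpha) * autonomous_cmd

# Example: the operator corrects only the x axis through the master arm
# while the planner keeps autonomous control of y and z.
manual = np.array([0.05, 0.0, 0.0])        # m/s, from the master arm
autonomous = np.array([0.0, 0.02, -0.01])  # m/s, from the task planner
alpha = np.array([1.0, 0.0, 0.0])          # intervene on x only
print(superpose_motion_command(manual, autonomous, alpha))
```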

    Dexterity analysis and robot hand design

    Understanding a dexterous robot hand's ranges of motion is important for precision grasping and precision manipulation. A planar robot hand is studied for object orientation, including ranges of motion measured with respect to the palm, position reaching of a point in the grasped object, and rotation of the object about that reference point. The rotational dexterity index and the dexterity chart are introduced, and an analysis procedure is developed for calculating these quantities. A design procedure is also developed for determining the hand's kinematic parameters based on a desired partial or complete dexterity chart. These procedures have been tested in detail for a planar robot hand with two 2- or 3-link fingers. The derived results are shown to be useful for performance evaluation, kinematic parameter design, and grasping motion planning for a planar robot hand.
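
    The dexterity-chart analysis rests on mapping a finger's reachable motion from its joint ranges. Below is a hedged sketch of that underlying step for one planar 2-link finger: forward kinematics sampled over assumed joint limits to bound the fingertip's reachable region. Link lengths, joint limits, and the function name are illustrative, not the paper's parameters or procedure.

```python
import numpy as np

def fingertip_positions(l1, l2, theta1, theta2):
    """Forward kinematics of a planar 2-link finger rooted at the palm.

    theta1 is the base joint angle and theta2 the relative angle of
    the distal link; the angle arrays are broadcast together.
    """
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Sample assumed joint ranges to bound the fingertip's reachable
# region, the raw material of a motion-range (dexterity) analysis.
t1, t2 = np.meshgrid(np.linspace(0.2, 1.8, 60), np.linspace(0.1, 2.4, 60))
x, y = fingertip_positions(l1=0.05, l2=0.03, theta1=t1, theta2=t2)
print(f"reachable x: [{x.min():.3f}, {x.max():.3f}] m")
print(f"reachable y: [{y.min():.3f}, {y.max():.3f}] m")
```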

    Understanding object motion encoding in the mammalian retina.

    Phototransduction and transmission of visual information down the optic nerve incur delays on the order of 50–100 ms. This implies that the neuronal representation of a moving object should lag behind the object’s actual position. However, studies have demonstrated that the visual system compensates for neuronal delays using a predictive mechanism called phase advancing, which shifts the population response toward the leading edge of a moving object’s retinal image. To understand how this compensation is achieved in the retina, I investigated cellular and synaptic mechanisms that drive phase advancing. I used three approaches, each testing phase advancing at a different organizational level within the mouse retina. First, I studied phase advancing at the level of ganglion cell populations, using two-photon imaging of visually evoked calcium responses. I found populations of phase-advancing OFF-type, ON-type, ON-OFF-type, and horizontally tuned direction-selective ganglion cells. Second, I measured synaptic current responses of individual ganglion cells with patch-clamp electrophysiology and used a computational model to compare the observed responses to simulated responses based on each cell’s spatio-temporal receptive field. Third, I tested whether phase advancing originates presynaptically to ganglion cells by assessing phase advancing at the level of bipolar cell glutamate release, using two-photon imaging of the glutamate biosensor iGluSnFR expressed in the inner plexiform layer. Based on the results of my experiments, I conclude that bipolar and ganglion cell receptive field structure generates phase-advanced responses and acts to compensate for neuronal delays within the retina.
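
    As a toy illustration of how receptive-field structure alone can phase-advance a population response, the sketch below filters a moving bar through a population of linear cells and compares a monophasic (purely integrating) temporal filter with a biphasic (differentiating) one. This is a generic textbook-style mechanism written for illustration; the kernels, bar speed, and receptive-field widths are assumptions, not the thesis's fitted model.

```python
import numpy as np

dt = 0.001                           # s
t = np.arange(0.0, 1.0, dt)
cells = np.arange(0.0, 3.0, 0.01)    # receptive-field centers, mm
speed, sigma = 2.0, 0.2              # bar speed (mm/s) and width (mm)

# Two causal temporal kernels: a monophasic lowpass, and a biphasic
# kernel whose negative lobe makes the cell favor rising inputs.
tau = 0.04
ku = np.arange(0.0, 0.3, dt)
monophasic = (ku / tau) * np.exp(-ku / tau)
biphasic = monophasic - 0.6 * (ku / (2 * tau)) * np.exp(-ku / (2 * tau))

bar = speed * t
stimulus = np.exp(-(cells[None, :] - bar[:, None]) ** 2 / (2 * sigma**2))

def filter_population(kernel):
    """Causally filter every cell's input with the temporal kernel."""
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel)[: len(t)], 0, stimulus)

i = len(t) // 2                      # snapshot of the population at 0.5 s
for name, kernel in [("monophasic", monophasic), ("biphasic", biphasic)]:
    peak = cells[filter_population(kernel)[i].argmax()]
    print(f"{name:10s} peak at {peak:.2f} mm (bar at {bar[i]:.2f} mm)")
```

    With these assumed parameters, the monophasic filter's processing delay leaves its population peak trailing the bar, while the biphasic filter largely cancels that lag, in the spirit of phase advancing.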

    Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images

    In this paper, we study the challenging problem of predicting the dynamics of objects in static images. Given a query object in an image, our goal is to provide a physical understanding of the object in terms of the forces acting upon it and its long-term motion in response to those forces. Direct and explicit estimation of the forces and the motion of objects from a single image is extremely challenging. We define intermediate physical abstractions called Newtonian scenarios and introduce the Newtonian Neural Network (N^3), which learns to map a single image to a state in a Newtonian scenario. Our experimental evaluations show that our method can reliably predict the dynamics of a query object from a single image. In addition, our approach can provide physical reasoning that supports the predicted dynamics in terms of velocity and force vectors. To spur research in this direction, we compiled the Visual Newtonian Dynamics (VIND) dataset, which includes 6806 videos aligned with Newtonian scenarios represented using game engines, and 4516 still images with their ground-truth dynamics.
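
    The paper's core mapping (single image → state in a Newtonian scenario) can be pictured as an image encoder with two prediction heads. The sketch below is a schematic stand-in, not the published N^3 architecture (which matches image features against game-engine renderings of the scenarios); the scenario and state counts here are placeholders.

```python
import torch
import torch.nn as nn

NUM_SCENARIOS, NUM_STATES = 12, 10   # placeholder counts

class NewtonianSketch(nn.Module):
    """Toy stand-in: embed the image, then predict which Newtonian
    scenario applies and the state (time step) within it."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.scenario_head = nn.Linear(64, NUM_SCENARIOS)
        self.state_head = nn.Linear(64, NUM_STATES)

    def forward(self, image):
        z = self.features(image)
        return self.scenario_head(z), self.state_head(z)

model = NewtonianSketch()
scenario_logits, state_logits = model(torch.randn(1, 3, 128, 128))
print(scenario_logits.shape, state_logits.shape)  # (1, 12) and (1, 10)
```

    Once a scenario and state are chosen, the scenario's game-engine physics would supply the velocity and force vectors the abstract describes.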