
    A Robotic System for Learning Visually-Driven Grasp Planning (Dissertation Proposal)

    We use findings in machine learning, developmental psychology, and neurophysiology to guide a robotic learning system's level of representation both for actions and for percepts. Visually-driven grasping is chosen as the experimental task since it has general applicability and it has been extensively researched from several perspectives. An implementation of a robotic system with a gripper, compliant instrumented wrist, arm, and vision is used to test these ideas. Several sensorimotor primitives (vision segmentation and manipulatory reflexes) are implemented in this system and may be thought of as its innate perceptual and motor abilities. Applying empirical learning techniques to real situations raises important issues such as observation sparsity in high-dimensional spaces, arbitrary underlying functional forms of the reinforcement distribution, and robustness to noise in exemplars. The well-established technique of non-parametric projection pursuit regression (PPR) is used to accomplish reinforcement learning by searching for projections of high-dimensional data sets that capture task invariants. We also pursue the following problem: how can we use human expertise and insight into grasping to train a system to select both appropriate hand preshapes and approaches for a wide variety of objects, and then have it verify and refine its skills through trial and error? To accomplish this learning we propose a new class of Density Adaptive reinforcement learning algorithms. These algorithms use statistical tests to identify possibly interesting regions of the attribute space in which the dynamics of the task change. They automatically concentrate the building of high-resolution descriptions of the reinforcement in those areas, and build low-resolution representations in regions that are either not populated in the given task or are highly uniform in outcome. Additionally, the use of any learning process generally implies failures along the way. The mechanics of the untrained robotic system must therefore be able to tolerate mistakes during learning without damaging itself. We address this through an instrumented, compliant robot wrist that controls impact forces.
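    The PPR step admits a compact illustration. The following is a minimal Python sketch, assuming only NumPy; the random direction search and the binned-mean smoother are simplifications standing in for full PPR direction optimization, and none of the names correspond to the dissertation's actual implementation.

        # A minimal projection pursuit regression (PPR) sketch, assuming only
        # NumPy. f(x) is built as a sum of ridge functions g_m(w_m . x), each
        # fitted to the residual of the previous stages; directions are found
        # here by random search, standing in for PPR's direction optimization.
        import numpy as np

        def fit_ridge_stage(X, r, n_dirs=200, n_bins=20, rng=None):
            """Pick the projection whose binned-mean smoother best fits residual r."""
            rng = rng or np.random.default_rng(0)
            best = None
            for _ in range(n_dirs):
                w = rng.normal(size=X.shape[1])
                w /= np.linalg.norm(w)
                z = X @ w
                edges = np.quantile(z, np.linspace(0.0, 1.0, n_bins + 1))
                idx = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, n_bins - 1)
                means = np.array([r[idx == b].mean() if np.any(idx == b) else 0.0
                                  for b in range(n_bins)])
                sse = np.sum((r - means[idx]) ** 2)   # piecewise-constant ridge fit
                if best is None or sse < best[0]:
                    best = (sse, w, edges, means)
            return best[1:]                           # (w, edges, means)

        def ppr_fit(X, y, n_stages=3):
            """Fit y ~ mean + sum_m g_m(w_m . x) by stagewise residual fitting."""
            stages, r = [], y - y.mean()
            for _ in range(n_stages):
                w, edges, means = fit_ridge_stage(X, r)
                idx = np.clip(np.searchsorted(edges, X @ w, side="right") - 1,
                              0, len(means) - 1)
                r = r - means[idx]
                stages.append((w, edges, means))
            return y.mean(), stages

        def ppr_predict(model, X):
            base, stages = model
            yhat = np.full(X.shape[0], base)
            for w, edges, means in stages:
                idx = np.clip(np.searchsorted(edges, X @ w, side="right") - 1,
                              0, len(means) - 1)
                yhat += means[idx]
            return yhat

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 8))                  # sparse, high-dimensional data
        y = np.sin(X[:, 0] + X[:, 1]) + 0.1 * rng.normal(size=500)
        model = ppr_fit(X, y)
        print(np.mean((ppr_predict(model, X) - y) ** 2))   # training MSE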

    Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks

    A major challenge for the realization of intelligent robots is to supply them with cognitive abilities that allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal cues like gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal task-oriented man-machine communication with respect to dextrous robot manipulation of objects.
    Comment: 7 pages, 8 figures
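    As a toy illustration of how a finite state machine can fuse speech and gesture into one grounded instruction, here is a minimal Python sketch; the event types, fusion window, and all names are hypothetical and do not reflect the GRAVIS system's interfaces.

        # A toy late-fusion finite state machine, assuming hypothetical event
        # types; the interfaces of the real GRAVIS system are not shown here.
        import time
        from dataclasses import dataclass

        @dataclass
        class Event:
            modality: str     # "speech" or "gesture"
            payload: dict
            stamp: float      # seconds

        class FusionFSM:
            """Waits for a verbal command, then for a pointing gesture that
            resolves its object referent within a short fusion window."""
            WINDOW = 2.0      # seconds within which speech and gesture are fused

            def __init__(self):
                self.state, self.pending = "IDLE", None

            def feed(self, ev):
                if self.state == "IDLE" and ev.modality == "speech":
                    self.pending, self.state = ev, "AWAIT_REFERENT"
                elif self.state == "AWAIT_REFERENT" and ev.modality == "gesture":
                    cmd = None
                    if ev.stamp - self.pending.stamp <= self.WINDOW:
                        cmd = {"action": self.pending.payload["verb"],
                               "target": ev.payload["pointed_object"]}
                    self.state, self.pending = "IDLE", None
                    return cmd            # fused, grounded instruction (or None)
                return None

        fsm = FusionFSM()
        fsm.feed(Event("speech", {"verb": "grasp"}, time.time()))
        print(fsm.feed(Event("gesture", {"pointed_object": "red_cube"}, time.time())))
        # -> {'action': 'grasp', 'target': 'red_cube'}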

    Robotic execution for everyday tasks by means of external vision/force control

    In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e., the robot head). Task-oriented grasping algorithms [1] are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach [2], based on external control, is used to first guide the robot hand towards the grasp position and then perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the object being manipulated, independently of camera position. This allows the camera to be moved freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
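    The control idea can be sketched in a few lines. Below is a minimal Python/NumPy illustration of a position-based servoing law with an external force loop, restricted to translation; the gains, frames, and sign conventions are assumptions for illustration, not the paper's controller.

        # Sketch of position-based visual servoing with an external force loop,
        # restricted to translation; gains, frames, and sign conventions are
        # assumptions for illustration, not the paper's controller.
        import numpy as np

        LAMBDA = 0.5           # proportional visual servoing gain
        K_F = 0.002            # compliance gain (m per N): force error -> offset
        F_DES = np.zeros(3)    # desired contact force while operating the door

        def servo_step(hand_pos_cam, goal_pos_cam, f_ext):
            """One control step. Both positions are estimated in the camera
            frame, so the error (and hence the law) does not depend on where
            the freely moving camera happens to be."""
            e = goal_pos_cam - hand_pos_cam
            # External force loop: deviation from the desired contact force
            # shifts the position reference (sign convention assumed here).
            e -= K_F * (f_ext - F_DES)
            return LAMBDA * e  # commanded Cartesian velocity of the hand

        v = servo_step(np.array([0.10, 0.00, 0.30]),     # hand, camera frame (m)
                       np.array([0.00, 0.00, 0.40]),     # grasp goal, camera frame
                       f_ext=np.array([0.0, 5.0, 0.0]))  # measured force (N)
        print(v)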

    Contribution of the posterior parietal cortex in reaching, grasping, and using objects and tools

    Neuropsychological and neuroimaging data suggest a differential contribution of posterior parietal regions during the different components of a transitive gesture. Reaching requires the integration of object location and body position coordinates, and reaching tasks elicit bilateral activation in different foci along the intraparietal sulcus. Grasping requires a visuomotor match between the object's shape and the hand's posture. Lesion studies and neuroimaging confirm the importance of the anterior part of the intraparietal sulcus for human grasping. Reaching and grasping reveal bilateral activation that is generally more prominent on the side contralateral to the hand used or the hemifield stimulated. Purposeful behavior with objects and tools can be assessed in a variety of ways, including actual use, pantomimed use, and pure imagery of manipulation. All tasks have been shown to elicit robust activation over the left parietal cortex in neuroimaging, but lesion studies have not always confirmed these findings. Compared to pantomimed or imagined gestures, actual object and tool use typically produces activation over the left primary somatosensory region. Neuroimaging studies on pantomiming or imagery of tool use in healthy volunteers revealed neural responses in possibly separate foci in the left supramarginal gyrus. In sum, the parietal contribution to reaching and grasping objects seems to depend on a bilateral network of intraparietal foci that appear organized along gradients of sensory and effector preferences. Dorsal and medial parietal cortex appears to contribute to the online monitoring/adjusting of the ongoing prehensile action, whereas the functional use of objects and tools seems to involve the inferior lateral parietal cortex. This functional use reveals a clearly left-lateralized activation pattern that may be tuned to the integration of acquired knowledge in the planning and guidance of the transitive movement.

    Human Management of the Hierarchical System for the Control of Multiple Mobile Robots

    In order to take advantage of autonomous robotic systems, and yet ensure successful completion of all feasible tasks, we propose a mediation hierarchy in which an operator can interact at all system levels. Robotic systems are not robust in handling unmodeled events. Reactive behaviors may be able to guide the robot back into a modeled state so it can continue; reasoning systems may simply fail. Once a system has failed, it is difficult to restart the task from the failed state. Instead, the rule base is revised, programs are altered, and the task is retried from the beginning.

    Towards an Indexical Model of Situated Language Comprehension for Cognitive Agents in Physical Worlds

    We propose a computational model of situated language comprehension based on the Indexical Hypothesis that generates meaning representations by translating amodal linguistic symbols to modal representations of beliefs, knowledge, and experience external to the linguistic system. This Indexical Model incorporates multiple information sources, including perceptions, domain knowledge, and short-term and long-term experiences during comprehension. We show that exploiting diverse information sources can alleviate ambiguities that arise from contextual use of underspecified referring expressions and unexpressed argument alternations of verbs. The model is being used to support linguistic interactions in Rosie, an agent implemented in Soar that learns from instruction.
    Comment: Advances in Cognitive Systems 3 (2014)
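    To make the idea concrete, here is a toy Python sketch of resolving an underspecified referring expression by scoring candidates against several evidence sources; the objects, features, and weights are invented for illustration and bear no relation to Rosie's actual Soar implementation.

        # Toy resolution of an underspecified referring expression ("the red
        # one") by combining evidence sources; all objects, features, and
        # weights are invented for illustration, not Rosie's implementation.
        perception = {"block1": {"color": "red",  "visible": True},
                      "block2": {"color": "red",  "visible": False},
                      "mug1":   {"color": "blue", "visible": True}}
        domain_knowledge = {"block1": {"graspable": True},
                            "block2": {"graspable": True},
                            "mug1":   {"graspable": True}}
        recent_focus = ["block1"]   # short-term experience: just mentioned

        def resolve(description):
            """Rank candidate referents for, e.g., 'pick up the red one'."""
            scores = {}
            for obj, percept in perception.items():
                s = 0.0
                if description.get("color") in (percept["color"], None):
                    s += 1.0                 # perceptual match
                if percept["visible"]:
                    s += 0.5                 # indexable in the current scene
                if domain_knowledge[obj].get("graspable"):
                    s += 0.5                 # object affords the requested action
                if obj in recent_focus:
                    s += 1.0                 # dialogue/experience salience
                scores[obj] = s
            return max(scores, key=scores.get)

        print(resolve({"color": "red"}))     # -> block1: red, visible, in focus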

    Biologically-Inspired 3D Grasp Synthesis Based on Visual Exploration

    Object grasping is a typical human ability which is widely studied from both a biological and an engineering point of view. This paper presents an approach to grasp synthesis inspired by the human neurophysiology of action-oriented vision. Our grasp synthesis method is built upon an architecture which, taking into account the differences between robotic and biological systems, proposes an adaptation of brain models to the peculiarities of robotic setups. The modularity of the architecture allows for scalability and integration of complex robotic tasks. The grasp synthesis is designed to be integrated with the extraction of a 3D object description, so that the visual analysis of the object is actively driven by the needs of the grasp synthesis: visual reconstruction is performed incrementally and selectively on the regions of the object that are considered most interesting for grasping.
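    The selective-reconstruction loop can be caricatured in a few lines of Python; the region descriptors and the grasp-utility heuristic below are invented for illustration and only echo the paper's action-oriented idea, not its actual architecture.

        # Caricature of grasp-driven selective reconstruction; the region
        # descriptors and utility heuristic are invented for illustration.
        def grasp_utility(region):
            """Interest of a region for grasping: prefer patches likely to
            yield antipodal contacts that are still only coarsely modeled."""
            return region["antipodal_score"] * (1.0 - region["resolution"])

        def explore(regions, budget=3):
            """Spend the visual budget refining the most grasp-relevant regions."""
            visited = []
            for _ in range(budget):
                target = max(regions, key=grasp_utility)
                target["resolution"] = min(1.0, target["resolution"] + 0.5)
                visited.append(target["name"])
            return visited    # order in which regions were reconstructed

        regions = [{"name": "handle", "antipodal_score": 0.9, "resolution": 0.1},
                   {"name": "body",   "antipodal_score": 0.4, "resolution": 0.1},
                   {"name": "base",   "antipodal_score": 0.2, "resolution": 0.1}]
        print(explore(regions))   # -> ['handle', 'handle', 'body']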

    Automated sequence and motion planning for robotic spatial extrusion of 3D trusses

    While robotic spatial extrusion has demonstrated a new and efficient means to fabricate 3D truss structures at architectural scale, a major challenge remains in automatically planning the extrusion sequence and robotic motion for trusses with unconstrained topologies. This paper presents the first attempt in the field to rigorously formulate the extrusion sequence and motion planning (SAMP) problem, using a CSP encoding. Furthermore, this research proposes a new hierarchical planning framework to solve extrusion SAMP problems, which usually have a long planning horizon and 3D configuration complexity. By decoupling sequence and motion planning, the planning framework is able to efficiently solve for the extrusion sequence, end-effector poses, joint configurations, and transition trajectories for spatial trusses with nonstandard topologies. This paper also presents the first detailed computation data to reveal the runtime bottleneck in solving SAMP problems, which provides insight and a comparison baseline for future algorithmic development. Together with the algorithmic results, this paper also presents an open-source and modularized software implementation called Choreo that is machine-agnostic. To demonstrate the power of this algorithmic framework, three case studies, including real fabrication and simulation results, are presented.
    Comment: 24 pages, 16 figures
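    The sequencing half of the problem can be illustrated with a toy constraint. The Python sketch below greedily orders truss edges so that each is connected to the ground or to already-built structure; this simplified connectivity rule stands in for Choreo's CSP formulation, which also handles collision and reachability constraints.

        # Toy version of the sequencing half of SAMP under one simplified
        # constraint: an edge may be extruded only when at least one endpoint
        # is grounded or already connected. This greedy propagation stands in
        # for Choreo's CSP encoding, which also checks collisions and reachability.
        def plan_sequence(edges, grounded):
            """edges: list of (node_a, node_b) truss members; grounded: ground nodes."""
            built_nodes, sequence, remaining = set(grounded), [], list(edges)
            while remaining:
                progress = False
                for e in list(remaining):
                    if e[0] in built_nodes or e[1] in built_nodes:  # connectivity test
                        sequence.append(e)
                        built_nodes.update(e)
                        remaining.remove(e)
                        progress = True
                if not progress:
                    raise ValueError("no feasible extrusion order under this constraint")
            return sequence

        # A small truss on two ground nodes:
        print(plan_sequence([("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")],
                            grounded={"a", "d"}))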