
    Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks

    A major challenge for the realization of intelligent robots is to supply them with cognitive abilities that allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal cues such as gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal, task-oriented man-machine communication with respect to dextrous robot manipulation of objects. (Comment: 7 pages, 8 figures)
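    The finite-state component of such a hybrid architecture can be illustrated with a minimal sketch: a fusion state machine that waits until both a spoken object reference and a pointing gesture have arrived, in either order, before emitting a fused grasp command. All class, state, and event names below are illustrative assumptions, not taken from the GRAVIS system.

```python
# Hypothetical sketch of finite-state modality fusion: integrate a speech
# event and a gesture event into one grasp command (names illustrative).

class FusionFSM:
    """Waits for a spoken object reference and a pointing gesture,
    then emits a fused grasp command."""

    def __init__(self):
        self.state = "IDLE"
        self.spoken_object = None
        self.pointed_region = None

    def on_event(self, kind, payload):
        # Accept events from either modality, in any order.
        if kind == "speech":
            self.spoken_object = payload      # e.g. "red cube"
        elif kind == "gesture":
            self.pointed_region = payload     # e.g. (x, y) image coordinates
        missing = self.spoken_object is None or self.pointed_region is None
        self.state = "WAITING" if missing else "READY"
        if self.state == "READY":
            return ("grasp", self.spoken_object, self.pointed_region)
        return None

fsm = FusionFSM()
assert fsm.on_event("speech", "red cube") is None       # still waiting for gesture
cmd = fsm.on_event("gesture", (120, 84))                # both modalities present
print(cmd)  # ('grasp', 'red cube', (120, 84))
```

    A real fusion module would also resolve conflicts between modalities and time out stale events; the sketch only shows the state-machine skeleton.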

    Autonomous Mechanical Assembly on the Space Shuttle: An Overview

    The space shuttle will be equipped with a pair of 50 ft. manipulators used to handle payloads and to perform mechanical assembly operations. Although current plans call for these manipulators to be operated by a human teleoperator, the possibility of using results from robotics and machine intelligence to automate this shuttle assembly system was investigated. The major components of an autonomous mechanical assembly system are examined, along with the technology base upon which they depend. The state of the art in advanced automation is also assessed.

    Cooperative and Multimodal Capabilities Enhancement in the CERNTAURO Human–Robot Interface for Hazardous and Underwater Scenarios

    The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks potentially dangerous for humans. However, currently available robotic systems lack the level of usability that would allow inexperienced operators to accomplish complex tasks. Moreover, the task’s complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting large objects). In this paper, a system allowing an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Organization for Nuclear Research, to perform remote interventions in its particle accelerator complex, as part of the CERNTAURO project. The modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface has been validated at CERN in its Large Hadron Collider (LHC) mockup using a team of two mobile robotic platforms, each equipped with a robotic manipulator. Moreover, great similarities were identified between the CERNTAURO and TWINBOT projects, which aim to create usable robotic systems for underwater manipulation. Therefore, the cooperative behaviours were also validated within a multi-robot pipe transport scenario in a simulated underwater environment, experimenting with more advanced vision techniques. The cooperative teleoperation can be coupled with additional assisted tools such as vision-based tracking, grasp determination for metallic objects, and communication protocol design. The results show that the cooperative behaviours enable a single user to carry out a robotic intervention with more than one robot in a safer way.
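    A common way to realize an assisted cooperative transport behaviour of this kind is a leader–follower scheme. The sketch below is a hypothetical illustration, not part of the CERNTAURO interface: the follower copies the leader's commanded velocity while a proportional term corrects drift in the fixed grasp offset between the two platforms. Function names and the gain value are assumptions.

```python
# Hypothetical leader-follower sketch for cooperative transport: the follower
# tracks the leader's velocity plus a proportional correction that keeps the
# relative grasp offset constant (all names and gains are illustrative).
import numpy as np

def follower_command(leader_vel, leader_pos, follower_pos, offset, k=1.0):
    """Velocity command that keeps the follower at leader_pos + offset."""
    error = (leader_pos + offset) - follower_pos   # drift from nominal offset
    return leader_vel + k * error

# Follower has drifted 0.1 m past its nominal 1 m offset behind the leader.
cmd = follower_command(np.array([0.2, 0.0]),        # leader velocity (m/s)
                       np.array([0.0, 0.0]),        # leader position
                       np.array([0.0, -1.1]),       # follower position
                       offset=np.array([0.0, -1.0]))
print(cmd)  # leader velocity plus a small correction back toward the offset
```

    In a real system the correction would act on full 6-DOF poses and respect force limits on the carried object; the proportional term only shows the basic idea.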

    Low-cost Sensor Glove with Force Feedback for Learning from Demonstrations using Probabilistic Trajectory Representations

    Sensor gloves are popular input devices for a large variety of applications, including health monitoring, control of musical instruments, learning sign language, dexterous computer interfaces, and tele-operating robot hands. Many commercial products as well as low-cost open source projects have been developed. We discuss here how low-cost (approx. 250 EUR) sensor gloves with force feedback can be built, provide an open source software interface for Matlab, and present first results in learning object manipulation skills through imitation learning on the humanoid robot iCub. (Comment: 3 pages, 3 figures. Workshop paper of the International Conference on Robotics and Automation, ICRA 2015.)
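    As a rough illustration of learning a probabilistic trajectory representation from demonstrations, the sketch below fits a per-timestep mean and standard deviation over time-aligned glove trajectories. This is a simplified stand-in under assumed data shapes, not the representation actually used in the paper.

```python
# Simplified probabilistic trajectory model: per-timestep mean and standard
# deviation over several time-aligned demonstrations (a stand-in for the
# full probabilistic trajectory representation described in the paper).
import numpy as np

def fit_trajectory_distribution(demos):
    """demos: array-like of shape (n_demos, n_steps), e.g. glove joint angles."""
    demos = np.asarray(demos, dtype=float)
    mean = demos.mean(axis=0)   # average motion across demonstrations
    std = demos.std(axis=0)     # per-timestep variability
    return mean, std

# Three noisy demonstrations of the same reaching motion.
t = np.linspace(0.0, 1.0, 50)
demos = [np.sin(np.pi * t) + 0.05 * np.random.randn(50) for _ in range(3)]
mean, std = fit_trajectory_distribution(demos)
print(mean.shape, std.shape)  # (50,) (50,)
```

    The per-timestep variance is what lets a robot reproduce a demonstrated skill compliantly: low variance marks parts of the motion that must be tracked precisely, high variance marks parts where deviation is acceptable.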

    Learning for a robot: deep reinforcement learning, imitation learning, transfer learning

    Dexterous manipulation is an important part of realizing robot intelligence, but current manipulators can only perform simple tasks such as sorting and packing in structured environments. In view of this problem, this paper presents a state-of-the-art survey of intelligent robots with the capability of autonomous decision-making and learning. The paper first reviews the main achievements in robot research, which were largely based on breakthroughs in automatic control and mechanical hardware. With the evolution of artificial intelligence, much research has made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for highly complex tasks to be performed by robots. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analyzed thoroughly, and future research challenges are proposed.
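    The reinforcement-learning update underlying the deep variants surveyed here can be shown with minimal tabular Q-learning on a toy chain task; the environment, parameters, and function name below are illustrative assumptions, not from the survey.

```python
# Minimal tabular Q-learning sketch: the agent learns to walk right along a
# chain of states to reach a rewarded goal state (toy illustration only).
import random

def q_learning(n_states=5, episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.2):
    # Toy chain MDP: action 1 moves one state right, action 0 stays put;
    # reward 1.0 is given on reaching the rightmost (goal) state.
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(3 * n_states):                  # step budget per episode
            if random.random() < epsilon:              # epsilon-greedy exploration
                a = random.randrange(2)
            else:
                best = max(Q[s])
                a = random.choice([i for i, q in enumerate(Q[s]) if q == best])
            s2 = min(s + 1, n_states - 1) if a == 1 else s
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward the bootstrapped target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            if s2 == n_states - 1:
                break
            s = s2
    return Q

random.seed(0)
Q = q_learning()
# The learned policy prefers moving right in every non-goal state.
assert all(q[1] > q[0] for q in Q[:-1])
```

    Deep reinforcement learning replaces the table `Q` with a neural network and the exact `max` bootstrap with gradient steps, but the temporal-difference update is the same idea.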

    Dynamic field theory (DFT): applications in Cognitive Science and Robotics

    Review article about Dynamic Field Theory and its applications in cognitive science and robotics. euCognition: the European Network for the Advancement of Artificial Cognitive Systems.
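    At the core of Dynamic Field Theory is an Amari-style neural field equation, τu̇(x) = −u(x) + h + s(x) + ∫ w(x−x′) f(u(x′)) dx′, in which local excitation and broader inhibition let a localized input induce a self-stabilized activation peak. The sketch below simulates a one-dimensional discretization; all parameter values are illustrative assumptions.

```python
# Minimal one-dimensional dynamic neural field (Amari-style) simulation:
# a localized stimulus drives a self-stabilized activation peak
# (kernel widths, gains, and resting level are illustrative).
import numpy as np

def simulate_field(n=101, steps=300, dt=0.1, tau=1.0, h=-2.0):
    x = np.linspace(-10.0, 10.0, n)
    dx = x[1] - x[0]
    # Interaction kernel: narrow local excitation, broad surround inhibition.
    w = 4.0 * np.exp(-x**2 / 2.0) - 1.5 * np.exp(-x**2 / 18.0)
    s = 3.0 * np.exp(-(x - 2.0)**2 / 2.0)          # localized input at x = 2
    u = np.full(n, h)                              # field starts at resting level
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-4.0 * u))         # sigmoidal output function
        conv = np.convolve(f, w, mode="same") * dx # lateral interaction term
        u += dt / tau * (-u + h + s + conv)        # Euler step of the field
    return x, u

x, u = simulate_field()
# The peak of the relaxed field sits at the stimulus location x = 2.
print(round(float(x[np.argmax(u)]), 1))
```

    The peak persists because local excitation keeps it active while surround inhibition stops it from spreading; this "decision-like" stabilization is what DFT uses to model cognitive phenomena such as working memory and selection.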