571 research outputs found

    Robot Composite Learning and the Nunchaku Flipping Challenge

    Advanced motor skills are essential for robots to physically coexist with humans. Much research on robot dynamics and control has achieved success on high-performance robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has made great progress, but still has limitations in handling dynamic skills and compound actions. In this paper, we present a composite learning scheme that goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation. The method tackles advanced motor skills that require dynamic time-critical maneuvers, complex contact control, and handling of partly soft, partly rigid objects. We also introduce the "nunchaku flipping challenge", an extreme test that places hard requirements on all three of these aspects. Continuing from our previous presentations, this paper introduces the latest update of the composite learning scheme and the physical success of the nunchaku flipping challenge.
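
    The three channels named above suggest a simple program structure. The following is a minimal illustrative sketch, not the authors' implementation; the class, its methods, and the blending rule are all assumptions made for exposition.

        import numpy as np

        # Hypothetical skeleton of composite learning: a policy is constrained
        # by human *definition*, seeded from human *demonstration*, and nudged
        # by scalar human *evaluation* scores.
        class CompositeLearner:
            def __init__(self, phase_names):
                self.phase_names = phase_names   # human definition: task phases
                self.policy = None               # here: a mean joint trajectory

            def learn_from_demonstration(self, trajectories):
                # trajectories: list of (T, dof) arrays with equal length T
                self.policy = np.mean(np.stack(trajectories), axis=0)

            def refine_from_evaluation(self, rollout, score):
                # Blend a rollout into the policy weighted by its score in [0, 1].
                self.policy = (1.0 - score) * self.policy + score * rollout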

    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the camera extrinsic parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves an overall sub-millimetre accuracy of less than 0.3 millimetres while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
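
    The paper's calibration is a closed-form solution derived from the head's forward kinematics; as a generic point of comparison, a standard eye-in-hand calibration can be computed with OpenCV's cv2.calibrateHandEye, sketched below. The input lists are assumed to come from the robot's forward kinematics and from detecting a calibration target in each camera frame.

        import cv2
        import numpy as np

        # Generic hand-eye calibration sketch (Tsai's method via OpenCV), shown
        # only to illustrate the problem; it is not the paper's own solution.
        def hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
            R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
                R_gripper2base, t_gripper2base,
                R_target2cam, t_target2cam,
                method=cv2.CALIB_HAND_EYE_TSAI)
            T = np.eye(4)                       # 4x4 camera-to-gripper transform
            T[:3, :3] = R_cam2gripper
            T[:3, 3] = t_cam2gripper.ravel()
            return T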

    Robotic cloth manipulation for clothing assistance task using Dynamic Movement Primitives

    The need for robotic clothing assistance in the field of assistive robotics is growing, as dressing is one of the most basic and essential activities in the daily life of elderly and disabled people. In this study, we investigate the applicability of Dynamic Movement Primitives (DMP) as a task parameterization model for performing a clothing assistance task. The robotic cloth manipulation task consists of putting a clothing article on both arms. The robot trajectory varies significantly across postures, and various failure scenarios can arise during cooperative manipulation of a non-rigid and highly deformable clothing article. We performed experiments on a soft mannequin instead of a human. Results show that DMPs are able to generalize the movement trajectory for a modified posture. Presented at the 3rd International Conference of the Robotics Society of India (AIR '17: Advances in Robotics), June 28 - July 2, 2017, New Delhi, India.
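
    For readers unfamiliar with DMPs, the sketch below shows the standard one-dimensional discrete formulation (a spring-damper system pulled toward a goal g plus a learned forcing term). It is a textbook form, not necessarily the exact parameterization used in the study.

        import numpy as np

        # Minimal discrete DMP rollout: changing y0 or g generalizes the motion
        # while the forcing term preserves its learned shape.
        def dmp_rollout(y0, g, forcing, tau=1.0, dt=0.01, T=1.0,
                        alpha=25.0, beta=6.25, alpha_s=3.0):
            y, v, s = y0, 0.0, 1.0
            ys = []
            for _ in range(int(T / dt)):
                f = forcing(s) * (g - y0)       # forcing term, scaled by amplitude
                v += dt / tau * (alpha * (beta * (g - y) - v) + f)
                y += dt / tau * v
                s += dt / tau * (-alpha_s * s)  # canonical system: s decays 1 -> 0
                ys.append(y)
            return np.array(ys)

        # A zero forcing term reduces the DMP to a plain point attractor toward g:
        traj = dmp_rollout(y0=0.0, g=1.0, forcing=lambda s: 0.0)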

    Design and Development of a Robot Guided Rehabilitation Scheme for Upper Extremity Rehabilitation

    To rehabilitate individuals with impaired upper-limb function, we have designed and developed a robot-guided rehabilitation scheme. A humanoid robot, NAO, was used for this purpose. NAO has 25 degrees of freedom. With its sensors and actuators, it can walk forward and backward, sit down and stand up, wave its hand, speak to an audience, feel touch, and recognize the person it is meeting. These qualities make NAO a suitable coach to guide subjects through rehabilitation exercises. To demonstrate rehabilitation exercises with NAO, a library of recommended rehabilitation exercises involving shoulder (i.e., abduction/adduction, vertical flexion/extension, and internal/external rotation) and elbow (i.e., flexion/extension) joint movements was formed in Choregraphe (NAO's graphical programming interface). In experiments, NAO was maneuvered to instruct and demonstrate the exercises from this library (NRL). A complex 'touch and play' game was also developed in which NAO plays with the subject, representing a multi-joint movement exercise. To develop the proposed tele-rehabilitation scheme, a kinematic model of the human upper extremity was developed based on modified Denavit-Hartenberg notation. A complete geometric solution was developed to find a unique inverse kinematic solution for the human upper extremity from Kinect data. In the tele-rehabilitation scheme, a therapist can remotely tele-operate NAO in real time to instruct and demonstrate different arm movement exercises to subjects. A Kinect sensor was used in this scheme to capture the tele-operator's kinematic data. Experimental results reveal that NAO can be tele-operated successfully to instruct and demonstrate subjects in performing different arm movement exercises. A control algorithm was developed in MATLAB for the proposed robot-guided supervised rehabilitation scheme. Experimental results show that NAO and the Kinect sensor can effectively be used to supervise and guide subjects in performing active rehabilitation exercises for shoulder and elbow joint movements.
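
    As a small illustration of the geometric approach (not the thesis's complete inverse-kinematic solution), an elbow flexion angle can be recovered from three Kinect joint positions as the angle between the upper-arm and forearm vectors:

        import numpy as np

        # Elbow flexion angle from Kinect joint positions (illustrative sketch).
        def elbow_flexion(shoulder, elbow, wrist):
            u = np.asarray(shoulder) - np.asarray(elbow)  # elbow -> shoulder
            f = np.asarray(wrist) - np.asarray(elbow)     # elbow -> wrist
            cos_a = np.dot(u, f) / (np.linalg.norm(u) * np.linalg.norm(f))
            return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

        # A fully extended arm gives ~180 degrees:
        print(elbow_flexion([0, 0, 0], [0.3, 0, 0], [0.6, 0, 0]))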

    Development of a dual-arm robotic system for gesture imitation

    Master's thesis in Industrial Automation Engineering. Research in robotics has been playing an important role in the field of human-robot interaction. This interaction has evolved in several areas, such as speech recognition, walking, gesture imitation, exploration, and cooperative work. Imitation learning has several advantages over conventional programming methods because it allows the transfer of new skills to the robot through a more natural interaction. This work aims to implement a dual-arm manipulation system able to reproduce human gestures in real time, serving as the basis for a system capable of learning through gestural imitation of a human. The robotic arms are fixed to a mechanical structure, similar to the human torso, developed for this purpose. The demonstrations are obtained from a human motion capture system based on the Kinect sensor. The captured movements are reproduced on two Cyton Gamma 1500 robotic arms subject to physical constraints and workspace limits, while avoiding self-collisions and singular configurations. The kinematic study of the robot arms provides the basis for the implementation of kinematic control algorithms, developed in a modular fashion so that the system supports several independent operating modes. The software development is supported by the Robot Operating System (ROS) framework, following a philosophy of modular and open-ended development. Several experimental tests were conducted to validate the proposed solutions and to evaluate the system's performance in different situations, including those related to joint physical limits, velocity limits, workspace limits, collisions, and singularity avoidance.
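
    A safety layer of the kind described can be sketched as follows; this is an assumed form for illustration, not the thesis code. Each joint command mapped from the human is clipped to the robot's position limits and rate-limited against its velocity bound.

        import numpy as np

        # Clip a mapped joint command to position limits, then rate-limit it so
        # the arm cannot exceed its per-joint velocity bound (illustrative).
        def safe_command(q_target, q_prev, q_min, q_max, v_max, dt):
            q = np.clip(q_target, q_min, q_max)                # position limits
            dq = np.clip(q - q_prev, -v_max * dt, v_max * dt)  # velocity limit
            return q_prev + dq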

    Motion and emotion estimation for robotic autism intervention.

    Robots have recently emerged as a novel approach to treating autism spectrum disorder (ASD). A robot can be programmed to interact with children with ASD in order to reinforce positive social skills in a non-threatening environment. In prior work, robots were employed in interaction sessions with children with ASD, but their sensory and learning abilities were limited, and a human therapist was heavily involved in "puppeteering" the robot. The objective of this work is to create a next-generation autism robot that includes several interactive and decision-making capabilities not found in prior technology. Two of the main features this robot needs are the ability to quantitatively estimate the patient's motion performance and to correctly classify their emotions. This would allow for the potential diagnosis of autism and help autistic patients practice their skills. Therefore, in this thesis, we engineered components for a human-robot interaction system and validated them in experiments with the robots Baxter and Zeno, the sensors Empatica E4 and Kinect, and the open-source pose estimation software OpenPose. The Empatica E4 wristband is a wearable device that collects physiological measurements in real time from a test subject. Measurements were collected from ASD patients during human-robot interaction activities. Using this data and attentiveness labels from a trained coder, a classifier was developed that predicts the patient's level of engagement. The classifier outputs this prediction to a robot or supervising adult, informing decisions during intervention activities to keep the attention of the patient with autism. The CMU Perceptual Computing Lab's OpenPose software package enables body, face, and hand tracking using an RGB camera (e.g., a web camera) or an RGB-D camera (e.g., Microsoft Kinect). Integrating OpenPose with a robot allows the robot to collect information on user motion intent and perform motion imitation. In this work, we developed such a teleoperation interface with the Baxter robot. Finally, a novel algorithm, called Segment-based Online Dynamic Time Warping (SoDTW), and an accompanying metric are proposed to help in the diagnosis of ASD. The social robot Zeno, a childlike robot developed by Hanson Robotics, was used to test this algorithm and metric. Using the proposed algorithm, it is possible to classify a subject's motion into different speeds or to use the resulting SoDTW score to evaluate the subject's abilities.
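
    SoDTW itself is the thesis's novel segment-based online variant; as background, the classic offline dynamic time warping score it builds on can be computed as below (a minimal sketch; lower scores mean more similar motions).

        import numpy as np

        # Classic DTW between two 1-D sequences; SoDTW extends this idea to
        # segment-based online operation, which this sketch does not implement.
        def dtw(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # A slowed copy of a motion still aligns closely (score 0.0 here):
        print(dtw([0, 1, 2, 3], [0, 0, 1, 1, 2, 2, 3, 3]))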

    Robot skill learning through human demonstration and interaction

    Nowadays robots are increasingly involved in more complex and less structured tasks. Therefore, it is highly desirable to develop new approaches to fast robot skill acquisition. This research aims to develop an overall framework for robot skill learning through human demonstration and interaction. Through low-level demonstration and interaction with humans, the robot can learn basic skills, which are treated as primitive actions. In high-level learning, the complex skills demonstrated by the human can be automatically translated into skill scripts that are executed by the robot. This dissertation summarizes my major research activities in robot skill learning. First, a framework for Programming by Demonstration (PbD) with reinforcement learning for human-robot collaborative manipulation tasks is described. With this framework, the robot can learn low-level skills such as collaborating with a human to lift a table successfully and efficiently. Second, to develop a high-level skill acquisition system, we explore the use of a 3D sensor to recognize human actions. A Kinect-based action recognition system is implemented that considers both object/action dependencies and sequential constraints. Third, we extend the action recognition framework by fusing information from multimodal sensors, which enables recognition of fine assembly actions. Fourth, a Portable Assembly Demonstration (PAD) system is built that can automatically generate skill scripts from human demonstration. The skill script includes the object type, the tool, the action used, and the assembly state. Finally, the generated skill scripts are executed by a dual-arm robot. The proposed framework was experimentally evaluated.
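
    The abstract names the four fields a skill script carries; a minimal record type with assumed field names could look like this:

        from dataclasses import dataclass
        from typing import List

        # One step of a skill script: the four fields follow the abstract
        # (object type, tool, action, assembly state); names are assumptions.
        @dataclass
        class SkillStep:
            object_type: str     # part being manipulated
            tool: str            # tool used, if any
            action: str          # recognized primitive action
            assembly_state: str  # resulting state of the assembly

        # A demonstration would be translated into an ordered script of steps:
        script: List[SkillStep] = [
            SkillStep("bracket", "none", "pick", "bracket_in_hand"),
            SkillStep("bracket", "screwdriver", "fasten", "bracket_mounted"),
        ]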

    High-precision grasping and placing for mobile robots

    This work presents a manipulation system for multiple labware items in life science laboratories using H20 mobile robots. The H20 robot is equipped with a Kinect V2 sensor to identify and estimate the position of the required labware on the workbench. Local feature recognition based on the SURF algorithm is used; the recognition process is performed both for the labware to be grasped and for the workbench holder. Different grippers and labware containers were designed to manipulate labware of different weights and to ensure safe transportation.
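
    A generic sketch of such a SURF recognition step is shown below (not the system's code). SURF lives in the opencv-contrib nonfree modules; the file names and the acceptance threshold here are assumptions.

        import cv2

        # Match a stored labware template against the current camera view using
        # SURF features and Lowe's ratio test (generic sketch, assumed file names).
        template = cv2.imread("labware_template.png", cv2.IMREAD_GRAYSCALE)
        scene = cv2.imread("workbench_view.png", cv2.IMREAD_GRAYSCALE)

        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp1, des1 = surf.detectAndCompute(template, None)
        kp2, des2 = surf.detectAndCompute(scene, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        recognized = len(good) > 20   # assumed acceptance threshold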