
    TeLeMan: Teleoperation for Legged Robot Loco-Manipulation using Wearable IMU-based Motion Capture

    Human life is invaluable. When dangerous or life-threatening tasks need to be completed, robotic platforms could be ideal replacements for human operators. In this work we focus on one such task: Explosive Ordnance Disposal (EOD). Robot telepresence has the potential to provide safety solutions, given that mobile robots have shown robust capabilities when operating in a range of environments. However, full autonomy remains challenging and risky at this stage compared to human operation, and teleoperation offers a compromise between the two. In this paper, we present a relatively low-cost solution for telepresence and robot teleoperation to assist with Explosive Ordnance Disposal, using a legged manipulator (i.e., a quadruped robot equipped with a manipulator arm and RGB-D sensing). We propose a novel system integration for the non-trivial problem of quadruped-manipulator whole-body control. Our system is based on a wearable IMU-based motion capture system used for teleoperation and a VR headset for visual telepresence. We experimentally validate our method in the real world on loco-manipulation tasks that require whole-body robot control and visual telepresence.
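    The core of the system above is retargeting the operator's IMU-tracked motion to the legged manipulator. As a rough illustration of the kind of mapping involved, here is a minimal sketch of retargeting the operator's wrist pose to an end-effector target for the arm; the frames, names, and scale factor are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical retargeting step: map the operator's wrist pose (from the
# IMU mocap suit, expressed relative to the pelvis) to an end-effector
# target for the arm mounted on the quadruped. Names and the scale factor
# are illustrative, not from the paper.

ARM_SCALE = 0.7  # human arm reach -> robot arm reach (assumed ratio)

def retarget_wrist(p_wrist_pelvis, R_wrist_pelvis, T_base_world):
    """Return a 4x4 end-effector target pose in the robot's world frame.

    p_wrist_pelvis : (3,) wrist position relative to the operator's pelvis [m]
    R_wrist_pelvis : (3,3) wrist orientation relative to the pelvis
    T_base_world   : (4,4) current pose of the robot base in the world
    """
    T_target_base = np.eye(4)
    T_target_base[:3, :3] = R_wrist_pelvis             # keep orientation as-is
    T_target_base[:3, 3] = ARM_SCALE * p_wrist_pelvis  # scale translation to robot reach
    return T_base_world @ T_target_base
```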

    Force-Guided High-Precision Grasping Control of Fragile and Deformable Objects Using sEMG-Based Force Prediction

    Regulating contact forces with high precision is crucial for grasping and manipulating fragile or deformable objects. We aim to harness the dexterity of human hands to regulate the contact forces of robotic hands, exploiting human sensory-motor synergies in a wearable and non-invasive way. We extracted force information from the electrical activity of skeletal muscles during voluntary contraction through surface electromyography (sEMG). We built a neural-network regression model to predict the gripping force from the preprocessed sEMG signals and achieved high accuracy (R² = 0.982). Based on the force command predicted from human muscles, we developed a force-guided control framework in which force control is realized via an admittance controller that tracks the predicted gripping-force reference to grasp delicate and deformable objects. We demonstrated the effectiveness of the proposed method on a set of representative fragile and deformable objects from daily life, all of which were grasped successfully without any damage or deformation.
    Comment: 8 pages, 11 figures, to be published in IEEE Robotics and Automation Letters. For the attached video, see https://youtu.be/0AotKaWFJD
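    The control side of this pipeline is an admittance controller that tracks the sEMG-predicted force. A minimal one-degree-of-freedom sketch of such a law is below; the virtual mass and damping values, the gripper interface, and the sign convention are illustrative assumptions, not the authors' implementation.

```python
# Minimal discrete-time admittance law for a 1-DoF gripper. Here x is the
# fingertip displacement along the closing direction: the gripper advances
# until the measured contact force tracks the sEMG-predicted reference
# f_ref. Gains and names are illustrative, not from the paper.

M, D = 0.5, 8.0     # virtual mass [kg] and damping [N*s/m] (assumed)
DT = 0.001          # control period [s] (assumed)

class AdmittanceGrip:
    def __init__(self, x0=0.0):
        self.x, self.xd = x0, 0.0   # displacement [m] and its rate

    def step(self, f_ref, f_meas):
        # The force error drives virtual dynamics M*xdd + D*xd = f_err, so the
        # fingers advance (or yield) until contact force matches the reference.
        f_err = f_ref - f_meas
        xdd = (f_err - D * self.xd) / M
        self.xd += xdd * DT
        self.x += self.xd * DT
        return self.x               # position command for the low-level gripper servo
```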

    Whole-Body Teleoperation of Humanoid Robots (Télé-opération Corps Complet de Robots Humanoïdes)

    This thesis investigates systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible for humans (e.g., disaster-response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamic balance while trying to follow the human references. In addition, the human operator needs some feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate the robot avatar in an effective way.

    Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robot. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of these parameters.

    In this thesis, we propose a teleoperation system that has been tested on the humanoid robot iCub. We use an inertial-technology-based motion capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot's cameras to provide visual feedback. We first translate the human movements into equivalent robot ones by developing a motion retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implement a whole-body controller to enable the robot to track the retargeted human motion. The controller is subsequently optimized in simulation to achieve good tracking of the whole-body reference movements, by means of a multi-objective stochastic optimizer, which allowed us to find robust solutions that work on the real robot in a few trials.

    To teleoperate walking motions, we implement a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrate this setting into the teleoperation system, which allows the user to switch between the two different modes.

    A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduce a system in which the humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot actually executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model trained on past trajectories and conditioned on the last received commands.
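    To make the delay-hiding idea concrete, here is a minimal sketch of a robot-side predictor that autoregressively extrapolates the last received commands across the delay window; the model interface and the buffering scheme are assumptions for illustration, not the thesis implementation.

```python
from collections import deque

# Sketch of delay compensation by command prediction: the robot holds a
# buffer of the last *received* (hence delayed) operator commands and rolls
# a learned model forward across the delay to estimate the command the
# operator is issuing right now. `model.predict(window)` is an assumed
# interface for a regressor trained on past command trajectories.

class CommandPredictor:
    def __init__(self, model, history_len, delay_steps):
        self.model = model
        self.history = deque(maxlen=history_len)  # last received commands
        self.delay_steps = delay_steps            # delay in control steps

    def on_command_received(self, cmd):
        self.history.append(cmd)

    def current_estimate(self):
        # Roll the model forward autoregressively across the delay window:
        # each step predicts the next command from the most recent ones.
        window = list(self.history)
        for _ in range(self.delay_steps):
            window.append(self.model.predict(window[-len(self.history):]))
        return window[-1]  # estimate of the command being issued right now
```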

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote-environment visualisation in virtual reality, the effect of the reconstruction scale of the remote environment on the operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often contain distortions and occlusions, making it difficult to represent objects' textures accurately; this can lead to poor decision-making during teleoperation if objects are misrepresented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point-cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point-cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual-world scaling on teleoperation flow. The first study investigated rate-mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual-world scale; variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual-world scale in supervised control, comparing the scales participants chose at the beginning and end of a three-day experiment: as operators became better at the task they, as a group, adopted a different virtual-world scale, and participants' prior video-gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by the operators' visual attention patterns, and showed how their visual priorities shift as they become better at teleoperating the robot. The study also demonstrated that operators' prior video-gaming experience affects both their ability to teleoperate the robot and their visual attention behaviour.
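    To make the first study's two rate-mode mappings concrete, a minimal sketch follows; the speed limit and the inverse-scale law are illustrative assumptions, not the framework's actual implementation.

```python
# Sketch of the two rate-mode mappings compared in the first study above:
# joystick deflection u in [-1, 1] commands end-effector speed. Under the
# constant mapping the gain is fixed; under the variable mapping it shrinks
# as the operator zooms the virtual world in (scale s > 1), giving finer
# control when working close up. V_MAX and the scaling law are assumptions.

V_MAX = 0.25  # top end-effector speed [m/s] (illustrative)

def rate_constant(u):
    return V_MAX * u

def rate_variable(u, s):
    # s = virtual-world scale chosen by the operator (1.0 = life size)
    return (V_MAX / s) * u
```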

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve on the current approach to teleoperating heterogeneous robot teams.

    An Augmented Reality Based Human-Robot Interaction Interface Using Kalman Filter Sensor Fusion

    In this paper, an Augmented Reality (AR) application for the control and adjustment of robots is developed, with the aim of making interaction with and adjustment of robots from a remote location easier and more accurate. A LeapMotion-sensor-based controller is used to track the movement of the operator's hands; its data allow gestures and the position of the palm's central point to be detected and tracked. A Kinect V2 camera measures the corresponding motion velocities in the x, y, and z directions after our post-processing algorithm is applied. Unreal Engine 4 is used to create an AR environment in which the user can monitor the control process immersively. A Kalman filtering (KF) algorithm fuses the position signals from the LeapMotion sensor with the velocity signals from the Kinect camera. The fused, optimal data are sent via the User Datagram Protocol (UDP) to teleoperate a Baxter robot in real time. Several experiments have been conducted to validate the proposed method.
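    As an illustration of the fusion step described above, here is a minimal one-axis Kalman filter with a constant-velocity model, where the position measurement stands in for the LeapMotion and the velocity measurement for the Kinect; the sampling rate and all noise covariances are placeholder assumptions, not values from the paper.

```python
import numpy as np

# Minimal 1-axis Kalman filter for position/velocity fusion: the state is
# [position, velocity], with one sensor contributing a position measurement
# and another a velocity measurement each cycle.

DT = 1.0 / 60.0                        # assumed sensor cycle [s]
F = np.array([[1.0, DT], [0.0, 1.0]])  # constant-velocity motion model
H = np.eye(2)                          # position and velocity measured directly
Q = np.diag([1e-5, 1e-3])              # process noise (assumed)
R = np.diag([1e-4, 5e-3])              # position / velocity sensor noise (assumed)

x = np.zeros(2)                        # state estimate
P = np.eye(2)                          # estimate covariance

def kf_step(z_pos, z_vel):
    """One predict/update cycle; returns the fused [pos, vel] estimate."""
    global x, P
    # Predict forward one cycle under the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the stacked measurement [position, velocity].
    z = np.array([z_pos, z_vel])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    return x
```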

    Kinematic Analysis and Experimental Verification of a Wearable Haptic Interface for a Tele-operated Robot

    In the 21st century, the frequency of natural disasters has greatly increased, and it is often difficult for humans to reach disaster areas directly. Tele-operation systems have been developed to perform tasks in place of human workers, and a haptic interface is necessary for efficient control of such systems. There are broadly two types of interfaces for providing haptic feedback to the user: exoskeleton-type interfaces and end-effector-type interfaces (E-E interfaces). Exoskeleton-type interfaces have several limitations, including issues related to the transmission of the reaction force to the user and misalignment. The drawbacks of existing E-E interfaces include restricted coverage of the user's full range of arm movement and unevenness of the maximum output range. Several conditions must be considered in the design of the interface structure. In this thesis, a 3-DOF kinematic structure for a wearable haptic interface is proposed for intuitive control and improved task performance of a tele-operated robot. The user's range of motion required to manipulate a tele-operated robot was estimated, and the links of the interface were bent to avoid collision with the user. Simulations verified that the proposed interface design covers the user's range of motion; a structure that satisfies approximately 95% of the range of motion was identified. A prototype was then fabricated and evaluated while being moved within the proposed range of motion. To lower the inertia of the interface actuation mechanism, a cable-driven actuation mechanism (CDAM) was used. The adopted CDAM combines a series elastic actuator (SEA) with a linear spring that maintains the minimum required pretension of the cable. A typical CDAM routes the cable through a sheath; however, sheath routing introduces high, non-linear friction between the cable and sheath. To avoid this problem, the cable was routed from the actuator to the distal joint using a pulley structure without a sheath. A proportional-integral-derivative (PID) controller was adopted for the drive, its gains were tuned with the Ziegler-Nichols method, and integral anti-windup was used to prevent error accumulation in the PID controller. To transmit the intended force to the user, three types of residual force (friction, gravity, and tension) were compensated. A virtual-wall experiment was performed to determine whether the intended force was transmitted to the user, with a force/torque (F/T) sensor attached to the handle of the interface to measure the transmitted force. Finally, a peg-in-hole experiment verified that tele-operation efficiency improves when force feedback is provided: the results showed a reduction in work time of approximately 32% and a reduction in impulse of approximately 70%.
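    As an illustration of the control scheme described above, here is a minimal PID controller with clamping-style integral anti-windup, together with the classic Ziegler-Nichols gain rules; the class interface and any numeric values are illustrative, not the thesis implementation.

```python
# Minimal PID loop with integral anti-windup of the clamping kind: the
# integrator only accumulates while the actuator output is not saturated,
# so the integral term cannot build up an unrecoverable error.

class PID:
    def __init__(self, kp, ki, kd, dt, u_min, u_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.i_term, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        d_term = self.kd * (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.i_term + d_term
        u_sat = min(max(u, self.u_min), self.u_max)
        if u == u_sat:                       # integrate only when unsaturated
            self.i_term += self.ki * err * self.dt
        return u_sat

def ziegler_nichols(ku, tu):
    """Classic ZN PID gains from the ultimate gain ku and period tu."""
    kp = 0.6 * ku
    return kp, 2.0 * kp / tu, kp * tu / 8.0  # kp, ki, kd
```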

    User-Centered Design and Evaluation of an Upper Limb Rehabilitation System with a Virtual Environment

    Virtual environments (VEs) and haptic devices can increase patients' motivation and let them observe their own performance during rehabilitation. However, some of these technologies have drawbacks because they do not consider therapists' needs and experience. This research presents the development and usability evaluation of an upper-limb rehabilitation system, based on a user-centered design approach, for patients with moderate or mild stroke who can perform active rehabilitation. The system consists of a virtual environment with four virtual scenarios and a purpose-built haptic device with vibrotactile feedback, and it can be visualized on a monitor or a Head-Mounted Display (HMD). Two evaluations were carried out. In the first, five therapists evaluated the system's usability on a monitor using the System Usability Scale, the user experience with the AttrakDiff questionnaire, and the functionality with customized items; improvements were made to the system as a result of these tests. The second evaluation was carried out by ten volunteers, who evaluated usability, user experience, and performance with both a monitor and an HMD. A comparison of the therapist and volunteer scores shows an increase in the usability score (from 78 to over 85); the hedonic score rose from 0.6 to 2.23, the pragmatic qualities from 1.25 to 2.20, and the attractiveness from 1.3 to 2.95. Additionally, the haptic device and the VE showed no relevant difference in performance between the monitor and the HMD. The results show that the proposed system has the characteristics to be a helpful tool for therapists and for upper-limb rehabilitation.
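    For context on the reported usability numbers, the sketch below is the standard System Usability Scale scoring formula (ten items, each rated 1 to 5, mapped to a 0 to 100 scale); the example responses are invented for illustration and are not data from the study.

```python
# Standard SUS scoring: odd-numbered items contribute (response - 1),
# even-numbered items contribute (5 - response), and the sum is scaled
# by 2.5 to land on a 0..100 scale.

def sus_score(items):
    """items: list of 10 responses, each 1..5, in questionnaire order."""
    assert len(items) == 10
    odd = sum(items[i] - 1 for i in range(0, 10, 2))   # items 1,3,5,7,9
    even = sum(5 - items[i] for i in range(1, 10, 2))  # items 2,4,6,8,10
    return (odd + even) * 2.5

# Example: a fairly positive (invented) response sheet scores 80.0
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))
```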

    A review on manipulation skill acquisition through teleoperation-based learning from demonstration

    Manipulation skill learning and generalization have gained increasing attention due to the wide application of robot manipulators and the rapid growth of robot-learning techniques. In particular, learning from demonstration has been exploited widely and successfully in the robotics community and is regarded as a promising direction for realizing manipulation skill learning and generalization. Complementing these learning techniques, immersive teleoperation enables a human to operate a remote robot through an intuitive interface and to achieve telepresence. Combining learning methods with teleoperation, and adapting the learned skills to different tasks in new situations, is therefore a promising way to transfer manipulation skills from humans to robots. This review aims to provide an overview of immersive teleoperation for skill learning and generalization in complex manipulation tasks. To this end, the key technologies, e.g., manipulation skill learning, multimodal interfaces for teleoperation, and telerobotic control, are introduced. An overview is then given of the most important applications of immersive teleoperation platforms for robot skill learning. Finally, the survey discusses the remaining open challenges and promising research topics.
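    As a minimal, concrete instance of the teleoperation-based learning-from-demonstration pipeline this review surveys, the sketch below time-normalises several recorded teleoperation trajectories and averages them into a replayable reference; real systems use richer encodings (DMPs, GMM/GMR, neural policies), and all names here are illustrative.

```python
import numpy as np

# Simplest possible LfD data flow: record several teleoperated end-effector
# trajectories, time-normalise them onto a common phase, and average them
# into a single reference trajectory for a tracking controller to replay.

def time_normalise(traj, n=100):
    """Resample a (T, d) trajectory array onto n evenly spaced phase steps."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, k])
                     for k in range(traj.shape[1])], axis=1)

def learn_skill(demonstrations, n=100):
    """Average time-aligned demos into a single (n, d) reference trajectory."""
    aligned = np.stack([time_normalise(d, n) for d in demonstrations])
    return aligned.mean(axis=0)
```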
