    Migration from Teleoperation to Autonomy via Modular Sensor and Mobility Bricks

    In this thesis, the teleoperated communications of a Remotec ANDROS robot have been reverse engineered. The information acquired through this reverse engineering process is used to enhance the teleoperation and add intelligence to the initially teleoperated robot. The main contribution of this thesis is the implementation of the mobility brick paradigm, which enables autonomous operation using the commercial teleoperated ANDROS platform. The brick paradigm is a generalized architecture for a modular approach to robotics. This architecture, and the contribution of this thesis, represent a paradigm shift from the proprietary commercial models that exist today. The modular system of sensor bricks integrates the transformed mobility platform and defines it as a mobility brick. In the wall-following application implemented in this work, the mobile robotic system acquires intelligence using the range sensor brick. This application illustrates one way to alleviate the burden on the human operator by delegating certain tasks to the robot. Wall following is one of several examples of giving a degree of autonomy to an essentially teleoperated robot through the Sensor Brick System. Indeed, once the proprietary robot has been converted into a mobility brick, the possibilities for autonomy are numerous and vary with the different sensor bricks. The autonomous system implemented is not a fixed-application robot but rather a non-specific, autonomy-capable platform. Meanwhile, the native controller and the computer-interfaced teleoperation remain available when necessary. Rather than trading off by switching permanently from teleoperation to autonomy, this system provides the flexibility to switch between the two at the operator's command. The contributions of this thesis reside in the reverse engineering of the original robot, its upgrade to a computer-interfaced teleoperated system, the mobility brick paradigm, and the addition of autonomy capabilities.
The application of a robot autonomously following a wall is subsequently implemented, tested, and analyzed in this work. The analysis provides the programmer with information on controlling the robot and launching the autonomous function. The results are conclusive and open up possibilities for a variety of autonomous applications for mobility platforms using modular sensor bricks.
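As a rough illustration of how a range-sensor-driven wall follower can work, the following is a minimal proportional-control sketch. The function name, gain, distance convention, and actuator limits are illustrative assumptions, not the thesis's actual implementation:

```python
def wall_follow_step(side_range_m, desired_m=0.5, forward_speed=0.2, k_p=1.5):
    """One control step: hold a fixed lateral distance to the wall.

    side_range_m: lateral range reading (hypothetical sensor-brick output).
    Returns (forward_speed, turn_rate); positive turn_rate steers
    toward the wall.
    """
    error = side_range_m - desired_m      # > 0: too far from the wall
    turn_rate = k_p * error               # proportional steering correction
    # Clamp to a plausible actuator limit (assumed, in rad/s).
    turn_rate = max(-1.0, min(1.0, turn_rate))
    return forward_speed, turn_rate
```

In a loop, each new range reading from the sensor brick would be fed to this function and the resulting velocity pair forwarded to the mobility brick.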

    Towards Supervising Remote Dexterous Robots Across Time Delay

    The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will need to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading a plan for an entire day seems excessive. An approach for controlling dexterous robots under intermediate time delay is presented, in which software running within a ground-control cockpit predicts the intention of an immersed robot supervisor, and the remote robot then autonomously executes the supervisor's intended tasks. Initial results are presented.

    An Object Template Approach to Manipulation for Semi-autonomous Avatar Robots

    The first steps towards using mobile robots to perform manipulation tasks in remote environments have now been taken. This opens new possibilities for research and development, since robots can help humans perform tasks in many scenarios. A remote robot can be used as an avatar in applications such as medical or industrial use, in rescue and disaster-recovery tasks in environments that might be too hazardous for human beings to enter, as well as in more distant scenarios such as planetary exploration. Among the most typical applications in recent years, research towards the deployment of robots to mitigate disaster scenarios has been of great interest in the robotics field. Disaster scenarios present challenges that need to be tackled. Their unstructured nature makes them difficult to predict, and even though some assumptions can be made for human-designed scenarios, there is no certainty about the expected conditions. Communication with a robot inside these scenarios might also be challenged; wired communications limit reachability, and wireless communications are limited by bandwidth. Despite the great progress in the robotics research field, these difficulties have prevented current autonomous robotic approaches from performing efficiently in unstructured remote scenarios. On one side, acquiring physical and abstract information from unknown objects in a fully autonomous way under uncontrolled environmental conditions is still an unsolved problem. Several challenges have to be overcome, such as object recognition, grasp planning, manipulation, and mission planning, among others. On the other side, purely teleoperated robots require a reliable communication link, robust to reachability, bandwidth, and latency limitations, which can provide all the feedback that a human operator needs in order to achieve sufficiently good situational awareness, e.g., the world model, the robot state, and the forces and torques exerted.
Processing this amount of information, plus the training necessary to perform joint motions with the robot, represents a high mental workload for the operator, which results in very long execution times. Additionally, a purely teleoperated approach is error-prone, given that success in a manipulation task strongly depends on the ability and expertise of the human operating the robot. Both autonomous and teleoperated robotic approaches have pros and cons; for this reason, a middle-ground approach has emerged. In an approach where a human supervises a semi-autonomous remote robot, the strengths of both fully autonomous and purely teleoperated approaches can be combined while their weaknesses are tackled. A remote manipulation task can be divided into sub-tasks such as planning, perception, action, and evaluation. A proper distribution of these sub-tasks between the human operator and the remote robot can increase the efficiency and likelihood of success of a manipulation task. On the one hand, a human operator can trivially plan a task (planning), identify objects in the sensor data acquired by the robot (perception), and verify the completion of a task (evaluation). On the other hand, it is challenging to remotely control, in joint space, a robotic system like a humanoid robot, which can easily have over 25 degrees of freedom (DOF). For this reason, in this approach the complex sub-tasks such as motion planning, motion execution, and obstacle avoidance (action) are performed autonomously by the remote robot. With this distribution of tasks, the challenge arises of converting the operator's intent into a robot action. This thesis investigates concepts for efficiently providing a remote robot with the operator's intent through a flexible means of interaction. While current approaches focus on an object-grasp-centered means of interaction, this thesis aims at providing physical and abstract properties of the objects of interest.
With this information, the robot can perform autonomous sub-tasks such as locomotion through the environment, grasping objects, and manipulating them at an affordance level while avoiding collisions with the environment, in order to efficiently accomplish the manipulation task at hand. For this purpose, the concept of the Object Template (OT) has been developed in this thesis. An OT is a virtual representation of an object of interest that contains information that a remote robot can use to manipulate that object or other similar objects. The object template concept presented here goes beyond state-of-the-art related concepts by extending the robot's capabilities to use affordance information about the object. This concept includes physical information (mass, center of mass, inertia tensor) as well as abstract information (potential grasps, affordances, and usabilities). Because humans are very good at analysing a situation and planning new ways of solving a task, even using objects for different purposes, it is important that the planning and perception performed by the operator can be communicated, so that the robot can execute the action based on the information contained in the OT. This combines human intelligence with robot capabilities. For example, in a 3D environment, an OT can be visualized as a 3D geometry mesh that simulates an object of interest. A human operator can manipulate the OT and move it so that it overlaps with the visualized sensor data of the real object. Information about the object template type and its pose can be compressed and sent over low-bandwidth communication. The remote robot can then use the information in the OT to approach, grasp, and manipulate the real object. The use of remote humanoid robots as avatars is expected to be intuitive to operators (or potential human response forces), since their kinematic chains and degrees of freedom are similar to those of humans.
This allows operators to visualize themselves in the remote environment and reason about how to solve a task; however, task requirements such as special tools might not be available. For this reason, a flexible means of interaction that allows improvisation by the operator is also needed. In this approach, improvisation is described as "a change of a plan on how to achieve a certain task, depending on the current situation". A human operator can then improvise by adapting the affordances of known objects to new, unknown objects, for example by applying the affordances defined in an OT to a new object that has similar physical properties or whose manipulation skills belong to the same class. The experimental results presented in this thesis validate the proposed approach by demonstrating the successful achievement of several manipulation tasks using object templates. Systematic laboratory experimentation has been performed to evaluate the individual aspects of this approach. The performance of the approach has been tested on three different humanoid robotic systems (one of these robots belongs to another research laboratory). These three robotic platforms also participated in the renowned international competition, the DARPA Robotics Challenge (DRC), which between 2012 and 2015 was considered the most ambitious and challenging robotics competition.
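A minimal sketch of what an object template might look like as a data structure, assuming Python and illustrative field names (the thesis's actual representation may differ): it bundles the physical and abstract information described above, and only the template identity and pose need to cross the low-bandwidth link.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectTemplate:
    """Virtual representation of an object of interest (fields illustrative)."""
    name: str
    mass_kg: float                 # physical information
    center_of_mass: tuple          # (x, y, z) in the template frame
    inertia_tensor: tuple          # 3x3 row-major, template frame
    grasps: list = field(default_factory=list)       # candidate grasp poses
    affordances: list = field(default_factory=list)  # e.g. "turn", "push", "pry"

    def to_message(self, pose):
        """Compress template identity plus operator-aligned pose
        for a low-bandwidth link; geometry and affordances are
        already known on the robot side."""
        return {"template": self.name, "pose": pose}
```

The operator would align the template with the sensor data and send only `to_message(...)`; the robot looks up the full template locally to plan grasping and manipulation.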

    An Architecture for Online Affordance-based Perception and Whole-body Planning

    The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons learned while developing our system on a highly compressed schedule.

    Model-Augmented Haptic Telemanipulation: Concept, Retrospective Overview, and Current Use Cases

    Certain telerobotic applications, including telerobotics in space, pose particularly demanding challenges to both technology and humans. Traditional bilateral telemanipulation approaches often cannot be used in such applications due to technical and physical limitations such as long and varying delays, packet loss, and limited bandwidth, as well as high reliability, precision, and task-duration requirements. In order to close this gap, we research model-augmented haptic telemanipulation (MATM), which uses two kinds of models: a remote model that enables shared autonomous functionality of the teleoperated robot, and a local model that aims to generate assistive augmented haptic feedback for the human operator. Several technological methods that form the backbone of the MATM approach have already been successfully demonstrated in accomplished telerobotic space missions. On this basis, we have applied our approach in more recent research to applications in the fields of orbital robotics, telesurgery, caregiving, and telenavigation. In the course of this work, we have advanced specific aspects of the approach that were of particular importance for each respective application, especially shared autonomy and haptic augmentation. This overview paper discusses the MATM approach in detail, presents the latest research results of the various technologies encompassed within this approach, provides a retrospective of DLR's telerobotic space missions, demonstrates the broad application potential of MATM based on the aforementioned use cases, and outlines lessons learned and open challenges.

    Supervising Remote Humanoids Across Intermediate Time Delay

    The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will need to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading a plan for an entire day seems excessive. An approach for controlling humanoids under intermediate time delay is presented. This approach uses software running within a ground-control cockpit to predict an immersed robot supervisor's motions, which the remote humanoid autonomously executes. Initial results are presented.

    Whole-Body Teleoperation of Humanoid Robots (Télé-opération Corps Complet de Robots Humanoïdes)

    This thesis investigates systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible to humans (e.g., disaster-response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case, the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate their robot avatar in an effective way. Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of these parameters. In this thesis, we proposed a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion-capture suit as the input device to control the humanoid and a virtual-reality headset connected to the robot's cameras to obtain visual feedback. We first translated the human movements into equivalent robot movements by developing a motion-retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion.
We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements, using a multi-objective stochastic optimizer, which allowed us to find robust solutions working on the real robot in a few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this mode into the teleoperation system, which allows the user to switch between the two different modes. A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.
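The delay-masking idea of executing predicted commands before they arrive can be sketched as follows. A simple linear extrapolation stands in for the learned prediction model described in the thesis, and the class name, one-dimensional command, and fixed delay are all illustrative assumptions:

```python
from collections import deque

class DelayMaskingPredictor:
    """Execute a predicted command *now* so that the delayed visual
    feedback appears synchronized to the operator.

    The thesis queries a machine learning model trained on past
    trajectories; here linear extrapolation of the last two received
    commands stands in for that model (an assumption for brevity).
    """
    def __init__(self, delay_steps, history=10):
        self.delay_steps = delay_steps           # network delay, in control steps
        self.received = deque(maxlen=history)    # delayed commands that arrived

    def receive(self, command):
        """Record a (delayed) command from the operator."""
        self.received.append(float(command))

    def predict_current(self):
        """Predict the command the operator is issuing right now."""
        if len(self.received) < 2:
            return self.received[-1] if self.received else 0.0
        # Extrapolate the most recent trend across the known delay.
        slope = self.received[-1] - self.received[-2]
        return self.received[-1] + slope * self.delay_steps
```

Each control cycle, the robot would call `predict_current()` and execute that command instead of the stale, delayed one.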

    The classification and new trends of shared control strategies in telerobotic systems: A survey

    Shared control, which permits a human operator and an autonomous controller to share the control of a telerobotic system, can reduce the operator's workload and/or improve performance during the execution of tasks. Owing to the great benefit of combining human intelligence with the higher power and precision of robots, shared control architectures occupy a wide spectrum of telerobotic systems. Although various shared control strategies have been proposed, a systematic overview that teases out the relations among different strategies is still absent. This survey therefore aims to provide a big picture of existing shared control strategies. To achieve this, we propose a categorization method and classify shared control strategies into three categories: Semi-Autonomous Control (SAC), State-Guidance Shared Control (SGSC), and State-Fusion Shared Control (SFSC), according to the different ways control is shared between human operators and autonomous controllers. Typical scenarios for using each category are listed, and the advantages, disadvantages, and open issues of each category are discussed. Then, based on this overview of existing strategies, new trends in shared control strategies, including “autonomy from learning” and “autonomy-level adaptation,” are summarized and discussed.
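As a toy illustration of the fusion idea behind state-fusion shared control (not any specific controller from the survey), a command can be formed as a convex combination of the operator's input and the autonomous controller's output; the function name and the scalar blending weight are assumptions:

```python
def fuse_commands(human_cmd, auto_cmd, alpha):
    """Blend operator and autonomous commands element-wise.

    alpha in [0, 1]: 1.0 = pure teleoperation, 0.0 = full autonomy.
    A minimal sketch of state fusion; real strategies may adapt alpha
    online from task state or operator intent.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return [alpha * h + (1.0 - alpha) * a
            for h, a in zip(human_cmd, auto_cmd)]
```

The "autonomy-levels adaptation" trend mentioned above corresponds, in this toy picture, to varying `alpha` over time rather than fixing it.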