
    Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot

    Mobile manipulation is one of the key challenges in the field of search and rescue (SAR) robotics, requiring robots with flexible locomotion and manipulation abilities. Since the tasks are mostly unknown in advance, the robot has to adapt to a wide variety of terrains and workspaces during a mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and an anthropomorphic upper body to carry out complex tasks in environments too dangerous for humans. Due to its high number of degrees of freedom, controlling the robot with direct teleoperation approaches is challenging and exhausting. Supervised autonomy approaches promise to increase the quality and speed of control while keeping the flexibility to solve unknown tasks. We developed a set of operator assistance functionalities with different levels of autonomy to control the robot for challenging locomotion and manipulation tasks. The integrated system was evaluated in disaster response scenarios and showed promising performance. (In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018.)
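    The abstract does not detail the individual assistance functionalities. As a rough, hypothetical sketch of the supervised-autonomy idea only, the snippet below dispatches an operator-selected sub-task to handlers at different autonomy levels; the level names and handlers are illustrative, not the Centauro software's actual interface.

```python
from enum import Enum, auto

class AutonomyLevel(Enum):
    """Hypothetical autonomy levels an operator can select per sub-task."""
    DIRECT_TELEOPERATION = auto()   # operator streams joint commands directly
    ASSISTED = auto()               # operator gives goals, robot stabilizes/refines
    AUTONOMOUS = auto()             # robot plans and executes on its own

def execute_subtask(subtask, level, operator_input=None):
    """Dispatch a sub-task (e.g. 'grasp valve') to the handler matching the
    selected level of autonomy.  Returned strings stand in for real actions."""
    if level is AutonomyLevel.DIRECT_TELEOPERATION:
        return f"streaming operator joint commands for '{subtask}'"
    if level is AutonomyLevel.ASSISTED:
        return f"tracking operator goal {operator_input!r} for '{subtask}' with autonomous stabilization"
    return f"planning and executing '{subtask}' autonomously"

print(execute_subtask("grasp valve", AutonomyLevel.ASSISTED, operator_input=(0.4, 0.1, 0.9)))
```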

    Software Architecture and Development for Controlling a Hubo Humanoid Robot

    Due to their human-like structure, humanoid robots are capable of performing some complex tasks. Since a humanoid robot has a large number of actuators and sensors, controlling it is difficult. For tasks like balancing, driving a car, and interacting with humans, real-time response of the robot is essential. Efficiently controlling a humanoid robot requires software that guarantees a real-time interface and control mechanism so that real-time response of the robot is possible. Additionally, to reduce development effort and time, the software should be open-source, multi-lingual, and should have high-level constructs built in. Currently, the Robot Operating System (ROS) and Microsoft Robotics Developer Studio (MRDS) are the most commonly used software packages for controlling robots. Since ROS uses the Transmission Control Protocol (TCP) for inter-process communication, the communication latency is high; therefore, if ROS is used, the robot cannot respond in real time. On the other hand, MRDS is not open-source but a proprietary software package, and therefore it cannot be optimized for a particular robot. Thus, there is an urgent need to develop real-time, open-source, modular, and thin software for controlling humanoid robots. This thesis describes the design and architecture of two software packages developed to fill this gap. It is expected that in the near future a large number of humanoid robots will be used all around the world to perform various tasks. The developed software packages have the potential to become the most commonly used software packages for controlling humanoid robots. These packages will assist humans in controlling and monitoring humanoid robots to perform search-and-rescue operations, explore the universe, assist in household chores, etc.
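    The latency argument against TCP-based message passing can be made concrete by timing small, command-sized messages over a loopback socket. The sketch below is a hypothetical micro-benchmark, not a measurement from the thesis; the absolute numbers depend entirely on the host and say nothing about ROS itself.

```python
import socket
import threading
import time

def tcp_echo_server(port, n_messages):
    """Echo back n_messages over a single loopback TCP connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        for _ in range(n_messages):
            conn.sendall(conn.recv(64))
    srv.close()

def time_tcp_round_trips(port=50007, n_messages=1000):
    """Return the mean loopback round-trip time in microseconds."""
    server = threading.Thread(target=tcp_echo_server, args=(port, n_messages))
    server.start()
    time.sleep(0.1)  # give the server a moment to start listening
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # avoid Nagle batching
    payload = b"x" * 32  # roughly the size of a small joint command
    start = time.perf_counter()
    for _ in range(n_messages):
        cli.sendall(payload)
        cli.recv(64)
    elapsed = time.perf_counter() - start
    cli.close()
    server.join()
    return 1e6 * elapsed / n_messages

print(f"mean TCP loopback round trip: {time_tcp_round_trips():.1f} us")
```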

    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of the humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and with planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents that are trained with a human supervisor: the kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work were tested over the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.
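    The abstract does not state the arbitration rule used to trade off control between human and robot. As a hedged illustration of the scaled-autonomy idea only, the sketch below blends an operator velocity command with an autonomous planner's command through a scalar autonomy level and clamps the result to a velocity limit; all names and parameters are illustrative.

```python
import numpy as np

def scaled_autonomy_command(human_cmd, auto_cmd, autonomy, v_max=0.5):
    """Blend operator and autonomous body-velocity commands (vx, vy, yaw_rate).

    autonomy = 0.0 -> pure teleoperation, 1.0 -> fully autonomous.
    The blended command is clamped to a hypothetical limit v_max so the
    walk engine never receives an infeasible request.
    """
    autonomy = float(np.clip(autonomy, 0.0, 1.0))
    cmd = (1.0 - autonomy) * np.asarray(human_cmd) + autonomy * np.asarray(auto_cmd)
    return np.clip(cmd, -v_max, v_max)

# Operator pushes forward while the planner wants to sidestep an obstacle.
print(scaled_autonomy_command((0.4, 0.0, 0.0), (0.2, 0.3, 0.1), autonomy=0.7))
```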

    Supervised Telemanipulation Interface for Humanoid Driving

    This thesis proposes solutions for semi-autonomous driving of an Ackerman-style vehicle by a full-sized humanoid robot. A Robot Operating System (ROS) based interface is developed to support humanoid driving. The humanoid robot is equipped with an on-board vision system which comprises a 2D LIDAR, an inertial measurement unit, and stereo cameras. Based on the visual information from the vision system, the operator specifies the operation to be performed: the operator commands the turning angle for the steering wheel, and the robot takes the necessary actions to realize this task. Likewise, pressing or releasing the gas pedal is done on the operator's request. The operator has the option of visualizing a virtual model of the robot and its work site, which facilitates command and control of the robot. Experiments are conducted on the full-sized humanoid robot DRC-Hubo to drive a golf cart and a two-passenger utility vehicle. Earlier stages of driving, such as searching for the vehicle and walking towards it, are also briefly discussed.
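    As background on how a commanded steering angle relates to the vehicle's motion, the sketch below applies standard Ackermann steering geometry to compute the turning radius and the inner/outer front-wheel angles. The wheelbase and track width are placeholder values, not those of the golf cart or utility vehicle used in the experiments.

```python
import math

def ackermann_wheel_angles(steer_angle_deg, wheelbase=1.65, track=1.20):
    """Return (turning_radius_m, inner_deg, outer_deg) for a commanded mean
    steering angle, using the bicycle-model relation R = L / tan(delta).
    Vehicle dimensions are placeholders, not the experimental vehicles'."""
    delta = math.radians(steer_angle_deg)
    if abs(delta) < 1e-6:
        return math.inf, 0.0, 0.0           # driving straight
    radius = wheelbase / math.tan(abs(delta))            # rear-axle midpoint path radius
    inner = math.degrees(math.atan(wheelbase / (radius - track / 2)))
    outer = math.degrees(math.atan(wheelbase / (radius + track / 2)))
    sign = 1.0 if delta > 0 else -1.0
    return radius, sign * inner, sign * outer

print(ackermann_wheel_angles(15.0))
```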

    Generating Humanoid Multi-Contact through Feasibility Visualization

    We present a feasibility-driven teleoperation framework designed to generate humanoid multi-contact maneuvers for use in unstructured environments. Our framework is designed for motions with arbitrary contact modes and postures. The operator configures a pre-execution preview robot through contact points and kinematic tasks. A fast estimate of the preview robot's quasi-static feasibility is obtained by checking contact stability and collisions along an interpolated trajectory. A visualization of the Center of Mass (CoM) stability margin, based on friction and actuation constraints, is displayed and can be previewed when the operator chooses to add or remove contacts. Contact points can be placed anywhere on a mesh approximation of the robot surface, enabling motions with knee or forearm contacts. We demonstrate our approach in simulation and on hardware with a NASA Valkyrie humanoid, focusing on multi-contact trajectories that are challenging to generate autonomously or through alternative teleoperation approaches.
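    The paper's feasibility margin accounts for friction and actuation constraints; a much simpler stand-in, sketched below under the assumption of coplanar contacts, is the classic quasi-static test that the CoM ground projection lies inside the convex hull of the contact points, with the margin taken as the distance to the hull boundary. Function and variable names are illustrative.

```python
import numpy as np

def _cross(o, a, b):
    """z-component of the 2D cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull_2d(points):
    """Monotone-chain convex hull of 2D points, returned counter-clockwise."""
    pts = sorted(map(tuple, points))
    def build(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull
    lower, upper = build(pts), build(reversed(pts))
    return np.array(lower[:-1] + upper[:-1])

def com_stability_margin(com_xy, contacts_xy):
    """Signed distance (m) from the CoM ground projection to the support-polygon
    boundary: positive inside, negative outside.  This simplification ignores
    the friction and actuation constraints used in the paper's margin."""
    hull = convex_hull_2d(contacts_xy)
    margins = []
    for i in range(len(hull)):
        a, b = hull[i], hull[(i + 1) % len(hull)]
        edge = b - a
        inward = np.array([-edge[1], edge[0]]) / np.linalg.norm(edge)  # inward normal of a CCW hull
        margins.append(float(np.dot(inward, np.asarray(com_xy) - a)))
    return min(margins)

# Four coplanar contact points (e.g. two feet and a knee patch) and a CoM guess.
contacts = [(0.0, 0.0), (0.3, 0.0), (0.3, 0.4), (0.0, 0.4)]
print(f"stability margin: {com_stability_margin((0.1, 0.15), contacts):+.3f} m")
```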

    Computer Simulation of Human-Robot Collaboration in the Context of Industry Revolution 4.0

    The essential role of robot simulation for industrial robots, in particular collaborative robots, is presented in this chapter. We begin by discussing robot utilization in industry, which includes mobile robots, arm robots, and humanoid robots. The author emphasizes the application of collaborative robots with regard to Industry 4.0. We then present how collaborative robot utilization in industry can be supported through computer simulation, by means of virtual robots in simulated environments. The robot simulation presented here is based on the Open Dynamics Engine (ODE) using anyKode Marilou. The author surveys the use of dynamic simulation in applications of collaborative robots toward Industry 4.0. Given the challenging problems related to humanoid robots as collaborative robots and to behavior in human-robot collaboration, robot simulation may open up opportunities in collaborative robotics research in the context of Industry 4.0. As developing a real collaborative robot is still expensive and time-consuming, and access to commercial collaborative robots is relatively limited, robot simulation can be an option for collaborative robotics research and education purposes.

    An Object Template Approach to Manipulation for Semi-autonomous Avatar Robots

    Nowadays, the first steps towards using mobile robots to perform manipulation tasks in remote environments have been made. This opens new possibilities for research and development, since robots can help humans perform tasks in many scenarios. A remote robot can be used as an avatar in applications such as medical or industrial use, in rescue and disaster-recovery tasks in environments too hazardous for human beings to enter, as well as in more distant scenarios like planetary exploration. Among the most typical applications in recent years, research towards the deployment of robots to mitigate disaster scenarios has been of great interest in the robotics field.

    Disaster scenarios present challenges that need to be tackled. Their unstructured nature makes them difficult to predict, and even though some assumptions can be made for human-designed scenarios, there is no certainty about the expected conditions. Communication with a robot inside these scenarios may also be challenged: wired communications limit reachability, and wireless communications are limited by bandwidth. Despite great progress in robotics research, these difficulties have prevented current autonomous robotic approaches from performing efficiently in unstructured remote scenarios. On one side, acquiring physical and abstract information from unknown objects in a fully autonomous way under uncontrolled environmental conditions is still an unsolved problem; several challenges have to be overcome, such as object recognition, grasp planning, manipulation, and mission planning, among others. On the other side, purely teleoperated robots require a reliable communication link, robust to reachability, bandwidth, and latency constraints, which can provide all the feedback a human operator needs to achieve sufficiently good situational awareness, e.g., the world model, robot state, and the forces and torques exerted. Processing this amount of information, plus the training necessary to perform joint motions with the robot, represents a high mental workload for the operator and results in very long execution times. Additionally, a purely teleoperated approach is error-prone, given that success in a manipulation task strongly depends on the ability and expertise of the human operating the robot.

    Both autonomous and teleoperated robotic approaches have pros and cons; for this reason, a middle-ground approach has emerged. In an approach where a human supervises a semi-autonomous remote robot, the strengths of both fully autonomous and purely teleoperated approaches can be combined while their weaknesses are tackled. A remote manipulation task can be divided into sub-tasks such as planning, perception, action, and evaluation. A proper distribution of these sub-tasks between the human operator and the remote robot can increase the efficiency and the potential for success of a manipulation task. On the one hand, a human operator can trivially plan a task (planning), identify objects in the sensor data acquired by the robot (perception), and verify the completion of a task (evaluation). On the other hand, it is challenging to remotely control, in joint space, a robotic system like a humanoid robot that can easily have over 25 degrees of freedom (DOF). For this reason, in this approach the complex sub-tasks such as motion planning, motion execution, and obstacle avoidance (action) are performed autonomously by the remote robot.
    With this distribution of tasks, the challenge of converting the operator's intent into a robot action arises. This thesis investigates how to efficiently provide a remote robot with the operator's intent through a flexible means of interaction. While current approaches focus on an object-grasp-centered means of interaction, this thesis aims at providing physical and abstract properties of the objects of interest. With this information, the robot can perform autonomous sub-tasks such as locomotion through the environment, grasping objects, and manipulating them at an affordance level while avoiding collisions with the environment, in order to efficiently accomplish the required manipulation task.

    For this purpose, the concept of the Object Template (OT) has been developed in this thesis. An OT is a virtual representation of an object of interest that contains information a remote robot can use to manipulate that object or other similar objects. The object template concept presented here goes beyond state-of-the-art related concepts by extending the robot's capability to use affordance information of the object. The concept includes physical information (mass, center of mass, inertia tensor) as well as abstract information (potential grasps, affordances, and usabilities). Because humans are very good at analysing a situation, planning new ways to solve a task, and even using objects for different purposes, it is important that the planning and perception performed by the operator can be communicated, so that the robot can execute the action based on the information contained in the OT. This combines human intelligence with robot capabilities. For example, in a 3D environment, an OT can be visualized as a 3D geometry mesh that simulates an object of interest. A human operator can manipulate the OT and move it so that it overlaps with the visualized sensor data of the real object. The object template type and its pose can be compressed and sent over a low-bandwidth communication link. The remote robot can then use the information in the OT to approach, grasp, and manipulate the real object.

    The use of remote humanoid robots as avatars is expected to be intuitive to operators (or potential human response forces), since their kinematic chains and degrees of freedom are similar to those of humans. This allows operators to visualize themselves in the remote environment and think about how to solve a task; however, task requirements such as special tools might not be available. For this reason, a flexible means of interaction that allows improvisation by the operator is also needed. In this approach, improvisation is described as "a change of a plan on how to achieve a certain task, depending on the current situation". A human operator can then improvise by adapting the affordances of known objects to new, unknown objects, for example by applying the affordances defined in an OT to a new object that has similar physical properties or whose manipulation skills belong to the same class.

    The experimental results presented in this thesis validate the proposed approach by demonstrating the successful achievement of several manipulation tasks using object templates. Systematic laboratory experimentation has been performed to evaluate the individual aspects of this approach. The performance of the approach has been tested on three different humanoid robotic systems (one of these robots belongs to another research laboratory).
    These three robotic platforms also participated in the renowned international DARPA Robotics Challenge (DRC), which between 2012 and 2015 was considered the most ambitious and challenging robotics competition.
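    The thesis abstract does not specify a wire format; the sketch below only illustrates the kind of information an object template could bundle (physical properties, grasps, affordances) and how the template identifier plus its 6-DoF pose might be packed into a few dozen bytes for a low-bandwidth link. The field names and struct layout are hypothetical, not the system's actual message definition.

```python
import struct
from dataclasses import dataclass, field

@dataclass
class ObjectTemplate:
    """Illustrative object template: physical and abstract object information
    that a remote robot could reuse for manipulation."""
    template_id: int                    # entry in a library shared by operator and robot
    mass: float                         # kg
    com: tuple                          # centre of mass in the object frame (m)
    inertia: tuple                      # six unique inertia tensor entries (kg m^2)
    grasps: list = field(default_factory=list)       # candidate grasp poses
    affordances: list = field(default_factory=list)  # e.g. ["turn", "pull", "cut"]

def pack_template_pose(template_id, position, quaternion):
    """Pack the template id plus a 6-DoF pose (xyz + quaternion) into 29 bytes:
    one id byte and seven little-endian 32-bit floats."""
    return struct.pack("<B3f4f", template_id, *position, *quaternion)

msg = pack_template_pose(7, (1.2, -0.4, 0.9), (0.0, 0.0, 0.0, 1.0))
print(len(msg), "bytes on the wire")
```

    A message of this size is easily sent over a degraded link, which is the point the abstract makes about compressing the operator's intent into the template type and pose rather than streaming raw sensor data or joint commands.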