1,151 research outputs found

    Learning reactive robot behavior for autonomous valve turning

    No full text

    Probabilistic Hybrid Action Models for Predicting Concurrent Percept-driven Robot Behavior

    Full text link
    This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic causal model for predicting the behavior generated by modern percept-driven robot plans. PHAMs represent aspects of robot behavior that cannot be represented by most action models used in AI planning: the temporal structure of continuous control processes, their non-deterministic effects, several modes of their interference, and the achievement of triggering conditions in closed-loop robot plans. The main contributions of this article are: (1) PHAMs, a model of concurrent percept-driven behavior, its formalization, and proofs that the model generates probably, qualitatively accurate predictions; and (2) a resource-efficient inference method for PHAMs based on sampling projections from probabilistic action models and state descriptions. We show how PHAMs can be applied to planning the course of action of an autonomous robot office courier based on analytical and experimental results.
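The sampling-based projection idea described above can be illustrated with a minimal Monte Carlo sketch: each action has probabilistic outcomes, and plan success is estimated by repeatedly sampling a projection of the whole plan. The action outcomes and probabilities below are purely illustrative, not taken from the PHAM formalization.

```python
import random

def sample_outcome(outcomes):
    """Draw one effect according to its probability (outcomes sum to 1)."""
    r = random.random()
    cumulative = 0.0
    for effect, p in outcomes:
        cumulative += p
        if r < cumulative:
            return effect
    return outcomes[-1][0]

def project_plan(plan, n_samples=1000):
    """Estimate the probability that every action in the plan succeeds."""
    successes = 0
    for _ in range(n_samples):
        if all(sample_outcome(action) == "ok" for action in plan):
            successes += 1
    return successes / n_samples

# Hypothetical two-step courier plan: navigate, then deliver.
plan = [
    [("ok", 0.95), ("fail", 0.05)],
    [("ok", 0.90), ("fail", 0.10)],
]
random.seed(0)
print(round(project_plan(plan), 2))  # roughly 0.85 (~ 0.95 * 0.90)
```

Sampling trades exactness for tractability: the estimate converges to the true success probability as the number of projections grows, which is the resource-efficiency argument the abstract makes.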

    A modular approach for remote operation of humanoid robots in search and rescue scenarios

    Get PDF
    In the present work we have designed and implemented a modular, robust and user-friendly Pilot Interface meant to control humanoid robots in rescue scenarios during dangerous missions. We follow the common approach where the robot is semi-autonomous and is remotely controlled by a human operator. In our implementation, YARP is used both as a communication channel for low-level hardware components and as an interconnecting framework between control modules. The interface features the capability to receive the status of these modules continuously and to request actions when required. In addition, ROS is used to retrieve data from different types of sensors and to display relevant information about the robot status such as joint positions, velocities and torques, force/torque measurements and inertial data. Furthermore, the operator is immersed into a 3D reconstruction of the environment and is enabled to manipulate 3D virtual objects. The Pilot Interface allows the operator to control the robot at three different levels. The high-level control deals with human-like actions which involve the whole robot’s actuation and perception. For instance, we successfully teleoperated IIT’s COmpliant huMANoid (COMAN) platform to execute complex navigation tasks through the composition of elementary walking commands (e.g. [walk_forward, 1m]). The mid-level control generates tasks in Cartesian space, based on the position and orientation of objects of interest (e.g. a valve or a door handle) w.r.t. a reference frame on the robot. The low-level control operates in joint space and is meant as a last-resort tool to perform fine adjustments (e.g. release a trapped limb). Finally, our Pilot Interface is adaptable to different tasks, strategies and pilot needs, thanks to a modular architecture which enables adding and removing single front-end components (e.g. GUI widgets) as well as back-end control modules on the fly.
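The three control levels described above amount to a command-dispatch pattern: the same operator interface routes whole-body actions, Cartesian-space tasks, and joint-space adjustments to different back-end modules. A minimal sketch of such a router is shown below; the command structure and module names are hypothetical, not the actual YARP/ROS interfaces of the Pilot Interface.

```python
from dataclasses import dataclass

@dataclass
class Command:
    level: str   # "high" (whole-body), "mid" (Cartesian), "low" (joint)
    name: str
    args: dict

def dispatch(cmd: Command) -> str:
    """Route an operator command to the matching (stubbed) control module."""
    if cmd.level == "high":
        # Whole-body actions composed of elementary motions, e.g. walking.
        return f"whole-body: {cmd.name}({cmd.args})"
    elif cmd.level == "mid":
        # Cartesian task on a target frame such as a valve or door handle.
        return f"cartesian: {cmd.name} on frame {cmd.args['frame']}"
    elif cmd.level == "low":
        # Joint-space fine adjustment, the last-resort mode.
        return f"joint: {cmd.args['joint']} += {cmd.args['delta']}"
    raise ValueError(f"unknown control level {cmd.level!r}")

print(dispatch(Command("high", "walk_forward", {"distance_m": 1.0})))
```

Keeping each level behind one dispatch point is what lets front-end widgets and back-end modules be added or removed independently, as the abstract's modular architecture suggests.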

    Versatile Multi-Contact Planning and Control for Legged Loco-Manipulation

    Full text link
    Loco-manipulation planning skills are pivotal for expanding the utility of robots in everyday environments. These skills can be assessed based on a system's ability to coordinate complex holistic movements and multiple contact interactions when solving different tasks. However, existing approaches have only been able to shape such behaviors with hand-crafted state machines, densely engineered rewards, or pre-recorded expert demonstrations. Here, we propose a minimally guided framework that automatically discovers whole-body trajectories jointly with contact schedules for solving general loco-manipulation tasks in pre-modeled environments. The key insight is that multi-modal problems of this nature can be formulated and treated within the context of integrated Task and Motion Planning (TAMP). An effective bilevel search strategy is achieved by incorporating domain-specific rules and adequately combining the strengths of different planning techniques: trajectory optimization and informed graph search coupled with sampling-based planning. We showcase emergent behaviors for a quadrupedal mobile manipulator exploiting both prehensile and non-prehensile interactions to perform real-world tasks such as opening/closing heavy dishwashers and traversing spring-loaded doors. These behaviors are also deployed on the real system using a two-layer whole-body tracking controller.
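The bilevel search the abstract describes can be sketched as an outer graph search over discrete contact modes whose edge costs come from an inner continuous solver. In the toy version below the inner trajectory optimization is stubbed with a cost table; the mode names and costs are illustrative assumptions, not the paper's actual formulation.

```python
import heapq

def inner_trajopt(mode_from, mode_to):
    """Stand-in for trajectory optimization between two contact modes.

    Returns a motion cost, or None if the transition is infeasible.
    """
    costs = {("stand", "grasp"): 2.0, ("grasp", "pull"): 3.0,
             ("stand", "push"): 4.0, ("push", "pull"): 2.5}
    return costs.get((mode_from, mode_to))

def plan_modes(start, goal, modes):
    """Outer search (uniform-cost) over the contact-mode graph."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, mode, path = heapq.heappop(frontier)
        if mode == goal:
            return path, cost
        if mode in visited:
            continue
        visited.add(mode)
        for nxt in modes:
            edge = inner_trajopt(mode, nxt)
            if edge is not None and nxt not in visited:
                heapq.heappush(frontier, (cost + edge, nxt, path + [nxt]))
    return None, float("inf")

path, cost = plan_modes("stand", "pull", ["stand", "grasp", "push", "pull"])
print(path, cost)  # ['stand', 'grasp', 'pull'] 5.0
```

The point of the bilevel split is that the discrete layer never has to reason about continuous dynamics directly; it only consumes feasibility and cost signals from the inner optimizer.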

    Scaled Autonomy for Networked Humanoids

    Get PDF
    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark competition for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work have been tested in the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on the low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.

    The AEROARMS Project: Aerial Robots with Advanced Manipulation Capabilities for Inspection and Maintenance

    Get PDF
    This article summarizes new aerial robotic manipulation technologies and methods—aerial robotic manipulators with dual arms and multidirectional thrusters—developed in the AEROARMS project for outdoor industrial inspection and maintenance (I&M). Our report deals with the control systems, including the control of the interaction forces and the compliance; the teleoperation, which uses passivity to tackle the tradeoff between stability and performance; the perception methods for localization, mapping, and inspection; and the planning methods, including a new control-aware approach for aerial manipulation. Finally, we describe a novel industrial platform with multidirectional thrusters and a new arm design to increase the robustness in industrial contact inspections. In addition, the lessons learned in applying the platform to outdoor aerial manipulation for I&M are pointed out.

    Learning to Open Doors with an Aerial Manipulator

    Full text link
    The field of aerial manipulation has seen rapid advances, transitioning from push-and-slide tasks to interaction with articulated objects. So far, when more complex actions are performed, the motion trajectory is usually handcrafted or the result of online optimization methods such as Model Predictive Control (MPC) or Model Predictive Path Integral (MPPI) control. However, these methods rely on heuristics or model simplifications to run efficiently on onboard hardware and produce results in acceptable amounts of time. Moreover, they can be sensitive to disturbances and to differences between the real environment and its simulated counterpart. In this work, we propose a Reinforcement Learning (RL) approach to learn motion behaviors for a manipulation task while producing policies that are robust to disturbances and modeling errors. Specifically, we train a policy to perform a door-opening task with an Omnidirectional Micro Aerial Vehicle (OMAV). The policy is trained in a physics simulator, and experiments are presented both in simulation and running onboard the real platform, investigating the simulation-to-real-world transfer. We compare our method against a state-of-the-art MPPI solution, showing a considerable increase in robustness and speed.
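A common ingredient in the kind of sim-to-real transfer the abstract investigates is domain randomization: physics parameters are perturbed per training episode so the learned policy does not overfit to one simulated model. The sketch below is a hedged illustration of that idea only; the parameter names and ranges are invented, not the paper's training setup.

```python
import random

def randomized_episode_params(rng):
    """Sample per-episode physics parameters from illustrative ranges."""
    return {
        "hinge_friction": rng.uniform(0.5, 1.5),  # scale on nominal value
        "door_mass_kg":   rng.uniform(8.0, 15.0),
        "wind_force_n":   rng.uniform(0.0, 2.0),  # external disturbance
    }

rng = random.Random(42)
params = randomized_episode_params(rng)
print(sorted(params))
```

Because each episode presents a slightly different door and disturbance profile, the resulting policy must succeed across the whole parameter distribution, which is one plausible source of the robustness advantage over a single-model MPPI controller.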

    Autonomous subsea intervention (SEAVENTION)

    Get PDF
    This paper presents the main results and latest developments in a 4-year project called autonomous subsea intervention (SEAVENTION). In the project we have developed new methods for autonomous inspection, maintenance and repair (IMR) in subsea oil and gas operations with Unmanned Underwater Vehicles (UUVs). The results are also relevant for offshore wind, aquaculture and other industries. We discuss the trends and status for UUV-based IMR in the oil and gas industry and provide an overview of the state of the art in intervention with UUVs. We also present a 3-level taxonomy for UUV autonomy: mission-level, task-level and vehicle-level. To achieve robust 6D underwater pose estimation of objects for UUV intervention, we have developed marker-less approaches with input from 2D and 3D cameras, as well as marker-based approaches with associated uncertainty. We have carried out experiments with varying turbidity to evaluate full 6D pose estimates in challenging conditions. We have also devised a sensor autocalibration method for UUV localization. For intervention, we have developed methods for autonomous underwater grasping and a novel vision-based distance estimator. For high-level task planning, we have evaluated two frameworks for automated planning and acting (AI planning). We have implemented AI planning for subsea inspection scenarios, which have been analyzed and formulated in collaboration with the industry partners. One of the frameworks, called T-REX, demonstrates a reactive behavior suited to the dynamic and potentially uncertain nature of subsea operations. We have also presented an architecture for comparing and choosing between mission plans when new mission goals are introduced.
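A vision-based distance estimator of the kind mentioned above typically starts from the pinhole camera model: an object of known physical size spanning `w` pixels at focal length `f` (in pixels) lies at distance `d = f * W / w`. The sketch below illustrates only this textbook relation; the calibration values are made up and are not SEAVENTION's estimator.

```python
def estimate_distance_m(focal_px, real_width_m, width_px):
    """Pinhole approximation: an object of width W at distance d
    projects to f * W / d pixels, so d = f * W / width_px."""
    return focal_px * real_width_m / width_px

# Illustrative values: 800 px focal length, 30 cm object, 120 px wide image patch.
d = estimate_distance_m(focal_px=800.0, real_width_m=0.30, width_px=120.0)
print(round(d, 2))  # 2.0
```

In turbid water the measured pixel width is noisy, which is one reason the abstract pairs pose estimates with an associated uncertainty rather than a point value.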