Motion primitive based random planning for loco-manipulation tasks
Several advanced control laws are available for complex robotic systems such as humanoid robots and mobile manipulators. These control laws are usually developed either for locomotion or for manipulation; the resulting motions are typically executed sequentially, and the full potential of the robotic platform is not exploited. In this work we consider the problem of loco-manipulation planning for a robot equipped with given parametrized control laws, known as primitives. Such primitives may not have been designed to be executed simultaneously, and composing them can easily cause instability. With the proposed approach, primitive combinations that guarantee stability of the system are obtained, resulting in complex whole-body behaviors. A formal definition of motion primitives is provided, and a random sampling approach on a manifold of limited dimension is investigated. Probabilistic completeness and asymptotic optimality are also proved. The proposed approach is tested both on a mobile manipulator and on the humanoid robot Walk-Man, performing loco-manipulation tasks.
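The sampling idea can be illustrated with a minimal sketch. The primitive names, parameter ranges and the stability predicate below are invented for the example (the paper's actual criterion is not reproduced here); the point is only the structure: draw points on a low-dimensional parameter manifold and keep the combinations that pass a stability test.

```python
import random

# Illustrative primitives with scalar parameter ranges (assumed values).
PRIMITIVES = {
    "step":  (-0.2, 0.2),   # step length [m]
    "reach": (0.0, 0.6),    # hand extension [m]
    "lean":  (-0.1, 0.1),   # torso lean [rad]
}

def sample_combination(rng):
    """Draw one point on the low-dimensional parameter manifold."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PRIMITIVES.items()}

def is_stable(params):
    """Toy stand-in for a real stability criterion (e.g. keeping the ZMP
    inside the support polygon): reject leaning and reaching too far."""
    return abs(params["lean"]) + 0.2 * params["reach"] < 0.15

def plan(n_samples=1000, seed=0):
    """Keep only the sampled primitive combinations that are stable."""
    rng = random.Random(seed)
    return [p for p in (sample_combination(rng) for _ in range(n_samples))
            if is_stable(p)]

stable = plan()
print(len(stable), "stable combinations out of 1000")
```

A real planner would additionally chain such samples into a tree or graph toward a goal; the rejection step is what guarantees that only stable compositions survive.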
Manipulation Framework for Compliant Humanoid COMAN: Application to a Valve Turning Task
With the purpose of achieving a desired interaction performance for our compliant humanoid robot (COMAN), in this paper we propose a semi-autonomous control framework and evaluate it experimentally in a valve-turning setup. The control structure consists of various modules and interfaces to identify the valve, position the robot in front of it, and perform the manipulation. The manipulation module implements four motion primitives (Reach, Grasp, Rotate and Disengage) and realizes the corresponding desired impedance profile for each phase of the task. To establish a stable and compliant contact between the valve and the robot's hands, while generating sufficient rotational torque to overcome the valve's friction, Rotate incorporates a novel dual-arm impedance control technique to plan and realize a task-appropriate impedance profile. The proposed control framework is first evaluated in simulation studies using Gazebo. Subsequent experimental results highlight the efficiency of the proposed impedance planning and control in generating the interaction forces required to accomplish the task.
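The phase-dependent impedance idea can be sketched as follows. The four phase names come from the abstract, but the stiffness/damping values and the linear blending between phases are illustrative assumptions, not the paper's actual profiles.

```python
# Assumed per-phase profiles: (stiffness [N/m], damping [Ns/m]).
PHASE_IMPEDANCE = {
    "reach":     (300.0, 20.0),   # moderately compliant while approaching
    "grasp":     (150.0, 15.0),   # soft, to absorb contact transients
    "rotate":    (800.0, 40.0),   # stiff, to transmit rotational torque
    "disengage": (200.0, 18.0),   # compliant again while releasing
}

def blend(phase_a, phase_b, alpha):
    """Linearly interpolate the impedance profile during a phase
    transition (alpha in [0, 1]) to avoid discontinuous force commands."""
    ka, da = PHASE_IMPEDANCE[phase_a]
    kb, db = PHASE_IMPEDANCE[phase_b]
    return (ka + alpha * (kb - ka), da + alpha * (db - da))

# Halfway through the grasp -> rotate transition:
k, d = blend("grasp", "rotate", 0.5)
print(k, d)   # 475.0 27.5
```

The design point is that stiffness is not a single tuned constant: each phase of the task gets the interaction behavior it needs, with smooth transitions in between.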
A modular approach for remote operation of humanoid robots in search and rescue scenarios
In the present work we have designed and implemented a modular, robust and user-friendly Pilot Interface
meant to control humanoid robots in rescue scenarios during dangerous missions. We follow the common
approach where the robot is semi-autonomous and it is remotely controlled by a human operator. In our
implementation, YARP is used both as a communication channel for low-level hardware components and as an
interconnecting framework between control modules. The interface features the capability to receive the status
of these modules continuously and request actions when required. In addition, ROS is used to retrieve data
from different types of sensors and to display relevant information of the robot status such as joint positions,
velocities and torques, force/torque measurements and inertial data. Furthermore the operator is immersed into
a 3D reconstruction of the environment and is enabled to manipulate 3D virtual objects. The Pilot Interface
allows the operator to control the robot at three different levels. The high-level control deals with human-like
actions which involve the whole robot’s actuation and perception. For instance, we successfully teleoperated
IIT’s COmpliant huMANoid (COMAN) platform to execute complex navigation tasks through the composition
of elementary walking commands (e.g.[walk_forward, 1m]). The mid-level control generates tasks in cartesian
space, based on the position and orientation of objects of interest (i.e. valve, door handle) w.r.t. a reference
frame on the robot. The low level control operates in joint space and is meant as a last resort tool to perform fine
adjustments (e.g. release a trapped limb). Finally, our Pilot Interface is adaptable to different tasks, strategies
and pilot’s needs, thanks to a modular architecture of the system which enables to add/remove single front-end
components (e.g. GUI widgets) as well as back-end control modules on the fly
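The three control levels can be sketched in a few lines. Everything here (command names, the nominal step length, the return shapes) is a hypothetical illustration of the layering, not the actual interface code; only the [walk_forward, 1m] example and the three-level split come from the text above.

```python
STEP = 0.25  # assumed nominal step length [m]

def high_level(command, arg):
    """Decompose a human-like action into elementary walking commands."""
    if command == "walk_forward":
        n_steps = round(arg / STEP)
        return [("step_forward", STEP)] * n_steps
    raise ValueError(f"unknown high-level command: {command}")

def mid_level(target_pose):
    """Produce a Cartesian task from an object pose expressed w.r.t. a
    reference frame on the robot."""
    return {"frame": "base_link", "goal": target_pose}

def low_level(joint, delta):
    """Fine joint-space adjustment, e.g. to release a trapped limb."""
    return {"joint": joint, "delta_rad": delta}

print(high_level("walk_forward", 1.0))   # four elementary step commands
```

The layering matters operationally: the pilot normally stays at the high level, and only drops to Cartesian or joint space when a task demands finer control.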
Yarp Based Plugins for Gazebo Simulator
This paper presents a set of plugins for the Gazebo simulator that enable interoperability between a robot controlled through the YARP framework and Gazebo itself. Gazebo is an open-source simulator that can handle different dynamics engines (ODE, DART, Bullet, Simbody), is backed by the Open Source Robotics Foundation (OSRF) and is supported by a very large community. Since our plugins conform to the YARP layer used on the real robot, applications written for our robots, COMAN and iCub, can be run on the simulator with no changes. Our plugins have two main components: a YARP interface with the same API as the real robot interface, and a Gazebo plugin which handles simulated joints, encoders, IMUs, force/torque sensors and synchronization. The robot model is provided to the simulator by means of an SDF file, which describes all the geometric, dynamic and visual characteristics of the robot. Once the SDF is read by Gazebo, our plugins are loaded and associated with the simulated robot model and the simulated world. Different modules for COMAN and iCub have been developed using Gazebo and our plugins as a testbed: joint impedance control plus gravity compensation, dual-arm Cartesian control for manipulation tasks, dynamic walking, etc. This work was developed as a joint effort between three European projects, "WALKMAN", "CoDyCo" and "SoftHands", aimed at implementing a common simulation platform to develop and test algorithms for our robotic platforms. The work is available as open source to all researchers in the YARP community (https://github.com/robotology/gazebo_yarp_plugins).
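The key design point, a simulated back-end exposing the same API as the real robot so that application code runs unchanged, can be sketched generically. The class and method names below are invented for illustration; they are not the actual YARP device API.

```python
from abc import ABC, abstractmethod

class MotorInterface(ABC):
    """Common API: application code is written once against this."""
    @abstractmethod
    def set_position(self, joint: int, value: float) -> None: ...
    @abstractmethod
    def get_encoder(self, joint: int) -> float: ...

class SimulatedMotors(MotorInterface):
    """Backed by the simulator's joint handles (as a Gazebo plugin would
    be); a RealMotors class would talk to the hardware instead."""
    def __init__(self, n_joints: int):
        self._q = [0.0] * n_joints
    def set_position(self, joint, value):
        self._q[joint] = value   # in Gazebo, the physics engine tracks this
    def get_encoder(self, joint):
        return self._q[joint]

def controller(motors: MotorInterface) -> float:
    """Any controller sees only the interface, never the back-end."""
    motors.set_position(0, 0.5)
    return motors.get_encoder(0)

print(controller(SimulatedMotors(2)))   # 0.5
```

Swapping `SimulatedMotors` for a hardware-backed implementation changes nothing in `controller`, which is exactly the property that lets the same COMAN and iCub modules run on the simulator with no changes.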
Distributed Planning for Legged and Mobile Robots: From Single Footsteps to Distributed Coordination with a Time-Expanded Approach
This thesis focuses on a generic distributed coordination framework for heterogeneous robots in two types of scenarios: smart factories and search and rescue situations.
First, the software architecture developed for the Walk-Man robot and used during the DARPA Robotics Challenge is described. One part of this software is a novel footstep planner capable of anytime planning and of providing statically stable footsteps even on rough and irregular terrain.
The footstep planner accepts a desired position or direction of movement, making it possible to command a legged robot with the same controls as a wheeled robot.
Such controls are provided by a novel distributed coordination planner that uses a time-space discretization of the environment and computes deadlock- and collision-free trajectories for all the robots in communication range.
Finally, the distributed planner is extended with a random sampler for computing the shortest path in time-space without the limits of the discretization approach, i.e. its memory and computation costs.
The random-sampling planner is also tested with ground and flying robots in the same experiments, using both 2D and 3D state spaces.
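A time-space (time-expanded) search can be illustrated with a toy version: the robot plans over (cell, time) states on a grid, treating cells reserved by other robots at specific time steps as obstacles. The grid, move set and reservation table below are invented for the example; the thesis's planner is distributed and far more general.

```python
from collections import deque

MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait or 4-neighbour step

def plan(start, goal, reserved, size=5, horizon=20):
    """BFS over the time-expanded graph; `reserved` is a set of (cell, t)
    pairs occupied by other robots. Returns the cell sequence over time."""
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        cell, t, path = frontier.popleft()
        if cell == goal:
            return path           # BFS by layers => earliest arrival time
        if t == horizon:
            continue
        for dx, dy in MOVES:
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            if (nxt, t + 1) in reserved or (nxt, t + 1) in seen:
                continue
            seen.add((nxt, t + 1))
            frontier.append((nxt, t + 1, path + [nxt]))
    return None

# Another robot occupies cell (1, 0) at t = 1, forcing a wait or a detour.
path = plan((0, 0), (2, 0), reserved={((1, 0), 1)})
print(path)
```

Because time is a search dimension, "wait here one step" is an ordinary move, which is what allows deadlock-free coordination that a purely spatial planner cannot express.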
Multi-agent collaborative protocol on Time Expanded Networks: a new approach to mutual graph resource allocation for traffic management
In this thesis a new approach for traffic management in structured environments with collaborative agents is proposed.
The approach is based on a graph model of the environment that takes time into account in addition to space, so that agents can make accurate predictions of their future routes and thereby reduce reliance on collision-avoidance policies.
To demonstrate the routing algorithm, a new simulator was created.
The simulator aims to be a generic framework for the implementation and testing of multi-agent collaborative algorithms.
A time-expanded-network-based algorithm for safe and efficient distributed multi-agent coordination
On the Problem of Moving Objects With Autonomous Robots: A Unifying High-Level Planning Approach
Moving objects with autonomous robots is a broad topic that includes single-arm pick-and-place tasks, object regrasping, and object passing between two or more arms, either in the air or using support surfaces such as tables. Each task has been studied extensively, and many planning solutions are already present in the literature. In this letter, we present a planning scheme which, based on the use of pre-defined elementary manipulation skills, aims to unify solutions that are usually obtained by means of different planning strategies rooted in hard-coded behaviors. Both robotic manipulators and fixed support surfaces in the environment are treated as end-effectors, of movable and non-movable type, respectively. The robot's task can thus be broken down into elementary building blocks, namely end-effector manipulation skills, which are then planned at the kinematic level. Feasibility is ensured by propagating unforeseen low-level failures to the higher level and by synthesizing different behaviors. The validity of the proposed solution is shown via experiments on a bimanual robot setup and in simulations involving a more complex setup similar to an assembly line.
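The unifying abstraction can be sketched as a search over which end-effector currently holds the object. The end-effector names and the hand-over table below are invented for the example; in the real system each transfer would additionally be validated by a kinematic planner.

```python
from collections import deque

# Grippers and support surfaces are both "end-effectors" (assumed names).
END_EFFECTORS = {
    "left_arm":  {"movable": True},
    "right_arm": {"movable": True},
    "table":     {"movable": False},   # holds objects passively
}

# Which direct hand-overs are feasible (illustrative; a lower-level planner
# would check reachability for each one).
TRANSFERS = {
    "table":     ["left_arm", "right_arm"],
    "left_arm":  ["table", "right_arm"],
    "right_arm": ["table", "left_arm"],
}

def plan_transfers(start, goal):
    """BFS over holder states: the sequence of end-effectors that hold the
    object, from its initial support to the goal end-effector."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in TRANSFERS[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan_transfers("table", "right_arm"))
```

Treating the table as just another (non-movable) end-effector is what lets pick-and-place, regrasping and dual-arm hand-overs fall out of one search instead of three hard-coded behaviors.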
Multi-object handling for robotic manufacturing
The purpose of this work is to move a step toward the automation of industrial plants through the full exploitation of autonomous robots. A planning algorithm is proposed to move different objects into desired configurations with heterogeneous robots such as manipulators, mobile robots and conveyor belts. The proposed approach allows different objects to be handled by different robots simultaneously and efficiently, while avoiding collisions with the environment and between robots. In particular, the integrated system will be capable of planning paths for a set of objects from various starting points in the environment (e.g. shelves) to their respective final destinations. The proposed approach unifies the active (e.g., grasping by a hand) and passive (e.g., holding by a table) steps involved in moving objects through the environment by treating both as end-effectors with constraints and capabilities. Time-varying graphs will be introduced to model the simultaneous handling of objects by different end-effectors, and optimal exploration of such graphs will be used to determine paths for each object under time constraints. Results will be validated through simulations.