Markerless visual servoing on unknown objects for humanoid robot platforms
To reach precisely for an object with a humanoid robot, good knowledge of both the end-effector pose and the object pose and shape is of central importance. In this work we propose a framework for markerless visual servoing on unknown objects, divided into four main parts: I) a least-squares minimization problem is formulated to find the volume of the object graspable by the robot's hand using its stereo vision; II) a recursive Bayesian filtering technique, based on Sequential Monte Carlo (SMC) filtering, estimates the 6D pose (position and orientation) of the robot's end-effector without the use of markers; III) a nonlinear constrained optimization problem is formulated to compute the desired graspable pose about the object; IV) an image-based visual servo control commands the robot's end-effector toward the desired pose. We demonstrate the effectiveness and robustness of our approach with extensive experiments on the iCub humanoid robot platform, achieving real-time computation, smooth trajectories and sub-pixel precision.
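Part IV corresponds to a classical image-based visual servoing scheme. As a minimal illustrative sketch (not the authors' implementation), the standard point-feature control law v = -lambda * L^+ (s - s*) can be written as follows; the feature coordinates, depths and gain are hypothetical toy values:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one point feature.
    (x, y): normalized image coordinates, Z: point depth estimate."""
    return np.array([
        [-1/Z,    0, x/Z, x*y,       -(1 + x**2),  y],
        [   0, -1/Z, y/Z, 1 + y**2,  -x*y,        -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity command v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: drive four image points toward their desired locations.
s      = [(0.10, 0.05), (-0.08, 0.04), (0.07, -0.09), (-0.05, -0.06)]
s_star = [(0.00, 0.00), (-0.10, 0.00), (0.10, -0.10), ( 0.00, -0.10)]
v = ibvs_velocity(s, s_star, depths=[0.5] * 4)
print(v)  # 6D camera twist: (vx, vy, vz, wx, wy, wz)
```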
A Continuous Grasp Representation for the Imitation Learning of Grasps on Humanoid Robots
Models and methods are presented which enable a humanoid robot to learn reusable, adaptive grasping skills. Mechanisms and principles in human grasp behavior are studied, and the findings are used to develop a grasp representation capable of retaining specific motion characteristics and of adapting to different objects and tasks. Based on this representation, a framework is proposed which enables the robot to observe human grasping, learn grasp representations, and infer executable grasping actions.
Articulated Object Tracking from Visual Sensory Data for Robotic Manipulation
For a robot to manipulate an articulated object, it needs to know the object's state (i.e. its pose): where it is and in which configuration. The result of the object's state estimation is provided as feedback to the controller to compute appropriate robot motion and achieve the desired manipulation outcome. This is the main topic of this thesis, where articulated object state estimation is solved using visual feedback. Vision-based servoing is implemented in a Quadratic Programming task-space control framework to enable a humanoid robot to perform articulated object manipulation. We thoroughly developed our methodology for vision-based articulated object state estimation on these bases, and we demonstrate its efficiency by assessing it in several real experiments involving the HRP-4 humanoid robot. We also propose to combine machine learning and edge extraction techniques to achieve markerless, real-time and robust visual feedback for articulated object manipulation.
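As a rough sketch of how a visual task can be tracked inside a Quadratic Programming control framework (a generic bounded least-squares formulation, not the thesis's actual controller; the Jacobian, gain and limits below are toy values):

```python
import numpy as np
from scipy.optimize import lsq_linear

def qp_visual_servo_step(J, s, s_star, qdot_limit, gain=1.0):
    """One control step: track the visual task error under joint
    velocity bounds, posed as a bounded least-squares (QP) problem:
        min ||J qdot - v*||^2   s.t.  |qdot_i| <= qdot_limit
    J maps joint velocities to feature velocities."""
    v_star = -gain * (s - s_star)           # desired feature velocity
    sol = lsq_linear(J, v_star, bounds=(-qdot_limit, qdot_limit))
    return sol.x                            # joint velocity command

# Toy example: 3 visual features, 4 joints, 0.5 rad/s velocity limits.
J = np.random.default_rng(0).normal(size=(3, 4))
qdot = qp_visual_servo_step(J, s=np.array([0.2, -0.1, 0.05]),
                            s_star=np.zeros(3), qdot_limit=0.5)
```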
Towards Advanced Robotic Manipulations for Nuclear Decommissioning
Despite enormous remote handling requirements, remarkably few robots are used by the nuclear industry. Most remote handling tasks are still performed manually, using conventional mechanical master-slave devices. The few robotic manipulators deployed are directly tele-operated in rudimentary ways, with almost no autonomy or even pre-programmed motion. In addition, the majority of these robots are under-sensored (i.e. they lack proprioception), which prevents their use for automated tasks. In this context, this chapter primarily discusses human operator performance in accomplishing heavy-duty remote handling tasks in hazardous environments such as nuclear decommissioning. Multiple factors are evaluated to analyse the human operators' performance and workload, and direct human tele-operation is compared against human-supervised semi-autonomous control exploiting computer vision. Secondarily, a vision-guided solution for enabling advanced control and automating under-sensored robots is presented. To maintain coherence with real nuclear scenarios, the experiments are conducted in a lab environment and the results are discussed.
A framework for compliant physical interaction: the grasp meets the task
Although the grasp-task interplay in our daily life is unquestionable, very little research has addressed this problem in robotics. In order to fill the gap between the grasp and the task, we adopt the most successful approaches to grasp and task specification and extend them with additional elements that allow a grasp-task link to be defined. We propose a global sensor-based framework for the specification and robust control of physical interaction tasks, where the grasp and the task are jointly considered on the basis of the task frame formalism and the knowledge-based approach to grasping. A physical interaction task planner is also presented, based on the new concept of task-oriented hand pre-shapes. The planner focuses on the manipulation of articulated parts in home environments and is able to automatically specify all the elements of a physical interaction task required by the proposed framework. Finally, several applications are described, showing the versatility of the proposed approach and its suitability for the fast implementation of robust physical interaction tasks in very different robotic systems.
Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots
For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. Among the available sensors, in fact, the richness of visual data makes it possible to build a complete description of the environment, collecting both geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The huge amount of collected data allows considering either methods that exploit the totality of the data (dense approaches) or a reduced set obtained from feature extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for the control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. Sparse visual data, on the other hand, are extracted in the form of geometric primitives in order to implement a visual servoing control scheme satisfying proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered.
Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, having reliable data about unmeasurable quantities is both of great importance and critical. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and use under ideal conditions for testing and validation.
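For illustration, a minimal linear Kalman observer of the kind mentioned above might look as follows; the state, the projection matrix H and the noise levels are hypothetical stand-ins for the actual needle-pose model:

```python
import numpy as np

class LinearKalmanObserver:
    """Generic linear Kalman observer (a simplified stand-in for the
    needle-pose estimator described above): random-walk state model
    with noisy measurements y = H x + v."""
    def __init__(self, x0, P0, Q, R, H):
        self.x, self.P, self.Q, self.R, self.H = x0, P0, Q, R, H

    def predict(self):
        # Random-walk process model: x_{k+1} = x_k + w,  w ~ N(0, Q)
        self.P = self.P + self.Q

    def update(self, y):
        H, R = self.H, self.R
        S = H @ self.P @ H.T + R             # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ (y - H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Estimate a static 3D tool-tip position from noisy 2D projections
# (hypothetical calibrated projection that drops the depth axis).
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
kf = LinearKalmanObserver(x0=np.zeros(3), P0=np.eye(3),
                          Q=1e-4 * np.eye(3), R=1e-2 * np.eye(2), H=H)
for y in np.random.default_rng(1).normal([0.3, -0.2], 0.1, size=(50, 2)):
    kf.predict(); kf.update(y)
```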
The Kalman-based observers mentioned above are classical passive estimators, whose inputs are theoretically arbitrary: there is no mechanism to actively adapt the input trajectories in order to optimize specific requirements on the estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
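A greedy version of such an active sensing step, under the simplifying assumption that each candidate camera motion is summarized by a measurement Jacobian, could be sketched as:

```python
import numpy as np

def pick_next_motion(P, candidate_jacobians, R_inv):
    """Greedy active-sensing step (illustrative, not the thesis
    algorithm): among candidate camera motions, each described by a
    measurement Jacobian H, pick the one whose posterior covariance
    has the smallest largest eigenvalue (min-max uncertainty)."""
    best, best_score = None, np.inf
    for i, H in enumerate(candidate_jacobians):
        info = np.linalg.inv(P) + H.T @ R_inv @ H  # information update
        score = np.linalg.eigvalsh(np.linalg.inv(info))[-1]
        if score < best_score:
            best, best_score = i, score
    return best
```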
Exploitation of time-of-flight (ToF) cameras
This technical report reviews the state of the art in the field of ToF cameras: their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor a larger ambiguity-free range than other range-map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, showing particularly great potential in object modeling and recognition.
Vision-based trajectory control of unsensored robots to increase functionality, without robot hardware modification
In nuclear decommissioning operations, very rugged remote manipulators are used which lack proprioceptive joint angle sensors. Hence these machines are simply tele-operated: a human operator controls each joint of the robot individually using a teach pendant or a set of switches. Moreover, decommissioning tasks often involve forceful interactions between the environment and powerful tools at the robot's end-effector. Such interactions can result in complex dynamics and large torques at the robot's joints, and can also lead to erratic movements of a mobile manipulator's base frame with respect to the task space. This Thesis seeks to address these problems by, firstly, showing how the configuration of such robots can be tracked in real time by a vision system and fed back into a trajectory control scheme. Secondly, the Thesis investigates the dynamics of robot-environment contacts and proposes several control schemes for detecting, coping with, and also exploiting such contacts. Several contributions are advanced in this Thesis. Specifically: a control framework is presented which exploits the constraints arising at contact points to reduce the torques commanded to perform tasks; methods are advanced to estimate the constraints arising from contacts in a number of situations, using only kinematic quantities; a framework is proposed to estimate the configuration of a manipulator using a single monocular camera; and finally, a general control framework is described which uses all of the above contributions to servo a manipulator. The results of a number of experiments are presented which demonstrate the feasibility of the proposed methods.
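As a simple illustration of the first contribution, vision-based configuration tracking can close a trajectory control loop on an encoder-less robot; the gains, limits and values below are hypothetical:

```python
import numpy as np

def joint_velocity_command(q_vision, q_ref, gain=2.0, limit=0.3):
    """Closed-loop trajectory control for an unsensored manipulator
    (illustrative sketch): joint angles estimated by an external
    vision system replace the missing encoder feedback."""
    qdot = gain * (q_ref - q_vision)      # proportional tracking term
    return np.clip(qdot, -limit, limit)   # respect actuator limits

# Each control tick: read the vision-based configuration estimate,
# compare with the reference trajectory sample, command velocities.
q_est = np.array([0.42, -1.10, 0.95])     # from the vision tracker
q_ref = np.array([0.50, -1.00, 1.00])     # trajectory setpoint
u = joint_velocity_command(q_est, q_ref)
```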
Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation
Interacting with the environment using hands is one of the distinctive abilities of humans with respect to other species. This aptitude is reflected in the crucial role played by object manipulation in the world that we have shaped for ourselves. With a view to bringing robots out of factories to support people in everyday life, the ability to manipulate objects autonomously and in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterized by great complexity, especially regarding the processing of sensory information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, on proprioception for awareness of the relative position of their own body in space, and on the sense of touch for local information when physical interaction with objects happens. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve comparable manipulation performance. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, and one that has been drawing increasing research attention in recent years.
In this Thesis, we propose solutions to some key components of autonomous manipulation, focusing in particular on the perception problem and testing the developed approaches on the iCub humanoid robotic platform. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modeling and grasping pipeline we designed, based on superquadric functions, meets this need: it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object.
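For reference, the superquadric inside-outside function underlying such a pipeline, together with a simplified least-squares fit to a point cloud, can be sketched as follows (the pose estimation and the pipeline's actual cost function are omitted; the initial guess and bounds are hypothetical):

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_F(points, a1, a2, a3, e1, e2):
    """Inside-outside function of a superquadric in its canonical
    frame: F = 1 on the surface, < 1 inside, > 1 outside."""
    x, y, z = np.abs(points).T
    xy = (x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2 / e1)

def fit_superquadric(cloud):
    """Least-squares fit of the five shape parameters to a partial
    point cloud (pose alignment omitted for brevity)."""
    def residuals(p):
        a1, a2, a3, e1, e2 = p
        return superquadric_F(cloud, a1, a2, a3, e1, e2) - 1.0
    return least_squares(residuals,
                         x0=[0.05, 0.05, 0.05, 1.0, 1.0],
                         bounds=([1e-3] * 3 + [0.1, 0.1],
                                 [0.5] * 3 + [2.0, 2.0])).x
```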
Retrieving object information with touch sensors alone is a relevant skill that becomes crucial when vision is occluded, as happens for instance during physical interaction with the object. We addressed this problem by designing a novel tactile localization algorithm, named Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object surface.
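A generic particle-filter update over candidate object poses conveys the flavor of this approach (this is a simplified stand-in, not the Memory Unscented Particle Filter itself; surf_dist is a hypothetical callback returning the distance from a contact point to the object surface under a given pose):

```python
import numpy as np

def tactile_pf_update(particles, weights, contact, surf_dist, sigma=0.005):
    """One particle-filter update for tactile localization: particles
    (an array of candidate object poses) are reweighted by how close
    the new 3D contact point lies to the hypothesized surface."""
    d = np.array([surf_dist(p, contact) for p in particles])
    weights = weights * np.exp(-0.5 * (d / sigma) ** 2)
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```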
Another key point of autonomous manipulation we report on in this Thesis is bi-manual coordination: the execution of more advanced manipulation tasks may require the use and coordination of two arms. Tool use, for instance, often requires a proper in-hand object pose, which can be obtained via dual-arm re-grasping. In pick-and-place tasks, the initial and target positions of the object sometimes do not belong to the same arm's workspace, requiring one hand to lift the object and the other to place it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other.
The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This differs from what humans do, in that humans develop their manipulation skills by learning through experience and trial and error. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved successful in many robotics applications. For this reason, in this Thesis we also report on the six-month experience carried out at the Berkeley Artificial Intelligence Research laboratory with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation.