Unscented Bayesian Optimization for Safe Robot Grasping
We address the robot grasp optimization problem of unknown objects
considering uncertainty in the input space. Grasping unknown objects can be
achieved by using a trial and error exploration strategy. Bayesian optimization
is a sample efficient optimization algorithm that is especially suitable for
these setups, as it actively reduces the number of trials needed to learn about
the function being optimized. In fact, this active object exploration is the same
strategy that infants use to learn optimal grasps. One problem that arises while
learning grasping policies is that some configurations of grasp parameters may
be very sensitive to error in the relative pose between the object and robot
end-effector. We call these configurations unsafe because small errors during
grasp execution may turn good grasps into bad grasps. Therefore, to reduce the
risk of grasp failure, grasps should be planned in safe areas. We propose a new
algorithm, Unscented Bayesian optimization that is able to perform sample
efficient optimization while taking into consideration input noise to find safe
optima. The contribution of Unscented Bayesian optimization is twofold as if
provides a new decision process that drives exploration to safe regions and a
new selection procedure that chooses the optimal in terms of its safety without
extra analysis or computational cost. Both contributions are rooted on the
strong theory behind the unscented transformation, a popular nonlinear
approximation method. We show its advantages with respect to the classical
Bayesian optimization both in synthetic problems and in realistic robot grasp
simulations. The results highlight that our method achieves optimal and robust
grasping policies after a few trials, while the selected grasps remain in safe
regions.
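The core idea, propagating input noise through the objective with the unscented transform, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy objective, noise level, and sigma-point parameters below are all assumptions chosen to show a narrow (unsafe) optimum losing to a broad (safe) one once input noise is considered.

```python
import numpy as np

def unscented_expectation(f, mu, cov, alpha=1.0, kappa=2.0):
    """Approximate E[f(x)] for x ~ N(mu, cov) by propagating sigma points."""
    n = len(mu)
    lam = alpha ** 2 * (n + kappa) - n
    root = np.linalg.cholesky((n + lam) * cov)    # matrix square root of scaled cov
    points = ([mu] + [mu + root[:, i] for i in range(n)]
                   + [mu - root[:, i] for i in range(n)])
    weights = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    weights[0] = lam / (n + lam)
    return float(sum(w * f(p) for w, p in zip(weights, points)))

# Toy 1-D objective: a tall, narrow peak (unsafe optimum) at x=0 and a slightly
# lower but broad peak (safe optimum) at x=1.
def f(x):
    return float(np.exp(-200 * x[0] ** 2) + 0.9 * np.exp(-5 * (x[0] - 1) ** 2))

noise = np.array([[0.01]])                         # input noise, std = 0.1

print(f(np.zeros(1)) > f(np.ones(1)))              # noiseless: narrow peak wins
print(unscented_expectation(f, np.zeros(1), noise)
      < unscented_expectation(f, np.ones(1), noise))  # under noise: broad peak wins
```

Selecting the optimum by this noise-weighted score rather than the raw function value is what steers the search toward flat, error-tolerant regions.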
The Stability of Heavy Objects with Multiple Contacts
In both robot grasping and robot locomotion, we wish to hold objects stably in the presence of gravity. We present a derivation of second-order stability conditions for a supported heavy object, employing the tools of stratified Morse theory. We then apply these general results to the case of objects in the plane.
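A second-order stability condition of this kind says that the rest configuration must be a strict local minimum of gravitational potential energy. The snippet below is a classical planar illustration, not the stratified Morse theory machinery of the paper: a body with a circular base of radius r rocking without slipping, whose center-of-mass height at rest h determines stability (the rocking model and all parameter values are assumptions for illustration).

```python
import numpy as np

def com_height(theta, r, h):
    # Center-of-mass height of a planar body with a circular base (radius r)
    # rocking without slipping through angle theta; h is the COM height at rest.
    return r + (h - r) * np.cos(theta)

def is_stable(r, h, eps=1e-4):
    # Second-order condition: theta = 0 is a strict local minimum of the
    # potential energy, i.e. the finite-difference estimate of U''(0) is > 0.
    d2 = (com_height(eps, r, h) - 2 * com_height(0.0, r, h)
          + com_height(-eps, r, h)) / eps ** 2
    return d2 > 0

print(is_stable(r=0.5, h=0.2))   # squat body (h < r): stable
print(is_stable(r=0.5, h=0.9))   # tall body (h > r): rocks over, unstable
```

The first-order (equilibrium) condition alone cannot distinguish these two cases; only the second derivative of the potential does, which is what motivates second-order conditions for multi-contact support.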
Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them
with cognitive abilities in order to allow ordinary users to program them
easily and intuitively. One way of such programming is teaching work tasks by
interactive demonstration. To make this effective and convenient for the user,
the machine must be capable of establishing a common focus of attention and be
able to use and integrate spoken instructions, visual perceptions, and
non-verbal cues like gestural commands. We report progress in building a
hybrid architecture that combines statistical methods, neural networks, and
finite state machines into an integrated system for instructing grasping tasks
by man-machine interaction. The system combines the GRAVIS-robot for visual
attention and gestural instruction with an intelligent interface for speech
recognition and linguistic interpretation, and a modality fusion module to
allow multi-modal, task-oriented man-machine communication with respect to
dexterous robot manipulation of objects. Comment: 7 pages, 8 figures
Structure learning of graphical models for task-oriented robot grasping
In the collective imagination, a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring, or onerous for humans. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing, and space industries.
A robot, then, is a system that combines sensors, control systems, manipulators, power supplies, and software, all working together to perform a task.
The development and use of such systems is an active area of research, and one of the main problems is developing skills for interacting with the surrounding environment, including the ability to grasp objects. To perform this task, the robot needs to sense the environment and acquire information about the object, i.e., the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans select the best grasp from a vast repertoire considering not only the physical attributes of the object but also the effect they want to obtain.
This is why, in our case, the study of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors.
The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This representation has several advantages: it takes into account the uncertainty of the real world, allowing it to deal with sensor noise; it encodes a notion of causality; and it provides a unified network for learning.
Since the network is currently designed by hand from human expert knowledge, it is very interesting to implement an automated method to learn its structure: as more tasks and object features are introduced in the future, a complex network design based only on human expert knowledge can become unreliable.
Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
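Score-based structure learning of the kind described above can be sketched in a few lines: score each candidate parent set of a node against the data and keep the edge only if it improves the score. The example below is a minimal illustration with the BIC score over binary variables; the variable names ("task", "feature"), the synthetic data, and the restriction to a single candidate edge are all assumptions, not the thesis's actual network or dataset.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Synthetic data: a binary "task" variable and a "feature" that copies it with
# 10% noise, standing in for the sensor/task pairs described above.
task = rng.integers(0, 2, size=500)
feature = (task + (rng.random(500) < 0.1)) % 2

def bic_node(child, parents, data, n_states=2):
    """BIC contribution of one discrete node given a candidate parent set."""
    n = len(data[child])
    parent_cols = [data[p] for p in parents]
    ll = 0.0
    for cfg in product(range(n_states), repeat=len(parents)):
        mask = np.ones(n, dtype=bool)
        for col, v in zip(parent_cols, cfg):
            mask &= (col == v)
        counts = np.bincount(data[child][mask], minlength=n_states)
        total = counts.sum()
        if total == 0:
            continue
        probs = counts / total
        ll += sum(c * np.log(p) for c, p in zip(counts, probs) if c > 0)
    k = (n_states - 1) * n_states ** len(parents)   # free parameters
    return ll - 0.5 * k * np.log(n)                 # log-likelihood - penalty

data = {"task": task, "feature": feature}
with_edge = bic_node("task", [], data) + bic_node("feature", ["task"], data)
no_edge = bic_node("task", [], data) + bic_node("feature", [], data)

# A score-based search keeps the edge task -> feature, recovering the
# dependency from data alone, without the human expert.
print(with_edge > no_edge)
```

The BIC penalty term is what keeps the learned structure sparse: an edge is added only when the likelihood gain outweighs the cost of the extra parameters, which addresses the overfitting weakness mentioned above for purely likelihood-driven search.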
CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration
Robot grasping is an actively studied area in robotics, mainly focusing on
the quality of generated grasps for object manipulation. However, despite
advancements, these methods do not consider the human-robot collaboration
settings where robots and humans will have to grasp the same objects
concurrently. Therefore, generating robot grasps compatible with human
preferences of simultaneously holding an object becomes necessary to ensure a
safe and natural collaboration experience. In this paper, we propose a novel,
deep neural network-based method called CoGrasp that generates human-aware
robot grasps by contextualizing human preference models of object grasping into
the robot grasp selection process. We validate our approach against existing
state-of-the-art robot grasping methods through simulated and real-robot
experiments and user studies. In real-robot experiments, our method achieves
about an 88% success rate in producing stable grasps that also allow humans to
interact and grasp objects simultaneously in a socially compliant manner.
Furthermore, our user study with 10 independent participants indicated that our
approach enables a safe, natural, and socially aware human-robot co-grasping
experience compared to a standard robot grasping technique.