Robotic execution of everyday tasks by means of external vision/force control
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated with an external camera (i.e., the robot head). Task-oriented grasping algorithms [1] are used to plan a suitable grasp on the object according to the task to be performed. A new vision/force coupling approach [2], based on external control, is used first to guide the robot hand towards the grasp position and then to perform the task while taking external forces into account. The coupling of these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with the object being manipulated, independently of camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
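As a rough illustration of the external-control idea, the sketch below aligns the hand with a grasp frame using a position-based visual servoing error and blends in a simple admittance term for external forces. Everything here, the function names, gains, and blending law, is an illustrative assumption rather than the authors' implementation; the key property it mirrors is that both poses live in the camera frame, so the control error does not depend on where the camera is placed.

```python
import numpy as np

def pose_error(T_hand, T_goal):
    """6-DoF error (translation + axis-angle rotation) between two 4x4 poses."""
    t_err = T_goal[:3, 3] - T_hand[:3, 3]
    R_err = T_goal[:3, :3] @ T_hand[:3, :3].T
    # Axis-angle extraction from the rotation matrix (assumes angle < pi).
    angle = np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        w_err = np.zeros(3)
    else:
        w_err = angle / (2 * np.sin(angle)) * np.array([
            R_err[2, 1] - R_err[1, 2],
            R_err[0, 2] - R_err[2, 0],
            R_err[1, 0] - R_err[0, 1],
        ])
    return np.concatenate([t_err, w_err])

def control_step(T_cam_hand, T_cam_obj, T_obj_grasp, wrench, K_v=0.5, K_f=0.01):
    """One external vision/force control step (hypothetical gains K_v, K_f).

    Both the hand pose and the goal pose are expressed in the camera frame,
    so their relative error is independent of where the camera is placed,
    which is what lets the camera move freely during execution.
    """
    T_cam_grasp = T_cam_obj @ T_obj_grasp      # desired hand pose in camera frame
    e = pose_error(T_cam_hand, T_cam_grasp)    # 6-DoF alignment error
    v_vision = K_v * e                         # visual servoing twist
    v_force = K_f * np.asarray(wrench)         # admittance: comply along measured forces
    return v_vision + v_force                  # commanded hand twist
```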
Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation
We develop an approach that benefits from large simulated datasets and takes
full advantage of the limited online data that is most relevant. We propose a
variant of Bayesian optimization that alternates between using informed and
uninformed kernels. With this Bernoulli Alternation Kernel we ensure that
discrepancies between simulation and reality do not hinder adapting robot
control policies online. The proposed approach is applied to a challenging
real-world problem of task-oriented grasping with novel objects. Our further
contribution is a neural network architecture and training pipeline that use
experience from grasping objects in simulation to learn grasp stability scores.
We learn task scores from a labeled dataset with a convolutional network, which
is used to construct an informed kernel for our variant of Bayesian
optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate the success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over the physical properties of objects.

Comment: To appear in the 2nd Conference on Robot Learning (CoRL) 2018
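A minimal sketch of the Bernoulli-alternation idea follows: each optimization iteration flips a coin and fits the Gaussian process with either an uninformed squared-exponential kernel or a kernel computed in a feature space phi learned from simulated grasps, so sim-to-real discrepancies in the informed kernel cannot permanently mislead the search. The GP here is deliberately bare-bones (no hyperparameter fitting), and phi, evaluate_grasp, and sample_candidates are hypothetical stand-ins rather than the paper's actual components.

```python
import numpy as np

def se_kernel(a, b, ell=1.0):
    """Standard (uninformed) squared-exponential kernel."""
    return np.exp(-0.5 * np.sum((a - b) ** 2) / ell ** 2)

def gp_predict(X, y, X_star, k, noise=1e-6):
    """Plain GP posterior mean/std under kernel k (no hyperparameter fitting)."""
    K = np.array([[k(p, q) for q in X] for p in X]) + noise * np.eye(len(X))
    K_s = np.array([[k(p, q) for q in X] for p in X_star])
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, np.asarray(y)))
    mu = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.array([k(p, p) for p in X_star]) - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def bak_optimize(evaluate_grasp, sample_candidates, phi, n_iters=30, p=0.5, seed=0):
    """BO loop that alternates informed/uninformed kernels via a Bernoulli draw."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_iters):
        cands = sample_candidates()
        if not X:
            x_next = cands[rng.integers(len(cands))]      # no data yet: explore
        else:
            if rng.random() < p:                          # Bernoulli alternation
                k = lambda a, b: se_kernel(phi(a), phi(b))  # informed (learned features)
            else:
                k = se_kernel                             # uninformed
            mu, sigma = gp_predict(X, y, cands, k)
            x_next = cands[np.argmax(mu + 2.0 * sigma)]   # UCB acquisition
        X.append(x_next)
        y.append(evaluate_grasp(x_next))                  # costly real-robot trial
    return X[int(np.argmax(y))]
```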
Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them with cognitive abilities that allow ordinary users to program them easily and intuitively. One way of achieving such programming is by teaching work tasks through interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal cues such as gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks through man-machine interaction. The system combines the GRAVIS robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module that allows multi-modal, task-oriented man-machine communication for dexterous robot manipulation of objects.

Comment: 7 pages, 8 figures
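To make the fusion idea concrete, here is a small sketch of a finite state machine that only triggers a grasp once speech and gesture jointly identify a target with enough confidence. The states, confidence scores, and fusion rule are invented for illustration and do not reflect the GRAVIS system's actual design.

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    AWAIT_TARGET = auto()
    GRASPING = auto()

@dataclass
class SpeechInput:
    verb: str            # e.g. "grasp"
    noun: str            # e.g. "cup"
    confidence: float

@dataclass
class GestureInput:
    pointed_object: str  # object id under the pointing direction
    confidence: float

def fuse(speech, gesture):
    """Resolve the referent by weighting both channels; agreement adds evidence."""
    if speech and gesture and speech.noun == gesture.pointed_object:
        return speech.noun, min(1.0, speech.confidence + gesture.confidence)
    if gesture and (not speech or gesture.confidence > speech.confidence):
        return gesture.pointed_object, gesture.confidence
    if speech:
        return speech.noun, speech.confidence
    return None, 0.0

def step(state, speech=None, gesture=None, threshold=0.7):
    """One FSM transition: a grasp fires only once the fused target is confident."""
    if state is State.IDLE and speech and speech.verb == "grasp":
        return State.AWAIT_TARGET, None
    if state is State.AWAIT_TARGET:
        target, conf = fuse(speech, gesture)
        if target and conf >= threshold:
            return State.GRASPING, target  # hand off to the grasping subsystem
    return state, None
```

For example, step(State.AWAIT_TARGET, SpeechInput("grasp", "cup", 0.5), GestureInput("cup", 0.4)) fuses two individually weak channels into a confident referent and transitions to GRASPING.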
Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
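A compact sketch of the arbitration idea, under assumed components: intent over candidate goals is inferred from how well the noisy user command aligns with each goal direction, and the autonomous command for the most likely goal is blended in proportionally, capped by an adjustable assistance level. Neither the softmax intent model nor the linear blending law is claimed to be the paper's exact formulation.

```python
import numpy as np

def infer_intent(user_cmd, goal_dirs):
    """Soft intent inference: a softmax over how well the (noisy,
    low-dimensional) user command aligns with each candidate goal."""
    scores = np.array([float(np.dot(user_cmd, g)) for g in goal_dirs])
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def arbitrate(user_cmd, auto_cmds, goal_dirs, max_assist=0.8):
    """Blend user and autonomous commands; assistance grows with confidence
    in the inferred goal, up to an adjustable cap (max_assist)."""
    p = infer_intent(user_cmd, goal_dirs)
    g = int(np.argmax(p))
    alpha = max_assist * p[g]      # confidence-scaled level of assistance
    return (1 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(auto_cmds[g])
```

The cap on alpha reflects the abstract's point about balancing the operator's feeling of control against task difficulty: even at full confidence, the user's input is never entirely overridden.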