Real-time Perception meets Reactive Motion Generation
We address the challenging problem of robotic grasping and manipulation in
the presence of uncertainty. This uncertainty is due to noisy sensing,
inaccurate models and hard-to-predict environment dynamics. We quantify the
importance of continuous, real-time perception and its tight integration with
reactive motion generation methods in dynamic manipulation scenarios. We
compare three different systems that are instantiations of the most common
architectures in the field: (i) a traditional sense-plan-act approach that is
still widely used, (ii) a myopic controller that only reacts to local
environment dynamics and (iii) a reactive planner that integrates feedback
control and motion optimization. All architectures rely on the same components
for real-time perception and reactive motion generation to allow a quantitative
evaluation. We extensively evaluate the systems on a real robotic platform in
four scenarios that exhibit either a challenging workspace geometry or a
dynamic environment. In 333 experiments, we quantify the robustness and
accuracy that result from integrating real-time feedback at different time
scales into a reactive motion generation system. We also report on the lessons
learned for system building.
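To make the architectural contrast concrete, here is a minimal Python sketch, not taken from the paper, of the structural difference between an open-loop sense-plan-act pipeline and a reactive loop that folds fresh perception into every control step. All functions are hypothetical stubs standing in for real perception, planning, and control components.

```python
# Illustrative sketch only: hypothetical stubs, not the authors' system.

def perceive():
    """Stub: return the latest estimated target pose."""
    return (0.5, 0.0, 0.2)

def plan(target):
    """Stub: return a sequence of waypoints toward the target."""
    return [target] * 10

def execute_step(waypoint):
    """Stub: command the robot one control step toward a waypoint."""
    print("moving toward", waypoint)

def sense_plan_act():
    # Perception happens once; the plan is executed open loop, so any
    # change in the scene after planning is never seen.
    target = perceive()
    for wp in plan(target):
        execute_step(wp)

def reactive(steps=10):
    # Perception is re-read at every control step, so the motion
    # generator can absorb environment dynamics as they happen.
    for _ in range(steps):
        target = perceive()
        execute_step(plan(target)[0])
```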
Let's Push Things Forward: A Survey on Robot Pushing
As robots make their way out of factories into human environments, outer
space, and beyond, they require the skill to manipulate their environment in
multifarious, unforeseeable circumstances. In this regard, pushing is an
essential motion primitive that dramatically extends a robot's manipulation
repertoire. In this work, we review the robotic pushing literature. While
focusing on work concerned with predicting the motion of pushed objects, we
also cover relevant applications of pushing for planning and control. Beginning
with analytical approaches, under which we also subsume physics engines, we
then proceed to discuss work on learning models from data. In doing so, we
dedicate a separate section to deep learning approaches which have seen a
recent upsurge in the literature. Concluding remarks and further research
perspectives are given at the end of the paper.
Human Visual Understanding for Cognition and Manipulation -- A primer for the roboticist
Robotic research is often built on approaches that are motivated by insights
from self-examination of how we interface with the world. However, given
current theories about human cognition and sensory processing, it is reasonable
to assume that the internal workings of the brain are separate from how we
interface with the world and ourselves. To amend some of these misconceptions
arising from self-examination, this article reviews human visual understanding
for cognition and action, specifically manipulation. Our focus is on
identifying overarching principles such as the separation into visual
processing for action and cognition, hierarchical processing of visual input,
and the contextual and anticipatory nature of visual processing for action. We
also provide a rudimentary exposition of previous theories about visual
understanding that shows how self-examination can lead down the wrong path. Our
hope is that the article will provide insights for the robotic researcher that
can help them navigate the path of self-examination, give them an overview of
current theories about human visual processing, as well as provide a source for
further relevant reading.
Comment: 17 pages, 8 figures
Human-Robot Collaboration: From Psychology to Social Robotics
With the advances in robotic technology, research in human-robot
collaboration (HRC) has gained in importance. For robots to interact with
humans autonomously, they need active decision making that takes human partners
into account. However, state-of-the-art research in HRC often assumes a
leader-follower division, in which one agent leads the interaction. We believe
that this is caused by the lack of a reliable representation of the human and
the environment to allow autonomous decision making. This problem can be
overcome by an embodied approach to HRC which is inspired by psychological
studies of human-human interaction (HHI). In this survey, we review
neuroscientific and psychological findings of the sensorimotor patterns that
govern HHI and view them in a robotics context. Additionally, we study the
advances made by the robotics community toward embodied HRC. We
focus on the mechanisms that are required for active, physical human-robot
collaboration. Finally, we discuss the similarities and differences in the two
fields of study which pinpoint directions of future research
Relaxed-Rigidity Constraints: Kinematic Trajectory Optimization and Collision Avoidance for In-Grasp Manipulation
This paper proposes a novel approach to performing in-grasp manipulation: the
problem of moving an object with reference to the palm from an initial pose to
a goal pose without breaking or making contacts. Our method to perform in-grasp
manipulation uses kinematic trajectory optimization which requires no knowledge
of dynamic properties of the object. We implement our approach on an Allegro
robot hand and perform thorough experiments on 10 objects from the YCB dataset.
However, the proposed method is general enough to generate motions for most
objects the robot can grasp. Experimental results support the feasibility of its
application across a variety of object shapes. We explore the adaptability of
our approach to additional task requirements by including collision avoidance
and joint space smoothness costs. The grasped object avoids collisions with the
environment by the use of a signed distance cost function. We reduce the
effects of unmodeled object dynamics by requiring smooth joint trajectories. We
additionally compensate for errors encountered during trajectory execution by
formulating an object pose feedback controller.
Comment: Accepted draft to Autonomous Robots
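The abstract names three cost ingredients: goal error, a signed-distance collision penalty, and joint-space smoothness. Below is a minimal sketch, not the paper's formulation, of a kinematic trajectory optimization combining these terms; the toy signed-distance field, goal configuration, and cost weights are all illustrative assumptions.

```python
# Illustrative sketch of kinematic trajectory optimization with a
# signed-distance collision cost and a smoothness cost. Toy quantities.
import numpy as np
from scipy.optimize import minimize

T, DOF = 8, 4                              # waypoints and joints (toy sizes)
q_goal = np.full(DOF, 0.5)                 # hypothetical goal configuration
obstacle_center, clearance = np.full(DOF, 0.25), 0.1

def signed_distance(q):
    # Toy SDF: distance from a spherical "obstacle" region.
    return np.linalg.norm(q - obstacle_center) - clearance

def cost(x):
    traj = x.reshape(T, DOF)
    goal_term = np.sum((traj[-1] - q_goal) ** 2)
    # Hinge penalty: only configurations inside the clearance margin pay.
    collision_term = sum(max(0.0, -signed_distance(q)) ** 2 for q in traj)
    # Penalizing joint-velocity differences yields smooth trajectories.
    smooth_term = np.sum(np.diff(traj, axis=0) ** 2)
    return goal_term + 100.0 * collision_term + 0.1 * smooth_term

x0 = np.zeros(T * DOF)
result = minimize(cost, x0, method="L-BFGS-B")
print("optimized final configuration:", result.x.reshape(T, DOF)[-1])
```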
Leveraging Contact Forces for Learning to Grasp
Grasping objects under uncertainty remains an open problem in robotics
research. This uncertainty is often due to noisy or partial observations of the
object pose or shape. To enable a robot to react appropriately to unforeseen
effects, it is crucial that it continuously takes sensor feedback into account.
While visual feedback is important for inferring a grasp pose and reaching for
an object, contact feedback offers valuable information during manipulation and
grasp acquisition. In this paper, we use model-free deep reinforcement learning
to synthesize control policies that exploit contact sensing to generate robust
grasping under uncertainty. We demonstrate our approach on a multi-fingered
hand that exhibits more complex finger coordination than the commonly used
two-fingered grippers. We conduct extensive experiments in order to assess the
performance of the learned policies, with and without contact sensing. While it
is possible to learn grasping policies without contact sensing, our results
suggest that contact feedback allows for a significant improvement of grasping
robustness under object pose uncertainty and for objects with a complex shape.
Comment: 7 pages, 5 figures, Submitted to ICRA'1
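The interface the abstract implies can be sketched as follows: the policy's observation augments proprioception with per-fingertip contact signals, so a learned controller can react to touch. The network sizes, contact encoding, and toy policy below are illustrative assumptions, not the paper's design.

```python
# Illustrative sketch: contact-augmented policy observation. Toy network.
import numpy as np

rng = np.random.default_rng(0)
N_JOINTS, N_FINGERTIPS, HIDDEN = 16, 4, 64

def observation(joint_pos, joint_vel, contact_forces):
    # Concatenate proprioception with contact sensing; dropping the last
    # term recovers a "without contact sensing" baseline policy input.
    return np.concatenate([joint_pos, joint_vel, contact_forces])

# A toy two-layer policy mapping observations to joint targets.
W1 = rng.normal(size=(HIDDEN, 2 * N_JOINTS + N_FINGERTIPS)) * 0.1
W2 = rng.normal(size=(N_JOINTS, HIDDEN)) * 0.1

def policy(obs):
    return W2 @ np.tanh(W1 @ obs)

obs = observation(rng.normal(size=N_JOINTS),
                  rng.normal(size=N_JOINTS),
                  rng.uniform(size=N_FINGERTIPS))  # fingertip contact forces
print("action:", policy(obs)[:4])
```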
A constrained control-planning strategy for redundant manipulators
This paper presents an interconnected control-planning strategy for redundant
manipulators, subject to system and environmental constraints. The method
incorporates low-level control characteristics and high-level planning
components into a robust strategy for manipulators acting in complex
environments, subject to joint limits. This strategy is formulated using an
adaptive control rule, the estimated dynamic model of the robotic system and
the nullspace of the linearized constraints. A path is generated that takes
into account the capabilities of the platform. The proposed method is
computationally efficient, enabling its implementation on a real multi-body
robotic system. Through experimental results with a 7-DOF manipulator, we
demonstrate the performance of the method in real-world scenarios.
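The nullspace mechanism mentioned in the abstract can be illustrated with a short sketch: after linearizing the constraints into a Jacobian J, secondary motions are projected into null(J) so they cannot violate the linearized constraints. The random J and the secondary velocity below are placeholders for a real 7-DOF model, not the paper's controller.

```python
# Illustrative sketch: nullspace projection for a redundant manipulator.
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(3, 7))          # linearized constraints (3) x joints (7)
dq_task = np.linalg.pinv(J) @ np.array([0.1, 0.0, -0.05])   # primary task

# Nullspace projector: any velocity v satisfies J @ (N @ v) = 0.
N = np.eye(7) - np.linalg.pinv(J) @ J
dq_secondary = rng.normal(size=7)    # e.g. a joint-limit-avoidance gradient
dq = dq_task + N @ dq_secondary

print("constraint violation of projected secondary motion:",
      np.linalg.norm(J @ (N @ dq_secondary)))   # ~0 up to numerics
```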
Sim-to-Real Transfer of Accurate Grasping with Eye-In-Hand Observations and Continuous Control
In the context of deep learning for robotics, we show an effective method for
training a real robot to grasp a tiny sphere (1.37 cm in diameter), with an
original combination of system design choices. We decompose the end-to-end
system into a vision module and a closed-loop controller module. The two
modules use target object segmentation as their common interface. The vision
module extracts information from the robot end-effector camera, in the form of
a binary segmentation mask of the target. We train it to achieve effective
domain transfer by composing real background images with simulated images of
the target. The controller module takes as input the binary segmentation mask,
and thus is agnostic to visual discrepancies between simulated and real
environments. We train our closed-loop controller in simulation using imitation
learning and show it is robust with respect to discrepancies between the
dynamic model of the simulated and real robot: when combined with eye-in-hand
observations, we achieve a 90% success rate in grasping a tiny sphere with a
real robot. The controller can generalize to unseen scenarios where the target
is moving and even learns to recover from failures.
Comment: Neural Information Processing Systems (NIPS) 2017 Workshop on Acting
and Interacting in the Real World: Challenges in Robot Learning
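The domain-transfer recipe the abstract describes, compositing simulated renders of the target onto real background images, can be sketched in a few lines: the simulator's ground-truth mask doubles as the segmentation label. The image shapes and the circular "rendered target" below are toy placeholders, not the paper's pipeline.

```python
# Illustrative sketch: composite sim target onto real background.
import numpy as np

H, W = 64, 64
real_background = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)

# Stand-in for a simulated render: a bright disc plus its exact mask.
yy, xx = np.mgrid[:H, :W]
mask = ((yy - 32) ** 2 + (xx - 40) ** 2) < 36
sim_target = np.zeros_like(real_background)
sim_target[mask] = (255, 40, 40)

# Composite: target pixels come from simulation, the rest from reality.
train_image = np.where(mask[..., None], sim_target, real_background)
train_label = mask.astype(np.uint8)      # binary segmentation target
print(train_image.shape, train_label.sum(), "target pixels")
```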
Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks
Contact-rich manipulation tasks in unstructured environments often require
both haptic and visual feedback. However, it is non-trivial to manually design
a robot controller that combines modalities with very different
characteristics. While deep reinforcement learning has shown success in
learning control policies for high-dimensional inputs, these algorithms are
generally intractable to deploy on real robots due to sample complexity. We use
self-supervision to learn a compact and multimodal representation of our
sensory inputs, which can then be used to improve the sample efficiency of our
policy learning. We evaluate our method on a peg insertion task, generalizing
over different geometry, configurations, and clearances, while being robust to
external perturbations. Results for simulated and real robot experiments are
presented.
Comment: ICRA 2019
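The fusion idea in the abstract, separate encoders compressing vision and haptics into one compact multimodal latent that a policy can consume, can be sketched as a forward pass. Encoder sizes and the sum-fusion choice below are arbitrary assumptions; the self-supervised objectives that would train such encoders are not implemented here.

```python
# Illustrative sketch: fusing vision and haptics into a compact latent.
import numpy as np

rng = np.random.default_rng(2)
VIS_DIM, HAPTIC_DIM, LATENT = 128, 32, 16

W_vis = rng.normal(size=(LATENT, VIS_DIM)) * 0.05
W_hap = rng.normal(size=(LATENT, HAPTIC_DIM)) * 0.05

def encode(vision_feat, haptic_feat):
    # Each modality gets its own encoder; here the fused latent is their
    # sum (concatenation plus a linear layer would work equally well).
    return np.tanh(W_vis @ vision_feat) + np.tanh(W_hap @ haptic_feat)

z = encode(rng.normal(size=VIS_DIM), rng.normal(size=HAPTIC_DIM))
print("compact multimodal representation:", z.shape)
```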
Revisiting Active Perception
Despite the recent successes in robotics, artificial intelligence and
computer vision, a complete artificial agent necessarily must include active
perception. A multitude of ideas and methods for how to accomplish this have
already appeared in the past, their broader utility perhaps impeded by
insufficient computational power or costly hardware. The history of these
ideas, perhaps selective due to our perspectives, is presented with the goal of
organizing the past literature and highlighting the seminal contributions. We
argue that those contributions are as relevant today as they were decades ago
and, with the state of modern computational tools, are poised to find new life
in the robotic perception systems of the next decade.