2,073 research outputs found
Dynamics, control and sensor issues pertinent to robotic hands for the EVA retriever system
Basic dynamics, sensor, control, and related artificial intelligence issues pertinent to smart robotic hands for the Extra Vehicular Activity (EVA) Retriever system are summarized and discussed. These smart hands are to be used as end effectors on arms attached to manned maneuvering units (MMU). The Retriever robotic system, comprising an MMU, arm, and smart hands, is being developed to aid crewmen in the performance of routine EVA tasks, including tool and object retrieval. The ultimate goal is to enhance the effectiveness of EVA crewmen.
Overcoming Exploration in Reinforcement Learning with Demonstrations
Exploration in environments with sparse rewards has been a persistent problem
in reinforcement learning (RL). Many tasks are natural to specify with a sparse
reward, and manually shaping a reward function can result in suboptimal
performance. However, finding a non-zero reward is exponentially more difficult
with increasing task horizon or action dimensionality. This puts many
real-world tasks out of practical reach of RL methods. In this work, we use
demonstrations to overcome the exploration problem and successfully learn to
perform long-horizon, multi-step robotics tasks with continuous control such as
stacking blocks with a robot arm. Our method, which builds on top of Deep
Deterministic Policy Gradients and Hindsight Experience Replay, provides an
order-of-magnitude speedup over RL on simulated robotics tasks. It is simple
to implement and makes only the additional assumption that we can collect a
small set of demonstrations. Furthermore, our method is able to solve tasks not
solvable by either RL or behavior cloning alone, and often ends up
outperforming the demonstrator policy.

Comment: 8 pages, ICRA 201
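One core ingredient of demonstration-augmented off-policy RL like the method above is mixing a fixed set of demonstration transitions into every training batch sampled from the replay buffer. The sketch below illustrates only that mixing idea (it omits DDPG, HER, and the auxiliary losses of the actual method); the class name, `demo_fraction` parameter, and transition format are illustrative assumptions, not the paper's API.

```python
import random

class MixedReplayBuffer:
    """Replay buffer that mixes agent experience with a fixed set of
    demonstration transitions (a toy sketch of demo-augmented off-policy RL;
    all names and defaults here are hypothetical)."""

    def __init__(self, demos, demo_fraction=0.1, capacity=100_000):
        self.demos = list(demos)            # (s, a, r, s', done); never evicted
        self.demo_fraction = demo_fraction  # share of each batch drawn from demos
        self.capacity = capacity
        self.agent = []                     # FIFO buffer of agent transitions

    def add(self, transition):
        """Store one agent transition, evicting the oldest when full."""
        self.agent.append(transition)
        if len(self.agent) > self.capacity:
            self.agent.pop(0)

    def sample(self, batch_size):
        """Return a batch containing a fixed fraction of demo transitions."""
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demos))
        n_agent = batch_size - n_demo
        batch = random.sample(self.demos, n_demo)
        batch += random.sample(self.agent, min(n_agent, len(self.agent)))
        return batch
```

Keeping the demonstrations permanently in the buffer ensures the sparse-reward signal they carry is never evicted, even early in training when the agent's own experience contains almost no rewards.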
The Sum of Its Parts: Visual Part Segmentation for Inertial Parameter Identification of Manipulated Objects
To operate safely and efficiently alongside human workers, collaborative
robots (cobots) require the ability to quickly understand the dynamics of
manipulated objects. However, traditional methods for estimating the full set
of inertial parameters rely on motions that are necessarily fast and unsafe (to
achieve a sufficient signal-to-noise ratio). In this work, we take an
alternative approach: by combining visual and force-torque measurements, we
develop an inertial parameter identification algorithm that requires slow or
'stop-and-go' motions only, and hence is ideally tailored for use around
humans. Our technique, called Homogeneous Part Segmentation (HPS), leverages
the observation that man-made objects are often composed of distinct,
homogeneous parts. We combine a surface-based point clustering method with a
volumetric shape segmentation algorithm to quickly produce a part-level
segmentation of a manipulated object; the segmented representation is then used
by HPS to accurately estimate the object's inertial parameters. To benchmark
our algorithm, we create and utilize a novel dataset consisting of realistic
meshes, segmented point clouds, and inertial parameters for 20 common workshop
tools. Finally, we demonstrate the real-world performance and accuracy of HPS
by performing an intricate 'hammer balancing act' autonomously and online with
a low-cost collaborative robotic arm. Our code and dataset are open source and
freely available.

Comment: Accepted to the IEEE International Conference on Robotics and Automation (ICRA'23), London, UK, May 29 - June 2, 2023
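The key advantage of 'stop-and-go' motions is that, with the object held statically in several poses, the wrist force-torque sensor measures only the gravity wrench, so mass and center of mass follow from linear least squares. The sketch below shows this standard static estimation step (it is not the paper's HPS algorithm, which additionally uses visual part segmentation); function names and the noiseless setup are assumptions for illustration.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_mass_and_com(forces, torques, g=9.81):
    """Estimate mass and center of mass from static ('stop-and-go') wrist
    force-torque readings taken in several poses.

    forces:  (N, 3) gravity forces in the sensor frame, f_i = m * R_i^T @ g_vec
    torques: (N, 3) torques about the sensor origin,    tau_i = c x f_i
    """
    # In a static pose the force magnitude is m*g regardless of orientation.
    mass = np.mean(np.linalg.norm(forces, axis=1)) / g
    # tau = c x f = -[f]_x c, a linear system in c; stack all poses and solve.
    A = np.vstack([-skew(f) for f in forces])
    b = np.concatenate(torques)
    com, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mass, com
```

Each single pose constrains the center of mass only up to the gravity axis (each `[f]_x` has rank 2), so at least two poses with non-parallel gravity directions are needed; slow re-orientations between static holds provide exactly that.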
Vision-based Robotic Grasping in Simulation using Deep Reinforcement Learning
This thesis will investigate different robotic manipulation and grasping approaches. It will present an overview of robotic simulation environments and offer an evaluation of PyBullet, CoppeliaSim, and Gazebo, comparing various features. The thesis further presents a background for current approaches to robotic manipulation and grasping by describing how robotic movement and grasping can be organized. State-of-the-art approaches for learning robotic grasping, using both supervised and reinforcement learning methods, are presented.
Two sets of experiments will be conducted in PyBullet, illustrating how deep reinforcement learning methods can be applied to train a 7-degrees-of-freedom robotic arm to grasp objects.
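Grasping tasks like those described above are typically wrapped in a gym-style environment with a reset/step interface and a sparse success reward. The toy sketch below shows only that interface shape in one dimension; a real setup would step a physics engine such as PyBullet instead of the analytic update used here, and all names and constants are illustrative assumptions.

```python
import random

class ToyGraspEnv:
    """Toy 1-D stand-in for a sparse-reward grasping task: the agent moves a
    gripper along one axis and gets reward 1 only when it ends up within a
    small tolerance of the object (illustrative only; not the thesis's
    actual PyBullet environment)."""

    GOAL_TOL = 0.05   # success tolerance, in the same units as positions
    MAX_STEP = 0.1    # per-step displacement limit, mimicking velocity limits

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        """Place the gripper at the origin and the object at a random spot."""
        self.gripper = 0.0
        self.object = self.rng.uniform(-1.0, 1.0)
        return self._obs()

    def step(self, action):
        """Apply a clipped displacement; return (obs, reward, done)."""
        self.gripper += max(-self.MAX_STEP, min(self.MAX_STEP, action))
        success = abs(self.gripper - self.object) < self.GOAL_TOL
        reward = 1.0 if success else 0.0   # sparse reward: no shaping signal
        return self._obs(), reward, success

    def _obs(self):
        return (self.gripper, self.object)
```

Because the reward is zero everywhere except at success, random exploration in such an environment rarely sees any learning signal, which is exactly the exploration problem the demonstration-based and hindsight-relabeling methods listed above target.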