A Structured Approach for In-Hand Manipulation
In-hand manipulations consist of dexterous motions that come easily to humans but still pose a challenge to robotic systems. Controlling finger motions in long, complicated sequences is difficult due to the high number of degrees of freedom and the intricate contact interactions involved. For such complex motions, in-hand manipulation has generally been broken into a hierarchy of low- and high-level control, where the high level sequences low-level controllers that perform motion primitives. These low-level motion primitives tend to be task dependent and do not always sample uniformly from the manipulation space, which narrows their scope and makes them difficult to adapt to other in-hand manipulation tasks. Another technique that has widely proven promising is reinforcement learning (RL)-based control. RL controllers are able to perform a complex set of motions; however, applying RL directly to complicated in-hand manipulations limits what can be generalized to other tasks. This thesis focuses on using a set of task-agnostic motion primitives, sampled uniformly and symmetrically from the manipulation space, to provide a structured approach for in-hand manipulation.
Specifically, we design two low-level controllers (an inverse kinematics controller and a reinforcement learning controller) that can perform primitive in-hand manipulations. We further assess the ability of the reinforcement learning controller to adapt to other, unseen primitive manipulations. Finally, we show that structuring complex manipulations by staging low-level controllers trained on a uniform, symmetric, task-agnostic set of motion primitives adapts better to more tasks than using RL for an end-to-end manipulation task.
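The hierarchy described in this abstract can be sketched as a high-level sequencer that stages task-agnostic low-level primitive controllers. The following is a minimal illustrative sketch, not the thesis's actual architecture; all names (Primitive, sequence_primitives, the toy primitives) are assumptions introduced for illustration.

```python
# Hypothetical sketch: a high-level controller sequences task-agnostic
# low-level primitive controllers, as in the hierarchical structure above.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Primitive:
    """A low-level controller for one primitive in-hand motion."""
    name: str
    run: Callable[[list], list]  # maps a hand state to a new hand state

def sequence_primitives(state: list, plan: List[Primitive]) -> list:
    """High-level control: execute primitives in the order given by a plan."""
    for primitive in plan:
        state = primitive.run(state)
    return state

# Toy primitives acting on a 1-D "finger position" state.
rotate = Primitive("rotate", lambda s: [x + 1 for x in s])
slide = Primitive("slide", lambda s: [x * 2 for x in s])

final = sequence_primitives([0, 1], [rotate, slide])
print(final)  # [2, 4]
```

Because the primitives share one interface, a new task only requires a new plan, which is the adaptability argument the thesis makes for task-agnostic primitives.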
Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration
We propose a technique for multi-task learning from demonstration that trains
the controller of a low-cost robotic arm to accomplish several complex picking
and placing tasks, as well as non-prehensile manipulation. The controller is a
recurrent neural network using raw images as input and generating robot arm
trajectories, with the parameters shared across the tasks. The controller also
combines VAE-GAN-based reconstruction with autoregressive multimodal action
prediction. Our results demonstrate that it is possible to learn complex
manipulation tasks, such as picking up a towel, wiping an object, and
depositing the towel to its previous position, entirely from raw images with
direct behavior cloning. We show that weight sharing and reconstruction-based
regularization substantially improve generalization and robustness, and
training on multiple tasks simultaneously increases the success rate on all
tasks.
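At its core, the direct behavior cloning described above fits a policy to demonstrated observation-action pairs by supervised regression. The sketch below shows that idea only, with a scalar linear policy in place of the paper's recurrent VAE-GAN model; the function names and data are illustrative assumptions.

```python
# Minimal behavior-cloning sketch (not the paper's VAE-GAN model):
# fit a linear policy a = w * o to demonstration (observation, action)
# pairs by gradient descent on squared prediction error.
def behavior_clone(demos, lr=0.1, epochs=100):
    """demos: list of (observation, action) floats from a demonstrator."""
    w = 0.0
    for _ in range(epochs):
        for obs, act in demos:
            pred = w * obs
            # gradient of 0.5 * (pred - act)^2 with respect to w
            w -= lr * (pred - act) * obs
    return w

# Demonstrations generated by an "expert" policy a = 2 * o.
demos = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]
w = behavior_clone(demos)
print(round(w, 3))  # 2.0
```

In the paper this regression target is a multimodal action distribution predicted from raw images, and the weights are shared across tasks, but the training signal is the same: match the demonstrator's actions.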
Towards Error Handling in a DSL for Robot Assembly Tasks
This work-in-progress paper presents our work with a domain specific language
(DSL) for tackling the issue of programming robots for small-sized batch
production. We observe that as the complexity of assembly increases so does the
likelihood of errors, and these errors need to be addressed. Nevertheless, it
is essential that programming and setting up the assembly remains fast, allows
quick changeovers, easy adjustments and reconfigurations. In this paper we
present an initial design and implementation of extending an existing DSL for
assembly operations with error specification, error handling and advanced move
commands incorporating error tolerance. The DSL is used as part of a framework
that aims at tackling uncertainties through a probabilistic approach.Comment: Presented at DSLRob 2014 (arXiv:cs/1411.7148