Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks
In order to robustly execute a task under environmental uncertainty, a robot
needs to be able to reactively adapt to changes arising in its environment.
Such changes are usually reflected as deviations from expected sensory traces.
These deviations can be used to drive motion adaptation, and for this purpose
a feedback model is required. The feedback model maps deviations in sensory
traces to adaptations of the motion plan. In this paper, we develop a general
data-driven framework for learning a feedback model from demonstrations. We
utilize a variant of a radial basis function network structure, with movement
phases as kernel centers, which can generally be applied to represent any
feedback model for movement primitives. To demonstrate the effectiveness of
our framework, we test it on the task of scraping on a tilt board. In this
task, we learn a reactive policy in the form of orientation adaptation, based
on deviations of tactile sensor traces. As a proof of concept of our method,
we provide evaluations on an anthropomorphic robot. A video demonstrating our
approach and its results can be seen at https://youtu.be/7Dx5imy1Kcw
Comment: 8 pages, accepted to be published at the International Conference on Robotics and Automation (ICRA) 201
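The core structure described in this abstract, a radial basis function network whose kernels are centered at movement phases, can be sketched as follows. This is a minimal illustration under assumed shapes and constants, not the authors' implementation; the class name, kernel count, and gain initialization are all illustrative.

```python
import numpy as np

class PhaseRBFFeedback:
    """Sketch of a phase-modulated RBF feedback model: Gaussian kernels
    centered at movement phases gate a learned linear map from
    sensory-trace deviations to a motion-plan adaptation."""

    def __init__(self, n_kernels, n_sensors, n_outputs, width=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(0.0, 1.0, n_kernels)  # phase in [0, 1]
        self.width = width
        # one weight matrix per kernel: (n_kernels, n_outputs, n_sensors)
        self.W = rng.normal(0.0, 0.01, (n_kernels, n_outputs, n_sensors))

    def kernels(self, phase):
        k = np.exp(-((phase - self.centers) ** 2) / (2.0 * self.width ** 2))
        return k / (k.sum() + 1e-12)  # normalized kernel activations

    def adapt(self, phase, sensor_deviation):
        """Map a deviation of the sensory trace to a motion-plan adaptation."""
        k = self.kernels(phase)                 # (n_kernels,)
        per_kernel = self.W @ sensor_deviation  # (n_kernels, n_outputs)
        return k @ per_kernel                   # (n_outputs,)

model = PhaseRBFFeedback(n_kernels=25, n_sensors=6, n_outputs=3)
# With zero sensory deviation, no adaptation is produced.
delta = model.adapt(phase=0.4, sensor_deviation=np.zeros(6))
```

Because the kernels depend only on the movement phase, the same structure can represent feedback for any movement primitive; learning reduces to fitting the per-kernel weight matrices to demonstration data.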
Exploitation of environmental constraints in human and robotic grasping
This publication is freely accessible with permission of the rights owner due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation).
We investigate the premise that robust grasping performance is enabled by exploiting constraints present in the environment. These constraints, leveraged through motion in contact, counteract uncertainty in state variables relevant to grasp success. Given this premise, grasping becomes a process of successive exploitation of environmental constraints, until a successful grasp has been established. We present support for this view found through the analysis of human grasp behavior and by showing robust robotic grasping based on constraint-exploiting grasp strategies. Furthermore, we show that it is possible to design robotic hands with inherent capabilities for the exploitation of environmental constraints.
A survey of robot manipulation in contact
In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more manipulation tasks that are still done by humans, and there is a growing number of publications on the topics of (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots perform tasks previously left to humans, such as massage, while in classical tasks, such as peg-in-hole, there is more efficient generalization to other similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current stage of robots performing such tasks: we first survey the different in-contact tasks robots can perform, then observe how these tasks are controlled and represented, and finally present the learning and planning of the skills required to complete them.
Sensing and Control for Robust Grasping with Simple Hardware
Robots can move, see, and navigate in the real world outside carefully structured factories, but they cannot yet grasp and manipulate objects without human intervention. Two key barriers are the complexity of current approaches, which require complicated hardware or precise perception to function effectively, and the challenge of understanding system performance in a tractable manner, given the wide range of factors that impact successful grasping. This thesis presents sensors and simple control algorithms that relax the requirements on robot hardware, and a framework for understanding the capabilities and limitations of grasping systems.
Control strategies for cleaning robots in domestic applications: A comprehensive review
Service robots are built and developed for various applications to support humans as companions, caretakers, or domestic support. As the number of elderly people grows, service robots will be in increasing demand. In particular, one of the main tasks performed by elderly people, and others, is the complex task of cleaning. Therefore, cleaning tasks such as sweeping floors, washing dishes, and wiping windows have been developed for the domestic environment using service robots or robot manipulators with several control approaches. This article is primarily focused on the control methodology used for cleaning tasks; specifically, it discusses classical control and learning-based control methods. The classical control approaches, which consist of position control, force control, and impedance control, are commonly used for cleaning purposes in highly controlled environments. However, classical control methods do not generalize to cluttered environments, so learning-based control methods can be an alternative solution. Learning-based control methods for cleaning tasks encompass three approaches: learning from demonstration (LfD), supervised learning (SL), and reinforcement learning (RL). Each of these approaches can generalize cleaning tasks to new environments. For example, LfD, which many research groups have used for cleaning tasks, can generate complex cleaning trajectories from human demonstrations; SL can support the prediction of dirt areas and cleaning motions using large data sets; and RL lets the robot itself learn cleaning actions by interacting with a new environment. In this context, this article aims to provide a general overview of robotic cleaning tasks based on different types of control methods using manipulators. It also suggests future directions for cleaning tasks based on an evaluation of these control approaches.
Dexterous manipulation of unknown objects using virtual contact points
The manipulation of unknown objects is a problem of special interest in robotics, since it is not always possible to have exact models of the objects with which the robot interacts. This paper presents a simple strategy to manipulate unknown objects using a robotic hand equipped with tactile sensors. The hand configurations that allow the rotation of an unknown object are computed using only tactile and kinematic information, obtained during the manipulation process, by reasoning about the desired and real positions of the fingertips during the manipulation. This takes into account that the desired fingertip positions are not physically reachable, since they are located in the interior of the manipulated object; they are therefore virtual positions with associated virtual contact points. The proposed approach was satisfactorily validated using three fingers of an anthropomorphic robotic hand (Allegro Hand), with the original fingertips replaced by tactile sensors (WTS-FT). In the experimental validation, several everyday objects with different shapes were successfully manipulated, rotating them without needing to know their shape or any other physical property.
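The virtual-contact-point idea in this abstract can be illustrated with a short sketch: the commanded fingertip target is placed a small distance inside the object along the inward contact normal, so the finger keeps pressing on the surface while the hand rotates the object. The function names, the penetration depth, and the gain below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def virtual_target(contact_pos, inward_normal, depth=0.005):
    """Place the desired (virtual) fingertip position `depth` metres
    inside the object, along the inward contact normal."""
    n = np.asarray(inward_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.asarray(contact_pos, dtype=float) + depth * n

def fingertip_correction(real_pos, contact_pos, inward_normal, gain=0.5):
    """Cartesian correction driving the real fingertip toward the
    unreachable virtual position, which maintains contact pressure."""
    target = virtual_target(contact_pos, inward_normal)
    return gain * (target - np.asarray(real_pos, dtype=float))

# A fingertip touching the object at the origin, normal pointing +z:
corr = fingertip_correction([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

Since the virtual target lies inside the object, the correction never vanishes while contact is maintained, which is what keeps the grasp pressing during the rotation.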
In-Hand Object Stabilization by Independent Finger Control
Grip control during robotic in-hand manipulation is usually modeled as part
of a monolithic task, relying on complex controllers specialized for specific
situations. Such approaches do not generalize well and are difficult to apply
to novel manipulation tasks. Here, we propose a modular object stabilization
method based on a proposition that explains how humans achieve grasp stability.
In this bio-mimetic approach, independent tactile grip stabilization
controllers ensure that slip does not occur locally at the engaged robot
fingers. Such local slip is predicted from the tactile signals of each
fingertip sensor, i.e., BioTac and BioTac SP by Syntouch. We show that stable
grasps emerge without any form of central communication when such independent
controllers are engaged in the control of multi-digit robotic hands. These
grasps are resistant to external perturbations while being capable of
stabilizing a large variety of objects.
Comment: Submitted to IEEE Transactions on Robotics Journal. arXiv admin note: text overlap with arXiv:1612.0820
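The modular idea in this abstract, independent per-finger grip controllers with no central communication, can be sketched in a few lines. Here each finger raises its normal force when a predicted local slip probability crosses a threshold and otherwise slowly relaxes; all constants, names, and the threshold rule are illustrative assumptions, not the paper's controller.

```python
def grip_step(force, slip_prob, gain=2.0, relax=0.99, f_min=0.2, f_max=15.0):
    """One control tick for a single finger (all constants illustrative).
    Tighten in proportion to predicted local slip; otherwise relax
    toward a minimal grip force. No information from other fingers."""
    if slip_prob > 0.5:            # slip predicted from tactile features
        force += gain * slip_prob  # tighten proportionally
    else:
        force *= relax             # slowly relax the grip
    return min(max(force, f_min), f_max)

# Fingers update independently; grasp stability would emerge without
# any central coordination between the per-finger loops.
forces = [1.0, 1.0, 1.0]
slip = [0.9, 0.1, 0.6]
forces = [grip_step(f, s) for f, s in zip(forces, slip)]
```

The relax term gives the emergent behavior described in the abstract a useful property: fingers that stop detecting slip gradually shed excess force, so the hand settles at roughly the minimal stable grip.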