Learning Task Constraints from Demonstration for Hybrid Force/Position Control
We present a novel method for learning hybrid force/position control from
demonstration. We learn a dynamic constraint frame aligned to the direction of
desired force using Cartesian Dynamic Movement Primitives. In contrast to
approaches that utilize a fixed constraint frame, our approach easily
accommodates tasks whose constraints change rapidly over time. We
activate only one degree of freedom for force control at any given time,
ensuring motion is always possible orthogonal to the direction of desired
force. Since we utilize demonstrated forces to learn the constraint frame, we
are able to compensate for forces not detected by methods that learn only from
the demonstrated kinematic motion, such as frictional forces between the
end-effector and the contact surface. We additionally propose novel extensions
to the Dynamic Movement Primitive (DMP) framework that encourage robust
transition from free-space motion to in-contact motion in spite of environment
uncertainty. We incorporate force feedback and a dynamically shifting goal to
reduce forces applied to the environment and retain stable contact while
enabling force control. Our methods exhibit low impact forces on contact and
low steady-state tracking error.
Comment: Under review
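The abstract above describes a DMP transformation system augmented with force feedback and a dynamically shifting goal. The sketch below illustrates that general idea for a single degree of freedom; the gains, the goal-shift rule, and the function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.01, alpha=25.0, beta=6.25,
                k_force=0.002, f_sensed=None):
    """Integrate a critically damped 1-D DMP transformation system,
    y'' = alpha * (beta * (g_t - y) - y'),
    where the goal g_t is shifted in proportion to sensed contact force
    so that large forces pull the attractor away from the surface.
    This is a hypothetical sketch, not the paper's exact controller."""
    n = int(tau / dt)
    y, yd = y0, 0.0
    traj = []
    for i in range(n):
        force = 0.0 if f_sensed is None else f_sensed[i]
        g_t = g - k_force * force      # shift the goal to relieve contact force
        ydd = alpha * (beta * (g_t - y) - yd) / tau**2
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout(y0=0.0, g=1.0)       # free-space motion: converges to g
```

With `f_sensed=None` the system behaves like a standard discrete DMP and converges to the goal; feeding in a measured force trace shifts the attractor to reduce the force applied to the environment, which is the qualitative effect the abstract describes.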
Robot Learning from Demonstration in Robotic Assembly: A Survey
Learning from demonstration (LfD) has been used to help robots carry out manipulation tasks autonomously, in particular by learning manipulation behaviors from observing the motion executed by human demonstrators. This paper reviews recent research and development in the field of LfD. The main focus is on how example behaviors are demonstrated to the robot in assembly operations, and how manipulation features are extracted for robot learning and the generation of imitative behaviors. Diverse metrics for evaluating the performance of robot imitation learning are analyzed. The application of LfD in robotic assembly is a focal point of this paper.
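The survey mentions diverse metrics for evaluating imitation quality. Two simple, commonly used ones are per-point trajectory deviation and final goal error; the snippet below sketches both on synthetic data (the trajectories and function names are illustrative, not drawn from the survey).

```python
import numpy as np

def rmse(demo, repro):
    """Root-mean-square Cartesian deviation between time-aligned trajectories."""
    return float(np.sqrt(np.mean(np.sum((demo - repro) ** 2, axis=1))))

def goal_error(demo, repro):
    """Distance between the demonstrated and reproduced end points."""
    return float(np.linalg.norm(demo[-1] - repro[-1]))

t = np.linspace(0.0, 1.0, 50)[:, None]
demo = np.hstack([t, t ** 2])       # synthetic demonstrated 2-D path
repro = demo + 0.01                 # imitation with a small constant offset

err = rmse(demo, repro)
end_err = goal_error(demo, repro)
```

Both metrics assume the two trajectories are already time-aligned; when they are not, alignment methods such as dynamic time warping are typically applied first.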
Robot Learning From Human Observation Using Deep Neural Networks
Industrial robots have gained traction over the last twenty years and have become an integral component of automation across sectors. The automotive industry in particular deploys a wide range of industrial robots in assembly lines worldwide. These robots perform tasks with the utmost repeatability and incomparable speed, and it is that speed and consistency that has always made the robotic task an upgrade over the same task completed by a human. The cost savings provide a strong return on investment, prompting corporations to automate and deploy robotic solutions wherever feasible.
The cost to commission and set up a robotic cell is the largest deterrent in any decision regarding robotics and automation. Currently, robots are programmed by robotic technicians through a manual process in a well-structured environment. This thesis explores the option of eliminating the programming and commissioning portion of robotic integration. If the environment is dynamic, with varying part iterations, changes in lighting, and shifting part placement in the cell, then the robot will struggle to function because it cannot adapt to these variables.
If a couple of cameras can be introduced to capture the operator's motions and part variability, then Learning from Demonstration (LfD) can potentially solve this prevalent issue in today's automotive industry. With assistance from machine learning algorithms, deep neural networks, and transfer learning, LfD can become a viable solution. A robotic cell that learns from demonstration was developed for this work. The proposed approach is based on computer vision to observe human actions and deep learning to perceive the demonstrator's actions and the manipulated objects.
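The perception step described above maps an observed frame to a recognized operator action. In the thesis this would be a deep CNN with transfer learning; the toy sketch below stands in with a nearest-centroid classifier over synthetic feature vectors, purely to illustrate the shape of that pipeline (all names and data here are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

ACTIONS = ["pick", "place", "insert"]
# Pretend a feature extractor maps frames of each action to vectors
# clustered around a distinct centroid (synthetic stand-in for CNN features).
centroids = {a: rng.normal(loc=3.0 * i, scale=0.1, size=16)
             for i, a in enumerate(ACTIONS)}

def classify(feature_vec):
    """Return the action whose feature centroid is nearest (toy classifier)."""
    return min(ACTIONS,
               key=lambda a: np.linalg.norm(feature_vec - centroids[a]))

sample = centroids["place"] + rng.normal(scale=0.05, size=16)
predicted = classify(sample)
```

In a real system the centroid lookup would be replaced by the final layers of a pretrained network fine-tuned on demonstration footage; the surrounding control flow (extract features, classify action, trigger the corresponding robot skill) stays the same.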