Interactive Imitation Learning of Bimanual Movement Primitives
Performing bimanual tasks with dual-arm robotic setups can drastically increase
the impact on industrial and daily-life applications. However, performing a
bimanual task brings many challenges, such as the synchronization and
coordination of the single-arm policies. This article proposes the Safe,
Interactive Movement Primitives Learning (SIMPLe) algorithm to teach and correct
single- or dual-arm impedance policies directly from human kinesthetic
demonstrations. Moreover, it
proposes a novel graph encoding of the policy based on Gaussian Process
Regression (GPR) where the single-arm motion is guaranteed to converge close to
the trajectory and then towards the demonstrated goal. Regulation of the robot
stiffness according to the epistemic uncertainty of the policy allows for
easily reshaping the motion with human feedback and/or adapting to external
perturbations. We tested the SIMPLe algorithm on a real dual-arm setup where
the teacher gave separate single-arm demonstrations and then successfully
synchronized them using only kinesthetic feedback, and where the original
bimanual demonstration was locally reshaped to pick a box at a different
height.
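The abstract's core control idea — stiffening the robot where the learned policy is confident and softening it where the GPR is epistemically uncertain — can be illustrated with a minimal sketch. This is not the authors' implementation; it uses scikit-learn's `GaussianProcessRegressor` on a toy 1-D demonstration, and the stiffness bounds `k_min`/`k_max` and the mapping from predictive standard deviation to stiffness are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D "demonstration": phase variable t -> position along one axis.
t = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
x = np.sin(2 * np.pi * t).ravel()

# Fit a GPR to the demonstrated trajectory (hyperparameters are illustrative).
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)
gpr.fit(t, x)

def stiffness(query_t, k_max=400.0, k_min=50.0):
    """Map GPR predictive std to an impedance stiffness gain:
    low uncertainty -> stiff tracking; high uncertainty -> compliant,
    so a human can easily reshape the motion by pushing the arm."""
    _, std = gpr.predict(np.atleast_2d(query_t), return_std=True)
    # With a unit-amplitude RBF prior, std is ~0 near data and ~1 far away.
    w = np.clip(1.0 - std[0], 0.0, 1.0)
    return k_min + (k_max - k_min) * w

# Near the demonstration the controller is stiff; far outside it is compliant.
k_in = stiffness(0.5)    # inside the demonstrated region
k_out = stiffness(3.0)   # far outside it
```

Here `k_in` comes out close to `k_max` and `k_out` close to `k_min`, which is exactly the behavior the abstract describes: the robot yields to human feedback or external perturbations wherever the policy lacks data.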
Intuitive Instruction of Industrial Robots: A Knowledge-Based Approach
With more advanced manufacturing technologies, small and medium-sized enterprises can compete with low-wage labor by providing customized and high-quality products. For small production series, robotic systems can provide a cost-effective solution. However, for robots to perform on par with human workers in the manufacturing industries, they must become flexible and autonomous in their task execution, and swift and easy to instruct. This will enable small businesses with short production series or highly customized products to use robot coworkers without consulting expert robot programmers. The objective of this thesis is to explore programming solutions that can reduce the programming effort of sensor-controlled robot tasks. The robot motions are expressed using constraints, and multiple simple constrained motions can be combined into a robot skill. The skill can be stored in a knowledge base together with a semantic description, which enables reuse and reasoning. The main contributions of the thesis are 1) the development of ontologies for knowledge about robot devices and skills, 2) a user interface that provides simple programming of dual-arm skills for non-experts and experts, 3) a programming interface for task descriptions in unstructured natural language in a user-specified vocabulary, and 4) an implementation where low-level code is generated from the high-level descriptions. The resulting system greatly reduces the number of parameters exposed to the user, is simple to use for non-experts, and reduces the programming time for experts by 80%. The representation is described on a semantic level, which means that the same skill can be used on different robot platforms. The research is presented in seven papers, the first describing the knowledge representation and the second the knowledge-based architecture that enables skill sharing between robots.
The third paper presents the translation from high-level instructions to low-level code for force-controlled motions. The following two papers evaluate the simplified programming prototype for non-expert and expert users. The last two present how program statements are extracted from unstructured natural language descriptions.
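The thesis's central representation — simple constrained motions composed into a skill, stored in a knowledge base under a semantic description so it can be retrieved and reused across robot platforms — can be sketched as a small data model. This is a hypothetical illustration, not the thesis's actual ontology or code; all class and field names (`ConstrainedMotion`, `Skill`, `knowledge_base`) are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class ConstrainedMotion:
    """One motion expressed as a constraint, e.g. a guarded force move.
    The constraint is free text here; the thesis uses a formal representation."""
    name: str
    constraint: str
    parameters: dict = field(default_factory=dict)

@dataclass
class Skill:
    """A reusable skill: an ordered sequence of constrained motions plus a
    semantic description that enables lookup, reuse, and reasoning."""
    description: str
    motions: list

# A toy knowledge base keyed by the semantic description.
knowledge_base = {}

def store_skill(skill):
    knowledge_base[skill.description] = skill

# Compose two simple constrained motions into one skill and store it.
snap_fit = Skill(
    description="snap-fit assembly of two plastic parts",
    motions=[
        ConstrainedMotion("approach", "velocity <= 50 mm/s"),
        ConstrainedMotion("press", "force_z <= 20 N"),
    ],
)
store_skill(snap_fit)

# Reuse: another robot platform retrieves the same skill by its description;
# platform-specific low-level code would then be generated from it.
retrieved = knowledge_base["snap-fit assembly of two plastic parts"]
```

The key design point mirrored here is that the skill is addressed by what it does, not by robot-specific parameters, which is what lets the same skill description drive code generation on different platforms.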