Iterative learning of human partner's desired trajectory for proactive human-robot collaboration
A period-varying iterative learning control scheme is proposed for a robotic manipulator to learn a target trajectory that is planned by a human partner but unknown to the robot, a typical scenario in many applications. The proposed method updates the robot's reference trajectory iteratively to minimize the interaction force applied by the human. Although a repetitive human–robot collaboration task is considered, the task period is subject to uncertainty introduced by the human, and a novel learning mechanism is proposed to achieve the control objective despite this variation. Theoretical analysis establishes the performance of the learning algorithm and the robot controller, and simulations and experiments on a robotic arm demonstrate the effectiveness of the proposed method in human–robot collaboration.
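The paper's exact learning law is not reproduced here, but the core idea, updating the reference in proportion to the measured interaction force until the human no longer needs to push, can be sketched in a few lines of Python. Everything below is hypothetical: the gain, the spring-like model of the human's corrective force, and the trajectories; the period-varying mechanism that is the paper's main contribution is omitted.

```python
import numpy as np

# Minimal sketch of force-driven iterative learning (not the paper's
# exact scheme): the robot shifts its reference toward the human's
# unknown desired trajectory in proportion to the interaction force.

def ilc_update(reference, force, gain=0.5):
    """One iteration: move the reference in the direction of the force."""
    return reference + gain * force

# Simulated setup: the human's desired trajectory is unknown to the
# robot; the human pushes with a spring-like force toward it.
t = np.linspace(0.0, 2.0 * np.pi, 100)
desired = np.sin(t)            # human's target, hidden from the robot
reference = np.zeros_like(t)   # robot's initial reference
stiffness = 1.0                # hypothetical human-arm stiffness

for k in range(30):
    force = stiffness * (desired - reference)  # human's corrective push
    reference = ilc_update(reference, force)

# The residual force shrinks geometrically, i.e. the reference has
# converged to the human's desired trajectory.
print("max residual force:", np.max(np.abs(desired - reference)))
```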
MaestROB: A Robotics Framework for Integrated Orchestration of Low-Level Control and High-Level Reasoning
This paper describes MaestROB, a framework designed to make robots perform complex tasks with high precision from simple high-level instructions given in natural language or by demonstration. To realize this, it maintains a hierarchical structure, using knowledge stored as ontologies and rules to bridge between different levels of instruction. Accordingly, the framework comprises multiple layers of processing components: perception and actuation control at the low level; a symbolic planner and Watson APIs for cognitive capabilities and semantic understanding; and, at its core, orchestration of these components by a new open-source robot middleware called Project Intu. We show how the framework can be used in a complex scenario in which multiple actors (a human, a communication robot, and an industrial robot) collaborate on a common industrial task. A human teaches an assembly task to Pepper (a humanoid robot from SoftBank Robotics) using natural-language conversation and demonstration. Our framework helps Pepper perceive the human demonstration and generate a sequence of actions for a UR5 (a collaborative robot arm from Universal Robots), which ultimately performs the assembly (e.g. insertion) task.
Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018. Video: https://www.youtube.com/watch?v=19JsdZi0TW
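The paper presents an architecture rather than code, but the layered idea, low-level perception and actuation, a symbolic planner in the middle, and an orchestrator on top, can be illustrated with a deliberately simplified Python sketch. All class and action names below are hypothetical; MaestROB's actual interfaces (Project Intu, the Watson APIs) are not shown.

```python
# Purely illustrative sketch of layered orchestration
# (hypothetical interfaces, not MaestROB's actual APIs).

class Perception:
    def observe(self):
        # Low level: return a symbolic description of the scene.
        return {"peg": "on_table", "hole": "in_fixture"}

class SymbolicPlanner:
    def plan(self, goal, scene):
        # Mid level: expand a high-level goal into ordered actions,
        # standing in for the ontology/rule-based bridging of
        # instruction levels described in the paper.
        if goal == "insert_peg" and scene["peg"] == "on_table":
            return ["pick(peg)", "move(peg, hole)", "insert(peg, hole)"]
        return []

class Actuation:
    def execute(self, action):
        # Low level: forward the action to the robot controller.
        print("executing:", action)

def orchestrate(goal):
    """Top level: wire perception, planning, and actuation together."""
    scene = Perception().observe()
    for action in SymbolicPlanner().plan(goal, scene):
        Actuation().execute(action)

orchestrate("insert_peg")
```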
Beating-time gestures imitation learning for humanoid robots
Beating-time gestures are movement patterns of the hand swaying along with music, indicating accented musical pulses. The spatiotemporal configuration of these patterns makes them difficult to analyse and model. In this paper we present an innovative modelling approach based on imitation learning, or Programming by Demonstration (PbD). Our approach, which combines Dirichlet Process Mixture Models, Hidden Markov Models, Dynamic Time Warping, and non-uniform cubic spline regression, is particularly innovative in that it handles spatial and temporal variability by generating a generalised trajectory from a set of periodically repeated movements. Although not within the scope of our study, our procedures may be applied to control the movement behaviour of robots and avatar animations in response to music.
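The full DPMM/HMM pipeline is beyond a short sketch, but one of its ingredients, aligning periodically repeated movements with Dynamic Time Warping and averaging them into a generalised trajectory, can be illustrated as follows. The demonstrations here are synthetic 1-D signals and all parameters are hypothetical.

```python
import numpy as np

def dtw_path(a, b):
    """Return the DTW alignment path between 1-D sequences a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j], cost[i, j - 1])
    # Backtrack the cheapest alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j],
                          cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Synthetic demonstrations: noisy, time-shifted repetitions of a beat.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 60)
demos = [np.sin(t + rng.uniform(-0.3, 0.3)) + 0.05 * rng.standard_normal(60)
         for _ in range(5)]

# Warp every demonstration onto the first one, then average.
reference = demos[0]
aligned = []
for demo in demos:
    warped = np.empty_like(reference)
    for i, j in dtw_path(reference, demo):
        warped[i] = demo[j]  # last match wins per reference index
    aligned.append(warped)

generalised = np.mean(aligned, axis=0)  # generalised beating trajectory
print("generalised trajectory, first 5 samples:", generalised[:5])
```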
Development of a methodology for the human-robot interaction based on vision systems for collaborative robotics
The abstract is in the attachment.
GoferBot: A Visual Guided Human-Robot Collaborative Assembly System
The current transformation towards smart manufacturing has led to a growing demand for human-robot collaboration (HRC) in the manufacturing process. Perceiving and understanding the human co-worker's behaviour introduces challenges for collaborative robots to perform tasks efficiently and effectively in unstructured and dynamic environments. Integrating recent data-driven machine vision capabilities into HRC systems is a logical next step in addressing these challenges, but off-the-shelf components struggle here due to their limited generalisation. Real-world evaluation is required to fully appreciate the maturity and robustness of these approaches, and understanding the pure-vision aspects, and their limitations, is a crucial first step before combining multiple modalities. In this paper, we propose GoferBot, a novel vision-based semantic HRC system for a real-world assembly task. It is composed of a visual servoing module that reaches and grasps assembly parts in an unstructured, multi-instance, and dynamic environment; an action recognition module that predicts human actions for implicit communication; and a visual handover module that uses this perceptual understanding of human behaviour to produce an intuitive and efficient collaborative assembly experience. GoferBot is a novel assembly system that seamlessly integrates all sub-modules by utilising implicit semantic information purely from visual perception.
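The paper evaluates the integrated system rather than publishing code, but the way three purely visual modules might share state in a collaboration loop can be sketched as below. All function names, the scene dictionary, and the decision logic are hypothetical, not GoferBot's implementation.

```python
# Illustrative sketch of three vision-driven modules cooperating
# through shared semantic state (hypothetical, not GoferBot's code).

def visual_servoing(scene):
    """Select the next free assembly part as a grasp target."""
    free = [p for p in scene["parts"] if p["state"] == "free"]
    return free[0] if free else None

def action_recognition(scene):
    """Predict the human's next action from observed motion (stubbed)."""
    return "reach_for_part" if scene["human_moving"] else "idle"

def visual_handover(predicted_action, part):
    """Decide whether to hand the grasped part to the human."""
    return predicted_action == "reach_for_part" and part is not None

scene = {
    "parts": [{"name": "bracket", "state": "free"},
              {"name": "screw", "state": "free"}],
    "human_moving": True,
}

part = visual_servoing(scene)
prediction = action_recognition(scene)
if visual_handover(prediction, part):
    print(f"handing over {part['name']} (predicted: {prediction})")
```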