
    User evaluation of an interactive learning framework for single-arm and dual-arm robots

    Social robots are expected to adapt to their users and, like their human counterparts, learn from the interaction. In our previous work, we proposed an interactive learning framework that enables a user to intervene and modify a segment of the robot arm trajectory. The framework uses gesture teleoperation and reinforcement learning to learn new motions. In the current work, we compared the user experience with the proposed framework implemented on single-arm and dual-arm Barrett 7-DOF WAM robots equipped with a Microsoft Kinect camera for user tracking and gesture recognition. User performance and workload were measured in a series of trials with two groups of 6 participants, with the two robot settings used in different order for counterbalancing. The experimental results showed that, for the same task, users required less time and produced shorter robot trajectories with the single-arm robot than with the dual-arm robot. The results also showed that the users who performed the task with the single-arm robot first experienced considerably less workload when performing the task with the dual-arm robot, while achieving a higher task success rate in a shorter time.

    Interactive Perception Based on Gaussian Process Classification for House-Hold Objects Recognition and Sorting

    We present an interactive perception model for object sorting based on Gaussian Process (GP) classification that is capable of recognizing object categories from point cloud data. In our approach, FPFH features are extracted from point clouds to describe the local 3D shape of objects, and a Bag-of-Words coding method is used to obtain an object-level vocabulary representation. Multi-class Gaussian Process classification is employed to provide a probabilistic estimate of the identity of the object and plays a key role in the interactive perception cycle, modelling perception confidence. We show results from simulated input data on both SVM and GP based multi-class classifiers to validate the recognition accuracy of our proposed perception model. Our results demonstrate that by using a GP-based classifier, we obtain true positive classification rates of up to 80%. Our semi-autonomous object sorting experiments show that the proposed GP-based interactive sorting approach outperforms random sorting by up to 30% when applied to scenes comprising configurations of household objects.
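    To make the described pipeline concrete, the sketch below encodes precomputed FPFH descriptors with a Bag-of-Words codebook and classifies them with a multi-class GP. It uses scikit-learn purely for illustration; the vocabulary size, kernel, and function names are assumptions, not the authors' implementation.

```python
# Minimal sketch of an FPFH -> Bag-of-Words -> multi-class GP pipeline.
# Assumption: 33-D FPFH descriptors are already extracted per object (e.g. with PCL).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def build_vocabulary(fpfh_per_object, n_words=50):
    """Learn a Bag-of-Words codebook over all local FPFH descriptors."""
    return KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(fpfh_per_object))

def encode(fpfh, vocabulary):
    """Encode one object's descriptors as a normalized visual-word histogram."""
    hist = np.bincount(vocabulary.predict(fpfh),
                       minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()

def train(fpfh_per_object, labels):
    """Fit the multi-class GP classifier on object-level BoW histograms."""
    vocabulary = build_vocabulary(fpfh_per_object)
    X = np.array([encode(f, vocabulary) for f in fpfh_per_object])
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, labels)
    return vocabulary, gp

def classify(fpfh, vocabulary, gp):
    """Return the predicted category and class probabilities (perception confidence)."""
    x = encode(fpfh, vocabulary).reshape(1, -1)
    return gp.predict(x)[0], gp.predict_proba(x)[0]
```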

    Robot motion adaptation through user intervention and reinforcement learning

    Assistant robots are designed to perform specific tasks for the user, but their performance is rarely optimal, hence they are required to adapt to user preferences or new task requirements. In previous work, the potential of an interactive learning framework based on user intervention and reinforcement learning (RL) was assessed. The framework allowed the user to correct an unfitted segment of the robot trajectory by using hand movements to guide the robot along a corrective path. So far, only the usability of the framework had been evaluated through experiments with users. In the current work, the framework is described in detail and its ability to learn from a set of sample trajectories using an RL algorithm is analyzed. To evaluate the learning performance, three versions of the framework are proposed that differ in the method used to obtain the sample trajectories: human-guided learning, autonomous learning, and combined human-guided and autonomous learning. The results show that the combination of human-guided and autonomous learning achieved the best performance; although it needed more sample trajectories than human-guided learning alone, it also required less user involvement. Autonomous learning alone obtained the lowest reward value and needed the highest number of sample trajectories.
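    As a rough illustration of how the three variants differ only in where the sample trajectories come from, the sketch below mixes user-corrected and autonomously perturbed samples in a simple reward-driven loop; the reward function, trajectory representation, and mixing fraction are placeholders, not the paper's formulation.

```python
# Rough illustration (not the paper's algorithm): a reward-driven loop whose sample
# trajectories come from user corrections, random perturbations, or a mix of both.
# reward_fn, user_correction_fn, and perturb_fn are placeholders supplied by the caller.
import random

def learn_trajectory(initial_traj, reward_fn, user_correction_fn, perturb_fn,
                     mode="combined", n_samples=50, user_fraction=0.2):
    """mode: 'human' (all samples are user corrections), 'autonomous'
    (all samples are random perturbations), or 'combined' (a small fraction
    of user corrections, the rest autonomous)."""
    best_traj, best_reward = initial_traj, reward_fn(initial_traj)
    for _ in range(n_samples):
        use_user = (mode == "human"
                    or (mode == "combined" and random.random() < user_fraction))
        candidate = user_correction_fn(best_traj) if use_user else perturb_fn(best_traj)
        r = reward_fn(candidate)
        if r > best_reward:                 # keep the highest-reward sample so far
            best_traj, best_reward = candidate, r
    return best_traj, best_reward
```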

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of existing uses of service robots by disabled and elderly people and of advances in technology that will make new uses possible, and provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Interactive Imitation Learning of Bimanual Movement Primitives

    Performing bimanual tasks with dual robotic setups can drastically increase the impact on industrial and daily life applications. However, performing a bimanual task brings many challenges, such as synchronization and coordination of the single-arm policies. This article proposes the Safe, Interactive Movement Primitives Learning (SIMPLe) algorithm to teach and correct single- or dual-arm impedance policies directly from human kinesthetic demonstrations. Moreover, it proposes a novel graph encoding of the policy based on Gaussian Process Regression (GPR), where the single-arm motion is guaranteed to converge close to the trajectory and then towards the demonstrated goal. Regulating the robot stiffness according to the epistemic uncertainty of the policy allows the motion to be easily reshaped with human feedback and/or adapted to external perturbations. We tested the SIMPLe algorithm on a real dual-arm setup where the teacher gave separate single-arm demonstrations and then successfully synchronized them using only kinesthetic feedback, or where the original bimanual demonstration was locally reshaped to pick a box at a different height.
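    A minimal sketch of the central mechanism, stiffness scaled down where the GPR policy is epistemically uncertain, is shown below; the kernel, the stiffness limit K_MAX, and the attractor interface are assumptions made for illustration rather than the SIMPLe implementation.

```python
# Sketch: a GPR policy whose commanded Cartesian stiffness drops where the model is
# uncertain, so a human can easily reshape the motion there. Values are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

K_MAX = 600.0  # maximum Cartesian stiffness [N/m] (assumed value)

def fit_policy(phase, positions):
    """Fit a GPR from a scalar phase variable to demonstrated end-effector positions."""
    kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-4)
    return GaussianProcessRegressor(kernel=kernel).fit(phase.reshape(-1, 1), positions)

def attractor_and_stiffness(policy, s):
    """Return the attractor point and a stiffness scaled by predictive certainty."""
    mean, std = policy.predict(np.array([[s]]), return_std=True)
    certainty = np.exp(-std.mean())   # near 1 close to the demonstration, lower far away
    return mean[0], K_MAX * certainty
```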

    Flexible human-robot cooperation models for assisted shop-floor tasks

    The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots, i.e., robots able to work alongside and together with humans, could bring to the whole production process. In this context, an enabling technology yet unreached is the design of flexible robots able to deal at all levels with humans' intrinsic variability, which is not only a necessary element for a comfortable working experience for the person but also a precious capability for efficiently dealing with unexpected events. In this paper, a sensing, representation, planning, and control architecture for flexible human-robot cooperation, referred to as FlexHRC, is proposed. FlexHRC relies on wearable sensors for human action recognition, AND/OR graphs for the representation of and reasoning upon cooperation models, and a Task Priority framework to decouple action planning from robot motion planning and control.
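    To give a flavour of the cooperation-model representation, here is a toy AND/OR graph in which an OR node is satisfied by any completed child and an AND node only when all of its children are complete; the node types and example task names are illustrative assumptions, not FlexHRC's actual model.

```python
# Toy AND/OR graph for a cooperation model: leaves are actions (by human or robot),
# OR nodes offer alternative ways to achieve a subgoal, AND nodes require all parts.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "leaf"                 # "leaf", "and", or "or"
    children: List["Node"] = field(default_factory=list)
    done: bool = False                 # set True when the action is observed/executed

    def solved(self) -> bool:
        if self.kind == "leaf":
            return self.done
        states = [c.solved() for c in self.children]
        return all(states) if self.kind == "and" else any(states)

# Example (hypothetical task): the assembly is done when the part is placed AND
# fastened, and fastening can be done either by the human or by the robot.
fasten = Node("fasten", "or", [Node("human_fastens"), Node("robot_fastens")])
assembly = Node("assembly", "and", [Node("place_part"), fasten])
print(assembly.solved())  # False until the corresponding actions are marked done
```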

    Physical human-robot collaboration: Robotic systems, learning methods, collaborative strategies, sensors, and actuators

    This article presents a state-of-the-art survey of robotic systems, sensors, actuators, and collaborative strategies for physical human-robot collaboration (pHRC). It starts with an overview of robotic systems with cutting-edge technologies (sensors and actuators) suitable for pHRC operations and of the intelligent assist devices employed in pHRC. Sensors, being among the essential components for establishing communication between a human and a robotic system, are surveyed; the sensor supplies the signal needed to drive the robotic actuators. The survey reveals that the design of new-generation collaborative robots and other intelligent robotic systems has paved the way for sophisticated learning techniques and control algorithms to be deployed in pHRC, and it identifies the relevant components that need to be considered for effective pHRC. Finally, the major advances are discussed, and some research directions and future challenges are presented.

    Online, interactive user guidance for high-dimensional, constrained motion planning

    We consider the problem of planning a collision-free path for a high-dimensional robot. Specifically, we suggest a planning framework where a motion-planning algorithm can obtain guidance from a user. In contrast to existing approaches that try to speed up planning by incorporating experiences or demonstrations ahead of planning, we suggest seeking user guidance only when the planner identifies that it ceases to make significant progress towards the goal. Guidance is provided in the form of an intermediate configuration $\hat{q}$, which is used to bias the planner to go through $\hat{q}$. We demonstrate our approach for the case where the planning algorithm is Multi-Heuristic A* (MHA*) and the robot is a 34-DOF humanoid. We show that our approach allows highly-constrained paths to be computed with little domain knowledge. Without our approach, solving such problems requires carefully crafting domain-dependent heuristics.
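    One simple way to realize this bias, sketched below, is to add an extra (possibly inadmissible) heuristic that estimates the cost-to-go through $\hat{q}$ and hand it to MHA*'s inadmissible heuristic set; the Euclidean configuration-space metric and the planner interface shown are assumptions, not the authors' exact formulation.

```python
# Sketch (not the authors' code) of biasing search through a user-supplied
# intermediate configuration q_hat: an extra heuristic
# h(q) = dist(q, q_hat) + dist(q_hat, q_goal) pulls expansions through q_hat.
import numpy as np

def make_guidance_heuristic(q_hat, q_goal):
    """Build a heuristic that steers expansions towards q_hat before the goal."""
    q_hat, q_goal = np.asarray(q_hat, float), np.asarray(q_goal, float)
    tail = np.linalg.norm(q_goal - q_hat)   # fixed remaining estimate beyond q_hat
    def h(q):
        return np.linalg.norm(np.asarray(q, float) - q_hat) + tail
    return h

# Hypothetical usage with a planner exposing an inadmissible-heuristic hook:
# planner.add_inadmissible_heuristic(make_guidance_heuristic(q_hat_from_user, q_goal))
```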
