
    Interactive Task Encoding System for Learning-from-Observation

    We introduce a practical pipeline that interactively encodes multimodal human demonstrations for robot teaching. The pipeline is designed as an input system for a framework called Learning-from-Observation (LfO), which aims to program household robots with manipulative tasks through few-shot human demonstrations, without coding. While most previous LfO systems rely on visual demonstration alone, recent research on robot teaching has shown the effectiveness of verbal instruction in making recognition robust and teaching interactive. To the best of our knowledge, however, no LfO system has yet been proposed that utilizes both verbal instruction and interaction, namely multimodal LfO. This paper proposes the Interactive Task Encoding System (ITES) as an input pipeline for multimodal LfO. ITES assumes that the user teaches step by step, pausing hand movements in order to match the granularity of human instructions with the granularity of robot execution. ITES recognizes tasks based on the step-by-step verbal instructions that accompany the hand movements, and the recognition is made robust through interactions with the user. We test ITES on a real robot and show that the user can successfully teach multiple operations through multimodal demonstrations. The results suggest the usefulness of ITES for multimodal LfO. The source code is available at https://github.com/microsoft/symbolic-robot-teaching-interface.
    Comment: 7 pages, 10 figures. Last updated January 24th, 202
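
    To make the pause-matching idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation; the actual pipeline is in the linked repository) of how paused hand movements can be paired with time-stamped verbal instructions to yield one symbolic step per utterance. The thresholds, input formats, and function names are all assumptions.

    ```python
    import numpy as np

    PAUSE_SPEED = 0.02  # m/s; hand counts as paused below this speed (assumed threshold)
    MIN_PAUSE_S = 0.5   # minimum stillness duration to accept as a step boundary (assumed)

    def find_pauses(positions, timestamps):
        """Return (start, end) windows in which the tracked hand is effectively still.

        positions: (N, 3) array of hand positions; timestamps: (N,) array of seconds.
        """
        speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / np.diff(timestamps)
        pauses, start = [], None
        for t, s in zip(timestamps[1:], speeds):
            if s < PAUSE_SPEED:
                start = t if start is None else start
            else:
                if start is not None and t - start >= MIN_PAUSE_S:
                    pauses.append((start, t))
                start = None
        if start is not None and timestamps[-1] - start >= MIN_PAUSE_S:
            pauses.append((start, timestamps[-1]))  # demonstration ended mid-pause
        return pauses

    def encode_steps(pauses, utterances):
        """Pair each recognized utterance (time, text) with the pause it falls into.

        Utterances that match no pause are returned separately so the system can
        ask the user to repeat them -- the interactive part of the pipeline.
        """
        steps, unmatched = [], []
        for t, text in utterances:
            if any(a <= t <= b for a, b in pauses):
                steps.append({"time": t, "instruction": text})
            else:
                unmatched.append({"time": t, "instruction": text})
        return steps, unmatched
    ```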

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate its perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces, and to perform complex interactions with the human operator.
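
    As a rough illustration of how vision-acquired gestures can be turned into a compact posture representation for learning, the sketch below estimates one flexion angle per finger from tracked 3-D joint positions. The landmark layout and names are assumptions, not the paper's representation.

    ```python
    import numpy as np

    def flexion_angle(mcp, pip, tip):
        """Approximate a finger's flexion from three tracked 3-D points
        (knuckle, middle joint, fingertip); 0 means a fully straight finger."""
        v1, v2 = mcp - pip, tip - pip
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.pi - np.arccos(np.clip(cos, -1.0, 1.0))

    def posture_vector(landmarks):
        """Collapse one vision frame into a five-element posture vector, a form
        that a sequence learner or an imitating hand controller can consume.

        landmarks: dict mapping finger name -> (mcp, pip, tip) 3-D points.
        """
        fingers = ("thumb", "index", "middle", "ring", "little")
        return np.array([flexion_angle(*landmarks[f]) for f in fingers])
    ```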

    A framework for digitisation of manual manufacturing task knowledge using gaming interface technology

    Intense market competition and the global skill-supply crunch are hurting the manufacturing industry, which is heavily dependent on skilled labour. To remain competitive, companies must look for innovative ways to acquire manufacturing skills from their experts and transfer them to novices and, eventually, to machines. Both industry and research lack systematic processes for the cost-effective capture and transfer of human skills. The aim of this research is therefore to develop a framework for the digitisation of manual manufacturing task knowledge, a major constituent of which is human skill. The proposed digitisation framework is based on a theory of human-workpiece interactions developed in this research. The unique aspect of the framework is the use of consumer-grade gaming interface technology to capture and record manual manufacturing tasks in digital form, enabling the extraction, decoding and transfer of the manufacturing knowledge constituents associated with the task. The framework is implemented, tested and refined using five case studies: one toy assembly task, two real-life-like assembly tasks, one simulated assembly task and one real-life composite layup task. It is validated against the outcomes of the case studies and a benchmarking exercise conducted to evaluate its performance. This research contributes to knowledge in five main areas: (1) a theory of human-workpiece interactions to decipher human behaviour in manual manufacturing tasks; (2) a cohesive and holistic framework to digitise manual manufacturing task knowledge, especially tacit knowledge such as human action and reaction skills; (3) the use of low-cost gaming interface technology to capture human actions and the effect of those actions on workpieces during a manufacturing task; (4) a new way of using hidden Markov modelling to produce digital skill models that represent human ability to perform complex tasks; and (5) the extraction and decoding of manufacturing knowledge constituents from the digital skill models.
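
    The abstract names hidden Markov modelling as the mechanism behind the digital skill models. Below is a minimal sketch of that idea using the open-source hmmlearn library (an assumption; the work does not specify its tooling) on motion-feature sequences captured from a gaming interface, with hypothetical file names.

    ```python
    import numpy as np
    from hmmlearn import hmm

    # Each demonstration: a (T, D) array of per-frame motion features
    # (e.g. hand position plus workpiece pose from the gaming interface).
    demos = [np.load(f) for f in ("demo1.npy", "demo2.npy")]  # hypothetical files

    X = np.vstack(demos)               # hmmlearn takes one stacked array...
    lengths = [len(d) for d in demos]  # ...plus the per-sequence lengths

    # One hidden state per assumed task phase (reach, grasp, place, release).
    skill_model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=200)
    skill_model.fit(X, lengths)

    # Scoring a novice attempt: a low per-frame log-likelihood under the
    # expert-trained model flags deviation from the captured skill.
    attempt = np.load("novice_attempt.npy")  # hypothetical file
    print(skill_model.score(attempt) / len(attempt))
    ```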

    Remote real-time collaboration through synchronous exchange of digitised human-workpiece interactions

    In today's highly globalised manufacturing ecosystem, product design and verification activities, production and inspection processes, and technical support services are spread across global supply chains and customer networks. Collaborative infrastructures that enable global teams to work together in real time on complex manufacturing-related tasks are therefore highly desirable. This work demonstrates the design and implementation of a remote real-time collaboration platform that combines human motion capture, powered by infrared depth-imaging sensors, with a synchronous data transfer protocol over computer networks. The unique functionality of the proposed platform is the sharing of physical context during a collaboration session: it exchanges not only human actions but also the effects of those actions on the workpieces and the task environment. Results show that the platform enables teams to work remotely on a common engineering problem at the same time and to receive immediate feedback from each other, making it valuable for collaborative design, inspection and verification tasks in the factories of the future. An additional benefit of the implemented platform is its use of low-cost, off-the-shelf equipment, making it accessible to SMEs that are connected to larger organisations via complex supply chains.
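
    To illustrate what the synchronous exchange of digitised interactions might look like at the wire level, here is a hypothetical length-prefixed JSON framing over TCP; the frame contents and field names are assumptions, not the platform's actual protocol.

    ```python
    import json
    import socket
    import struct

    def send_frame(sock, frame):
        """Send one interaction frame (skeleton joints plus workpiece state)
        as a 4-byte-length-prefixed JSON message."""
        payload = json.dumps(frame).encode()
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def _recv_exact(sock, n):
        """Read exactly n bytes, or return None if the peer closes first."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                return None
            buf += chunk
        return buf

    def recv_frame(sock):
        """Read exactly one length-prefixed frame from the peer."""
        header = _recv_exact(sock, 4)
        if header is None:
            return None
        (size,) = struct.unpack("!I", header)
        payload = _recv_exact(sock, size)
        return json.loads(payload) if payload else None

    # A hypothetical frame captured from a depth sensor at one site:
    frame = {"t": 12.833,
             "joints": {"hand_right": [0.41, 0.22, 0.97]},
             "workpiece": {"pose": [0.50, 0.10, 0.00, 0, 0, 0, 1]}}
    ```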

    Computer-simulated environment for training: challenge of efficacy evaluation

    Computer-assisted instruction has been around for decades, and there has been much speculation about the benefits of computer-mediated learning. Numerous applications incorporating emerging technologies have been developed across different domains. In recent years, advanced technologies such as Augmented Reality (AR) and Virtual Reality (VR) have received much attention for their potential to create interactive learning experiences. However, the related literature and empirical studies indicate that learning effects in computer-simulated environments, or Virtual Environments (VEs), are not systematically tested, and that performance and learning in such environments need to be evaluated through more rigorous methods. This paper suggests that 1) the efficacy of VEs should be closely examined, not only in terms of how easy VE-based training systems are to use, but also in terms of how effective the learning is; and 2) the evaluation of learning in computer-simulated environments needs to be reconsidered with respect to the theoretical bases and evaluation methodologies relevant to measuring training effectiveness in virtual learning environments. This paper explains how learning can be assessed in VEs through the lens of training evaluation.