8 research outputs found

    Understanding expertise in surgical gesture by means of Hidden Markov Models

    Minimally invasive surgery (MIS) has become very widespread in the last ten years. Because of the difficulties surgeons encounter in learning and mastering this technique, great importance attaches to improving training procedures, improving surgical instrumentation and robotically automating the surgical gesture. All of these goals require analysis of surgical performance, with the aim of understanding it and defining what constitutes expertise in surgical gesture. In this paper, Hidden Markov Models (HMMs) are used for the first time as a tool for understanding surgical performance and the human factors that characterize it. In our experiments we used position data describing the movements of the tools during exercises performed on a surgical simulator. Using Hidden Markov theory, we build a model of expert surgical performance that can evaluate surgical capability and distinguish between expert and non-expert surgeons. By analyzing the trained model of expert performance, we show that it is possible to deduce information about the features that characterize surgical expertise.
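
    As a rough illustration of the approach described above (not code from the paper), the sketch below trains a Gaussian-emission HMM on expert tool-position sequences and scores a new demonstration by log-likelihood. The hmmlearn library, the state count and the data layout are all assumptions.

```python
# Minimal sketch (not the authors' code): score a trainee gesture against an
# HMM trained on expert tool-position data, using the hmmlearn library.
import numpy as np
from hmmlearn import hmm

# Assumed data layout: each demonstration is a (T, 3) array of x/y/z tool positions.
expert_demos = [np.random.randn(200, 3) for _ in range(10)]   # placeholder data
trainee_demo = np.random.randn(180, 3)                        # placeholder data

# Concatenate expert sequences; hmmlearn takes per-sequence lengths separately.
X = np.concatenate(expert_demos)
lengths = [len(d) for d in expert_demos]

# Train a Gaussian-emission HMM as the "expert model" (state count is a guess).
expert_model = hmm.GaussianHMM(n_components=6, covariance_type="diag", n_iter=50)
expert_model.fit(X, lengths)

# Average log-likelihood per sample; low scores suggest non-expert behaviour.
score = expert_model.score(trainee_demo) / len(trainee_demo)
print(f"per-sample log-likelihood under expert model: {score:.2f}")
```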

    Robot Learning From Human Observation Using Deep Neural Networks

    Industrial robots have gained traction in the last twenty years and have become an integral component in any sector pursuing automation. The automotive industry in particular deploys a wide range of industrial robots across a multitude of assembly lines worldwide. These robots perform tasks with the utmost repeatability and incomparable speed, and it is that speed and consistency that has always made a robotic task an upgrade over the same task completed by a human. The cost savings provide a strong return on investment, prompting corporations to automate and deploy robotic solutions wherever feasible. The cost of commissioning and setup, however, is the largest deterrent in any decision regarding robotics and automation. Robots are traditionally programmed by robotics technicians in a manual process carried out in a well-structured environment. This thesis examines the option of eliminating the programming and commissioning portion of robotic integration. If the environment is dynamic and subject to variation in parts, lighting and part placement in the cell, the robot will struggle to function because it cannot adapt to these variables. If a few cameras are introduced to capture the operator’s motions and the part variability, Learning from Demonstration (LfD) can be applied to address this prevalent issue in today’s automotive industry. With the assistance of machine learning algorithms, deep neural networks and transfer learning, LfD can become a viable solution. To this end, a robotic cell that learns from demonstration was developed. The proposed approach uses computer vision to observe human actions and deep learning to perceive the demonstrator’s actions and the manipulated objects.
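
    For illustration only, the sketch below shows one common way to realise the transfer-learning component described above: an ImageNet-pretrained backbone reused as a frozen feature extractor with a new head for the demonstrator's action classes. The torchvision backbone, the class count and the data shapes are assumptions, not details from the thesis.

```python
# Illustrative transfer-learning sketch (not the thesis implementation):
# recognise the demonstrator's action class from camera frames with torchvision.
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIONS = 5  # assumed number of demonstrated action classes (pick, place, ...)

# Start from an ImageNet-pretrained backbone and freeze its weights.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False

# Replace the final layer with a head for the action classes observed in the cell.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ACTIONS)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a hypothetical batch of demonstration frames and labels.
frames = torch.randn(8, 3, 224, 224)          # placeholder camera frames
labels = torch.randint(0, NUM_ACTIONS, (8,))  # placeholder action labels
loss = criterion(backbone(frames), labels)
loss.backward()
optimizer.step()
```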

    Teaching Accommodation Task Skills: from Human Demonstration to Robot Control via Artificial Neural Networks

    A simple edge-mating task, performed automatically by accommodation control, was used to study the feasibility of using data collected during a human demonstration to train an artificial neural network (ANN) to control a common robot manipulator to complete similar tasks. The two-dimensional (planar) edge-mating task, which aligns a peg normal to a flat table, served as the basis for the investigation. A simple multi-layer perceptron (MLP) ANN with a single hidden layer and linear output nodes was trained using the back-propagation algorithm with momentum. The inputs to the ANN were the planar components of the contact force between the peg and the table; the outputs were the planar components of a commanded velocity. The controller was designed so that the ANN could learn a configuration-independent solution by operating in tool-frame coordinates. As a performance baseline, a simple accommodation matrix capable of completing the edge-mating task was determined and implemented in simulation and on the PUMA manipulator. The accommodation matrix was also used to synthesize various forms of training data, which were used to gain insight into the function and vulnerabilities of the proposed control scheme. Human demonstration data were collected using a gravity-compensated PUMA 562 manipulator and a custom-built planar low-impedance motion measurement system (PLIMMS).
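
    The controller structure is concrete enough to sketch: a single-hidden-layer MLP mapping the two planar force components to two commanded velocity components, trained with back-propagation plus momentum. In the sketch below, the hidden-layer size, learning rate and data are assumptions; only the input/output layout follows the abstract.

```python
# Minimal sketch of the described controller structure (not the original code):
# a single-hidden-layer MLP mapping planar contact forces to commanded velocities,
# trained with back-propagation plus momentum.
import torch
import torch.nn as nn

# 2 inputs (tool-frame Fx, Fy) -> hidden layer -> 2 linear outputs (vx, vy command).
net = nn.Sequential(
    nn.Linear(2, 10),   # hidden-layer size is a guess
    nn.Tanh(),
    nn.Linear(10, 2),   # linear output nodes, as described in the abstract
)

optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
criterion = nn.MSELoss()

# Hypothetical demonstration data: measured contact forces and the velocities
# the demonstrator commanded, both expressed in tool-frame coordinates.
forces = torch.randn(256, 2)
velocities = torch.randn(256, 2)

for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(net(forces), velocities)
    loss.backward()
    optimizer.step()

# At run time the trained network plays the role of the accommodation matrix:
commanded_velocity = net(torch.tensor([[1.5, -0.3]]))
```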

    A framework for digitisation of manual manufacturing task knowledge using gaming interface technology

    Intense market competition and the global skill supply crunch are hurting the manufacturing industry, which is heavily dependent on skilled labour. To remain competitive, companies must look for innovative ways to acquire manufacturing skills from their experts and transfer them to novices and, eventually, to machines. Both the manufacturing industry and the research community lack systematic processes for the cost-effective capture and transfer of human skills. The aim of this research is therefore to develop a framework for the digitisation of manual manufacturing task knowledge, a major constituent of which is human skill. The proposed digitisation framework is based on a theory of human-workpiece interactions developed in this research. The unique aspect of the framework is the use of consumer-grade gaming interface technology to capture and record manual manufacturing tasks in digital form, enabling the extraction, decoding and transfer of the manufacturing knowledge constituents associated with the task. The framework is implemented, tested and refined using five case studies: one toy assembly task, two real-life-like assembly tasks, one simulated assembly task and one real-life composite layup task. It is validated on the basis of the case study outcomes and a benchmarking exercise conducted to evaluate its performance. This research contributes to knowledge in five main areas: (1) a theory of human-workpiece interactions to decipher human behaviour in manual manufacturing tasks; (2) a cohesive and holistic framework to digitise manual manufacturing task knowledge, especially tacit knowledge such as human action and reaction skills; (3) the use of low-cost gaming interface technology to capture human actions and their effect on workpieces during a manufacturing task; (4) a new way of using hidden Markov modelling to produce digital skill models that represent the human ability to perform complex tasks; and (5) the extraction and decoding of manufacturing knowledge constituents from the digital skill models.
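
    As a hedged sketch of how a digital skill model of this general kind might be built (not the thesis implementation), the code below quantises captured hand trajectories into a symbol codebook and fits a discrete HMM over the symbol sequences. The hmmlearn and scikit-learn calls, the codebook size and the data are all assumptions.

```python
# Rough sketch of one way to build a "digital skill model" from captured motion
# (not the thesis implementation): quantise tracked hand positions into a symbol
# codebook, then fit a discrete HMM over the symbol sequences.
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn import hmm  # CategoricalHMM requires hmmlearn >= 0.3

# Assumed input: hand trajectories from a consumer motion sensor, one (T, 3)
# array of x/y/z positions per recorded demonstration of the manual task.
demos = [np.random.randn(300, 3) for _ in range(8)]  # placeholder captures

# Build a codebook of elementary movements and encode each demo as symbols.
codebook = KMeans(n_clusters=16, n_init=10).fit(np.concatenate(demos))
encoded = [codebook.predict(d).reshape(-1, 1) for d in demos]

# Fit a discrete-emission HMM as the skill model for this task.
skill_model = hmm.CategoricalHMM(n_components=8, n_iter=50)
skill_model.fit(np.concatenate(encoded), [len(e) for e in encoded])

# A new capture can then be scored against the skill model.
new_capture = codebook.predict(np.random.randn(250, 3)).reshape(-1, 1)
print("log-likelihood under skill model:", skill_model.score(new_capture))
```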