
    The case for the development of novel human skills capture methodologies

    As the capabilities of industrial automation grow, so does the ability to supplement or replace the more tacit, cognitive skills of manual operators. Whilst models regarding automation implementation have been published within the human factors literature, they neglect to discuss the initial capture of the task, and automation experts currently lack a formal tool to assess feasibility. The definition of what is meant by "human skill" is discussed, and three crucial theoretical underpinnings are proposed for a novel, automation-specific skill capture methodology: emphasis upon procedural rules, emphasis upon action-facilitating factors, and a taxonomy of skill

    A Novel Haptic Simulator for Evaluating and Training Salient Force-Based Skills for Laparoscopic Surgery

    Laparoscopic surgery has evolved from an 'alternative' surgical technique to one currently considered mainstream. However, learning this complex technique poses unique challenges for novice surgeons due to their 'distance' from the surgical site. One of the main challenges in acquiring laparoscopic skills is the acquisition of force-based, or haptic, skills. The neglect of this aspect of skills training by popular training methods (e.g., the Fundamentals of Laparoscopic Surgery (FLS) curriculum) has led many medical skills professionals to research new, efficient methods for haptic skills training. The overarching goal of this research was to demonstrate that a set of simple, simulator-based haptic exercises can be developed and used to train users in the skilled application of forces with surgical tools. A set of salient, or core, haptic skills underlying proficient laparoscopic surgery was identified based on published time-motion studies. Low-cost, computer-based haptic training simulators were prototyped to simulate each of the identified salient haptic skills. All simulators were tested for construct validity by comparing surgeons' performance on the simulators with the performance of novices with no previous laparoscopic experience. An integrated 'core haptic skills' simulator capable of rendering the three validated haptic skills was built. To examine the efficacy of this novel salient haptic skills training simulator, novice participants were tested for training improvements in a detailed study. Results from the study demonstrated that simulator training enabled users to significantly improve force application for all three haptic tasks. Research outcomes from this project could greatly influence surgical skills simulator design, resulting in more efficient training

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans adapt to uncertainties easily, the skill-based performance that relates to their tacit knowledge cannot, most of the time, be easily articulated. Even though an automation solution need not fully imitate human motion, since some motions are unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it then to be transferred to the robot. This thesis aims to reduce robot programming effort significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from a human teacher to a robot, using probability-encoding approaches to model observations and state-transition uncertainties. In close- or actual-contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means to record state-action examples without restricting human experts during the skilled execution of their tasks. Firstly, to track human motion accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. 
The data fusion method was able to overcome the occlusion and frame-flipping problems in the two-camera Vicon setup and the drifting problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method of indicating contact feedback during manual manipulations. A muscle-force model using a Time-Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimates and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state-recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy. To simplify the validation procedure, additional demonstrations from the teacher, rather than the robot, were used to verify the reproduction performance of the policy, assuming the human teacher and robot learner to be physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the reproduced motions from GMR were acceptable in these additional tests. 
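    The hybrid tracking idea above, dead-reckoning on the IMU while Vicon markers are occluded and pulling the estimate back toward Vicon when they are visible, can be sketched as a simple complementary filter. This is a minimal single-angle illustration on synthetic data with an invented gain `alpha`, not the thesis's actual fusion algorithm:

```python
import numpy as np

def fuse(vicon, imu_delta, alpha=0.9):
    """Blend IMU orientation increments with Vicon angle measurements.
    When a Vicon sample is occluded (NaN), dead-reckon on the IMU alone;
    otherwise pull the estimate toward Vicon to cancel IMU drift."""
    est = np.zeros(len(vicon))
    angle = vicon[0] if not np.isnan(vicon[0]) else 0.0
    for i in range(len(vicon)):
        angle += imu_delta[i]           # integrate gyro increment (drifts)
        if not np.isnan(vicon[i]):      # marker visible: correct the drift
            angle = alpha * angle + (1 - alpha) * vicon[i]
        est[i] = angle
    return est

# synthetic trial: true angle is constant at 1.0 rad, the gyro has a
# constant bias, and Vicon is occluded for samples 50..79
vicon = np.full(200, 1.0)
vicon[50:80] = np.nan
imu_delta = np.full(200, 0.01)          # IMU-only drifts by 2.0 rad overall
est = fuse(vicon, imu_delta)
```

During the occlusion the estimate drifts with the IMU bias, then converges back once Vicon reappears; the IMU-only integral would end 2.0 rad off.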
The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with various uncertainties during task execution
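    As a rough illustration of the reproduction step, the sketch below fits a joint Gaussian mixture over (time, position) pairs from synthetic "demonstrations" and applies plain Gaussian Mixture Regression to recover the motion at a query time. It omits the HMM state-recognition stage and the thesis's modifications to GMR; all data and parameters here are invented:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmr(gmm, x, in_dims, out_dims):
    """Gaussian Mixture Regression: E[y | x] under a joint GMM over (x, y)."""
    K = gmm.n_components
    # responsibility of each component for the query input
    h = np.array([
        gmm.weights_[k] * multivariate_normal.pdf(
            x, gmm.means_[k][in_dims],
            gmm.covariances_[k][np.ix_(in_dims, in_dims)])
        for k in range(K)])
    h /= h.sum()
    y = np.zeros(len(out_dims))
    for k in range(K):
        mu_i = gmm.means_[k][in_dims]
        mu_o = gmm.means_[k][out_dims]
        s_ii = gmm.covariances_[k][np.ix_(in_dims, in_dims)]
        s_oi = gmm.covariances_[k][np.ix_(out_dims, in_dims)]
        # each component contributes a local linear regressor
        y += h[k] * (mu_o + s_oi @ np.linalg.solve(s_ii, x - mu_i))
    return y

# toy "demonstrations": five noisy traces of a 1-D motion profile
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0.0, 1.0, 100), 5)
pos = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.02, t.size)
data = np.column_stack([t, pos])

gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(data)
pred = gmr(gmm, np.array([0.25]), np.array([0]), np.array([1]))
```

At t = 0.25 the demonstrations peak near 1.0, and the regression recovers a value close to that despite the noise across traces.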

    Interactive multimodal Path Planning in immersion

    Recent studies have defined interactive path planners for simulations involving a human operator. Such path planners enable a human operator to share control with an automatic planner and are based on Robotics and Virtual Reality (VR) methods. This paper proposes a novel architecture for such an interactive planner. It enhances interaction with the user by adding topological and semantic representations to the purely geometric model traditionally used

    Task analysis of discrete and continuous skills: a dual methodology approach to human skills capture for automation

    There is a growing requirement within the field of intelligent automation for a formal methodology to capture and classify the explicit and tacit skills deployed by operators during complex task performance. This paper describes the development of a dual-methodology approach which recognises the inherent differences between continuous tasks and discrete tasks and proposes separate methodologies for each. Both methodologies emphasise capturing operators' physical, perceptual, and cognitive skills; however, they fundamentally differ in their approach. The continuous task analysis recognises the non-arbitrary nature of operation ordering and that identifying suitable cues for each subtask is a vital component of the skill. The discrete task analysis is a more traditional, chronologically ordered methodology, intended to increase the resolution of skill classification and to be practical for assessing complex tasks involving multiple unique subtasks, through the use of a taxonomy of generic physical, perceptual, and cognitive actions

    Continuous Operator Authentication for Teleoperated Systems Using Hidden Markov Models [post-print]

    In this article, we present a novel approach for continuous operator authentication in teleoperated robotic processes based on Hidden Markov Models (HMMs). While HMMs were originally developed for and widely used in speech recognition, they have shown great performance in human motion and activity modeling. We draw an analogy between human language and teleoperated robotic processes (words are analogous to a teleoperator's gestures; sentences are analogous to the entire teleoperated task or process) and implement HMMs to model the teleoperated task. To test the continuous authentication performance of the proposed method, we conducted two sets of analyses. We built a virtual reality (VR) experimental environment using a commodity VR headset (HTC Vive) and a haptic-feedback-enabled controller (Sensable PHANToM Omni) to simulate a real teleoperated task, and then conducted an experimental study with 10 subjects. We also performed simulated continuous operator authentication using the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). The performance of the model was evaluated on continuous (real-time) operator authentication accuracy as well as resistance to a simulated impersonation attack. The results suggest that the proposed method achieves 70% (VR experiment) and 81% (JIGSAWS dataset) continuous classification accuracy with a sample window as short as 1 second. It is also capable of detecting an impersonation attack in real time
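    The gesture-to-language analogy can be made concrete with a toy discrete HMM: hypothetical transition and emission matrices stand in for a model trained on the genuine operator's gestures, and a sliding window of gesture symbols is accepted when its per-symbol log-likelihood under that model exceeds a threshold. The article itself models continuous motion features; every parameter and sequence below is invented purely for illustration:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step scaling to avoid underflow)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

# hypothetical 3-state "gesture grammar" of the genuine operator:
# states cycle 0 -> 1 -> 2 and mostly emit the matching symbol
pi = np.array([1.0, 0.0, 0.0])
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.2, 0.0, 0.8]])
B = np.array([[0.9, 0.05, 0.05],
              [0.05, 0.9, 0.05],
              [0.05, 0.05, 0.9]])

def authenticate(window, threshold=-1.2):
    """Accept the window if its per-symbol log-likelihood clears the threshold."""
    return forward_loglik(np.array(window), pi, A, B) / len(window) > threshold

genuine = [0, 0, 1, 1, 2, 2, 0, 0]    # follows the gesture grammar
impostor = [2, 0, 2, 0, 1, 2, 2, 1]   # violates the expected ordering
```

Sliding this check along the incoming gesture stream yields a continuous accept/reject decision, which is the shape of the method the article evaluates.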