10 research outputs found

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges, including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands, due to the difficulty of decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks, with two subjects implanted with intracortical brain-computer interfaces controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulating novel objects in densely cluttered environments.
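
    The arbitration idea above can be illustrated with a minimal sketch: infer the most likely goal from the user's noisy command, then linearly blend the user and autonomous velocity commands under an adjustable assistance level. The cosine-similarity intent inference and the linear blending rule below are illustrative assumptions, not the paper's exact method; all names and values are hypothetical.

```python
# Minimal sketch of shared-control arbitration between a noisy, low-dimensional
# user command and an autonomous policy. Intent inference and blending rule
# are illustrative assumptions, not the paper's formulation.
import numpy as np

def infer_goal(ee_pos, user_vel, goals):
    """Score each candidate goal by how well the user's command points at it."""
    scores = []
    for g in goals:
        to_goal = g - ee_pos
        denom = np.linalg.norm(user_vel) * np.linalg.norm(to_goal)
        scores.append(float(np.dot(user_vel, to_goal) / denom) if denom > 1e-9 else 0.0)
    return int(np.argmax(scores))

def arbitrate(user_vel, auto_vel, assistance):
    """Blend user and autonomous velocity commands; assistance level in [0, 1]."""
    return (1.0 - assistance) * user_vel + assistance * auto_vel

# Toy usage: two candidate goals, a noisy user command roughly toward goal 0.
ee = np.zeros(3)
goals = [np.array([0.5, 0.0, 0.2]), np.array([-0.4, 0.3, 0.1])]
u_user = np.array([0.9, 0.05, 0.3]) + 0.05 * np.random.randn(3)
g = infer_goal(ee, u_user, goals)
u_auto = (goals[g] - ee) / np.linalg.norm(goals[g] - ee)  # straight-line policy
u_cmd = arbitrate(u_user / np.linalg.norm(u_user), u_auto, assistance=0.6)
print("inferred goal:", g, "blended command:", np.round(u_cmd, 3))
```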

    Gesteme-free context-aware adaptation of robot behavior in human–robot cooperation

    Background: Cooperative robotics is gaining acceptance because the typical advantages provided by manipulators are combined with intuitive usage. In particular, hands-on robotics may benefit from adapting the assistant's behavior to the activity currently performed by the user. This requires fast and reliable classification of human activities, as well as strategies to smoothly modify the control of the manipulator. In this scenario, gesteme-based motion classification is inadequate because it needs the observation of a wide percentage of the signal and the definition of a rich vocabulary. Objective: This work presents a system able to recognize the user's current activity without a vocabulary of gestemes, and to adapt the manipulator's dynamic behavior accordingly. Methods and material: An underlying stochastic model fits variations in the user's guidance forces and the resulting trajectories of the manipulator's end-effector with a set of Gaussian distributions. The high-level switching between these distributions is captured with hidden Markov models. The dynamics of the KUKA light-weight robot, a torque-controlled manipulator, are modified with respect to the classified activity using sigmoid-shaped functions. The presented system is validated on a pool of 12 naive users in a scenario that addresses surgical targeting tasks on soft tissue. The robot's assistance is adapted to obtain stiff behavior during activities that require critical accuracy constraints, and higher compliance during wide movements. Both the ability to provide the correct classification at each moment (sample accuracy) and the capability to identify the correct sequence of activities (sequence accuracy) were evaluated. Results: The proposed classifier is fast and accurate in all the experiments conducted (80% sample accuracy after observing ~450 ms of signal). Moreover, recognition of the correct sequence of activities, without unwanted transitions, is guaranteed (sequence accuracy ~90% when computed away from user-desired transitions). Finally, the proposed activity-based adaptation of the robot's dynamics does not compromise smoothness (normalized jerk score < 0.01). Conclusion: The proposed system is able to dynamically assist the operator during cooperation in the presented scenario.
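
    The sigmoid-based adaptation of the manipulator's dynamics can be sketched as follows: given the classifier's current probability that the user is in a fine-targeting activity (in the paper, supplied by the hidden Markov model), a sigmoid schedules the stiffness between a compliant and a stiff setting. The gains and thresholds below are illustrative assumptions, not the study's KUKA controller parameters.

```python
# Minimal sketch of activity-dependent stiffness adaptation: a sigmoid maps
# the classified probability of a fine-targeting activity to a stiffness
# between a compliant and a stiff setting. All values are illustrative.
import numpy as np

K_COMPLIANT = 200.0   # N/m during wide transport movements (assumed value)
K_STIFF = 2000.0      # N/m during fine targeting (assumed value)

def sigmoid_stiffness(p_fine, steepness=12.0, threshold=0.5):
    """Smoothly interpolate stiffness from the classified activity probability."""
    s = 1.0 / (1.0 + np.exp(-steepness * (p_fine - threshold)))
    return K_COMPLIANT + s * (K_STIFF - K_COMPLIANT)

for p in (0.1, 0.4, 0.5, 0.6, 0.9):
    print(f"P(fine-targeting) = {p:.1f} -> K = {sigmoid_stiffness(p):7.1f} N/m")
```

    The smooth sigmoid transition, rather than a hard switch between stiffness levels, is what keeps the reported normalized jerk score low.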

    A Dynamical System Approach to Task-Adaptation in Physical Human-Robot Interaction

    The goal of this work is to enable robots to intelligently and compliantly adapt their motions to the intention of a human during physical Human-Robot Interaction (pHRI) in a multi-task setting. We employ a class of parameterized dynamical systems that allows for smooth and adaptive transitions between encoded tasks. To comply with human intention, we propose a mechanism that adapts generated motions (i.e., the desired velocity) to those intended by the human user (i.e., the real velocity), thereby switching to the most similar task. We provide a rigorous analytical evaluation of our method in terms of stability, convergence, and optimality, yielding an interaction behavior which is safe and intuitive for the human. We investigate our method through experimental evaluations on a range of setups: a 3-DoF haptic device, a 7-DoF manipulator, and a mobile platform.
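
    A minimal sketch of the task-switching mechanism, under the assumption that each task is encoded as a stable linear dynamical system: the belief over tasks is shifted toward the task whose desired velocity best matches the measured (human-influenced) velocity, and the commanded velocity is the belief-weighted mixture. The exponential belief update and all constants are illustrative, not the paper's formulation.

```python
# Minimal sketch of velocity-based task adaptation with parameterized dynamical
# systems. Each task is a stable linear DS; the belief over tasks is updated
# from how well each DS's desired velocity matches the measured velocity.
import numpy as np

class LinearDS:
    """Stable linear dynamical system x_dot = A (x - x*), converging to x*."""
    def __init__(self, attractor):
        self.attractor = np.asarray(attractor, dtype=float)
        self.A = -np.eye(len(self.attractor))
    def velocity(self, x):
        return self.A @ (x - self.attractor)

def adapt(x, v_measured, tasks, belief, beta=4.0):
    """Shift belief toward the task whose dynamics best explain v_measured."""
    errs = np.array([np.linalg.norm(t.velocity(x) - v_measured) for t in tasks])
    belief = belief * np.exp(-beta * errs)
    belief /= belief.sum()
    # Commanded (desired) velocity: belief-weighted mixture of task dynamics.
    v_desired = sum(b * t.velocity(x) for b, t in zip(belief, tasks))
    return v_desired, belief

tasks = [LinearDS([0.5, 0.0]), LinearDS([-0.5, 0.5])]
belief = np.full(len(tasks), 0.5)
x = np.array([0.0, 0.2])
v_human = tasks[1].velocity(x) + 0.02 * np.random.randn(2)  # human pushes toward task 1
v_des, belief = adapt(x, v_human, tasks, belief)
print("desired velocity:", np.round(v_des, 3), "task belief:", np.round(belief, 2))
```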

    A method for understanding and digitizing manipulation activities using programming by demonstration in robotic applications

    Robots are flexible machines whose flexibility is achieved mainly through re-programming of the robotic system. To fully exploit the potential of robotic systems, an easy, fast, and intuitive programming methodology is desired. Such a methodology opens robots to a wider audience of potential users (e.g. SMEs), since a robotics expert in charge of programming the robot is no longer needed. This paper presents a Programming by Demonstration approach dealing with high-level tasks and taking advantage of the ROS standard. The system identifies the different processes associated with a single-arm human manipulation activity and generates an action plan for future interpretation by the robot. The system is composed of five modules, all of them containerized and interconnected by ROS. Three of these modules are in charge of processing the manipulation data gathered by the sensor system, converting it from low-level signals into high-level manipulation processes. To perform this transformation, a module is used to train the system: it generates, for each operation, an Optimized Multiorder Multivariate Markov Model, which is later used for operation recognition and process segmentation. Finally, the fifth module is used to interface and calibrate the system. The system was implemented and tested using a dataglove and a hand position tracker to capture the operator's data during manipulation. Four users and five different object types were used to train and test the system, both for operation recognition and for process segmentation and classification, including detection of the locations where the operations are performed.
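
    A greatly simplified stand-in for the per-operation recognition models: fit one first-order transition matrix per operation from discrete symbol sequences (e.g. quantized dataglove states), then label a new sequence by the model with the highest log-likelihood. The multiorder, multivariate optimization of the paper's actual models is omitted; all data and symbols below are illustrative.

```python
# Greatly simplified, first-order stand-in for per-operation Markov models:
# one smoothed transition matrix per operation, recognition by log-likelihood.
import numpy as np

def fit_markov(sequences, n_symbols, alpha=1.0):
    """Estimate a first-order transition matrix with add-alpha smoothing."""
    counts = np.full((n_symbols, n_symbols), alpha)
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# Toy training data: 'grasp' cycles through symbols, 'slide' mostly repeats them.
grasp_demos = [[0, 1, 2, 1, 0, 1, 2], [0, 1, 2, 2, 1, 0]]
slide_demos = [[0, 0, 0, 1, 0, 0], [1, 1, 1, 1, 0, 1]]
models = {"grasp": fit_markov(grasp_demos, 3), "slide": fit_markov(slide_demos, 3)}

test_seq = [0, 1, 2, 1, 2, 1]
label = max(models, key=lambda name: log_likelihood(test_seq, models[name]))
print("recognized operation:", label)
```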

    Modeling of piezoresistive sensors and use of a data-glove-based interface for impedance control of robotic manipulators

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Arquitectura de Computadores y Automática; defended 21-02-2014.

    HUMAN-ROBOT COLLABORATION IN ROBOTIC-ASSISTED SURGICAL TRAINING

    Doctor of Philosophy (Ph.D.) thesis.

    Generative Models for Learning Robot Manipulation Skills from Humans

    A long-standing goal in artificial intelligence is to make robots seamlessly interact with humans in performing everyday manipulation skills. Learning from demonstrations, or imitation learning, provides a promising route to bridge this gap. In contrast to direct trajectory learning from demonstrations, many interactive robotic applications pose problems that require a higher, contextual-level understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as size, position, and orientation of objects, viewpoint of the observer, etc. In this thesis, we address this challenge by encapsulating invariant patterns in the demonstrations using probabilistic learning models for acquiring dexterous manipulation skills. We learn the joint probability density function of the demonstrations with a hidden semi-Markov model, and smoothly follow the generated sequence of states with a linear quadratic tracking controller. The model exploits the invariant segments (also termed sub-goals, options, or actions) in the demonstrations and adapts the movement to external environmental situations such as size, position, and orientation of the objects in the environment using a task-parameterized formulation. We incorporate high-dimensional sensory data for skill acquisition by parsimoniously representing the demonstrations using statistical subspace clustering methods and exploit the coordination patterns in latent space. To adapt the models on the fly and/or teach new manipulation skills online from streaming data, we formulate a non-parametric, scalable online sequence clustering algorithm with Bayesian non-parametric mixture models to avoid the model selection problem while ensuring tractability under small-variance asymptotics. We exploit the developed generative models to perform manipulation skills with remotely operated vehicles over satellite communication in the presence of communication delays and limited bandwidth. A set of task-parameterized generative models is learned from the demonstrations of different manipulation skills provided by the teleoperator. The model captures the intention of the teleoperator on one hand, and on the other provides assistance in performing remote manipulation tasks under varying environmental situations. The assistance is formulated as time-independent shared control, where the model continuously corrects the remote arm movement based on the current state of the teleoperator, and/or time-dependent autonomous control, where the model synthesizes the movement of the remote arm for autonomous skill execution. Using the proposed methodology with the two-armed Baxter robot as a mock-up for semi-autonomous teleoperation, we are able to learn manipulation skills such as opening a valve, pick-and-place of an object with obstacle avoidance, hot-stabbing (a specialized underwater task akin to a peg-in-hole task), screw-driver target snapping, and tracking a carabiner in as few as 4-8 demonstrations. Our study shows that the proposed manipulation assistance formulations improve the performance of the teleoperator by reducing task errors and execution time, while catering for the environmental differences in performing remote manipulation tasks with limited bandwidth and communication delays.
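
    The "follow the generated sequence of states with a linear quadratic tracking controller" step can be sketched in isolation: a double integrator tracks a stepwise position reference (a stand-in for the hidden semi-Markov model's sequence of state centers) using finite-horizon tracking gains from a backward Riccati recursion. The system matrices, cost weights, and reference are illustrative assumptions; the HSMM itself is omitted.

```python
# Minimal sketch of linear quadratic tracking of a stepwise reference
# (stand-in for HSMM state centers) on a double integrator. Finite-horizon
# gains via backward Riccati recursion; all parameters are illustrative.
import numpy as np

dt = 0.05
A = np.array([[1, dt], [0, 1]])          # position/velocity double integrator
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([100.0, 1.0])                # track position tightly
R = np.array([[0.01]])

# Stepwise reference: stand-in for the HSMM sub-goal sequence (position, velocity).
T = 120
refs = np.zeros((T, 2))
refs[:60, 0], refs[60:, 0] = 0.3, -0.2

# Backward Riccati recursion for time-varying feedback and feedforward terms.
P = Q.copy()
v = Q @ refs[-1]
Ks, ks = [], []
for t in reversed(range(T - 1)):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    kff = np.linalg.solve(R + B.T @ P @ B, B.T @ v.reshape(-1, 1)).ravel()
    Ks.append(K)
    ks.append(kff)
    P = A.T @ P @ (A - B @ K) + Q
    v = (A - B @ K).T @ v + Q @ refs[t]
Ks.reverse()
ks.reverse()

# Forward rollout: feedback on the state plus reference-dependent feedforward.
x = np.array([0.0, 0.0])
for t in range(T - 1):
    u = -Ks[t] @ x + ks[t]
    x = A @ x + B @ u
print("final position:", round(float(x[0]), 3), "target:", refs[-1, 0])
```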

    Modelling of Human Control and Performance Evaluation using Artificial Neural Network and Brainwave

    Conventionally, a human has to learn to operate a machine on his or her own. Human Adaptive Mechatronics (HAM) aims to investigate a machine that has the capability to learn its operator's skills in order to provide assistance and guidance appropriately. Therefore, understanding human behaviour during human-machine interaction (HMI) from the machine's side is essential. The focus of this research is to propose a model of human-machine control strategy and performance evaluation from the machine's point of view. Various HAM simulation scenarios are developed for the investigation of HMI. The first case study, which utilises the classic pendulum-driven capsule system, reveals that a human can learn to control the unfamiliar system and summarise the control strategy as a set of rules. Further investigation of the case study is conducted with nine participants to explore the performance differences and control characteristics among them. High performers tend to control the pendulum at high frequency in the right portion of the angle range, while low performers exhibit inconsistent control behaviour. This control information is used to develop a human-machine control model by adopting an Artificial Neural Network (ANN) and 10-time 10-fold cross-validation. Two models, predicting capsule direction and position, are obtained with 88.3% and 79.1% accuracy, respectively. An Electroencephalogram (EEG) headset is integrated into the platform for monitoring brain activity during HMI. A number of preliminary studies reveal that the brain has a specific response pattern to particular stimuli compared to normal brainwaves. A novel human-machine performance evaluation based on EEG brainwaves is developed by utilising a classical target-hitting task as a case study of HMI. Six models are obtained for the evaluation of the corresponding performance aspects, including the Fitts index of performance. The averaged evaluation accuracy of the models is 72.35%. However, the accuracy drops to 65.81% when the models are applied to unseen data. In general, the accuracy can be considered satisfactory, since it is very challenging to evaluate HMI performance based only on EEG brainwave activity.
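
    The "10-time 10-fold cross-validation" protocol for an ANN predictor can be sketched with scikit-learn's RepeatedKFold, which runs exactly that scheme (100 train/test folds). The synthetic features and labels below are stand-ins for the thesis's capsule-control data, and the network size is an illustrative assumption.

```python
# Minimal sketch of 10-time 10-fold cross-validation of an ANN classifier,
# on synthetic stand-in data. Shapes and model size are illustrative.
import numpy as np
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                  # e.g. pendulum angle/frequency features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # e.g. capsule direction label

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"mean accuracy over 100 folds: {scores.mean():.3f} +/- {scores.std():.3f}")
```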