
    Measuring the impact of haptic feedback in collaborative robotic scenarios

    In recent years, the interaction of a human operator with teleoperated robotic systems has improved considerably. One factor driving this improvement is the addition of force feedback to complement the visual feedback provided by traditional graphical user interfaces. However, the users of these systems, who perform tasks in isolated and safe environments, are often inexperienced and occasional users. In addition, there is no common framework for assessing the usability of these systems, owing to the heterogeneity of applications and tasks; there is therefore a need for new usability assessment methods that are not domain specific. This study addresses this issue by proposing a measure of usability comprising five variables: user efficiency, user effectiveness, mental workload, perceived usefulness, and perceived ease of use. The empirical analysis shows that integrating haptic feedback improves the usability of these systems for non-expert users, although the differences are not statistically significant; the results also suggest that mental workload is higher when haptic feedback is added. The analysis further reveals significant differences between participants depending on gender. [Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under Operational Programme 2014ES16RFOP009 FEDER 2014-2020 de Castilla y León, action 20007-CL - Apoyo Consorcio BUCLE.]

    Design and evaluation of a graphical user interface for facilitating expert knowledge transfer: a teleoperation case study

    Nowadays, teleoperation systems are increasingly used to train the specific skills needed to carry out complex tasks in dangerous environments. One challenge for these systems is to ensure that the time users need to acquire these skills is as short as possible. For this, the user interface must be intuitive and easy to use. This paper describes the design and evaluation of a graphical user interface that allows a non-expert user to operate a teleoperated system intuitively and without excessive training time. To achieve this goal, we follow a user-centered design process model. We evaluate the interface using our own methodology, and the results allow us to improve its usability.

    User Experience Enhanced Interface and Controller Design for Human-Robot Interaction

    Robotic technologies have developed rapidly in recent years across fields such as medical services, industrial manufacturing, and aerospace. Despite this rapid development, how to deal effectively with uncertain environments during human-robot interaction remains unresolved. Current artificial intelligence (AI) technology does not allow robots to fulfil complex tasks without human guidance. Thus teleoperation, the remote control of a robot by a human operator, is indispensable in many scenarios and is an important and useful research tool. This thesis focuses on the design of a user experience (UX) enhanced robot controller and of human-robot interaction interfaces that aim to give human operators an immersive perception of teleoperation. Several works have been carried out to achieve this goal.

    First, to control a telerobot smoothly, a customised variable gain control method is proposed in which the stiffness of the telerobot varies with the muscle activation level extracted from signals collected by surface electromyography (sEMG) devices.

    Second, two main works are conducted to improve the user-friendliness of the interaction interfaces. In one, force feedback is incorporated into the framework, providing operators with haptic feedback for remotely manipulating target objects; given the high cost of force sensors, a haptic force estimation algorithm is proposed in this part of the work so that a force sensor is no longer needed. The other main work develops a visual servo control system, in which a stereo camera mounted on the head of a dual-arm robot gives operators a real-time view of the working situation. To compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables reuse of the learnt knowledge before the current dynamics change and thus increases learning efficiency.

    Third, instead of sending commands to the telerobots via joysticks, keyboards, or demonstrations, the telerobots in this thesis are controlled directly by the upper-limb motion of the human operator. An algorithm is designed that uses motion signals from inertial measurement unit (IMU) sensors to capture the operator's upper-limb motion. The skeleton of the operator is detected by a Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm. In this way, the operator's upper-limb motion signals act as reference trajectories for the telerobots. A neural network (NN) based trajectory controller is also designed to track the generated reference trajectory.

    Fourth, to further enhance the operator's immersion in teleoperation, virtual reality (VR) technology is incorporated so that the operator can interact with and adjust the robots more easily and more accurately from the robot's perspective.

    Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme. Tests with human subjects were also carried out to evaluate the interface design.
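The variable gain idea in the first contribution (telerobot stiffness tied to muscle activation from sEMG) can be illustrated with a minimal sketch. The function names, the normalisation against a maximum voluntary contraction (MVC) constant, and the stiffness range are hypothetical illustrations, not the thesis's actual implementation:

```python
import numpy as np

def activation_level(emg_window, mvc=1.0):
    """Estimate muscle activation from a window of raw sEMG samples:
    rectify, take the mean absolute value, and normalise to [0, 1]
    against an assumed maximum voluntary contraction (MVC) value."""
    return min(np.mean(np.abs(emg_window)) / mvc, 1.0)

def variable_stiffness(emg_window, k_min=50.0, k_max=400.0):
    """Interpolate impedance stiffness between k_min and k_max
    according to the operator's current muscle activation."""
    a = activation_level(emg_window)
    return k_min + a * (k_max - k_min)

def impedance_force(k, x_desired, x_actual, d=10.0, v_actual=0.0):
    """Spring-damper impedance law using the variable stiffness k."""
    return k * (x_desired - x_actual) - d * v_actual
```

A relaxed arm (low activation) then yields a compliant telerobot, while a tensed arm stiffens it, mirroring how the operator would modulate their own limb impedance.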

    Teleoperation control based on combination of wave variable and neural networks

    In this paper, a novel control scheme is developed for a teleoperation system, combining radial basis function (RBF) neural networks (NNs) and the wave variable technique to simultaneously compensate for the effects of communication delays and dynamics uncertainties. The teleoperation system is set up with a TouchX joystick as the master device and a simulated Baxter robot arm as the slave robot. Haptic feedback is provided to the human operator, who senses the interaction force between the slave robot and the environment while manipulating the stylus of the joystick. To utilize as much of the telerobot's workspace as possible, a matching process is carried out between the master and the slave based on their kinematic models. The closed-loop inverse kinematics method and the RBF NN approximation technique are seamlessly integrated in the control design. To overcome the potential instability caused by delayed communication channels, wave variables and their corrections are embedded into the control system, and Lyapunov-based analysis is performed to theoretically establish closed-loop stability. Comparative experiments have been conducted on a trajectory tracking task under various communication delay conditions. Experimental results show that, in terms of tracking performance and force reflection, the proposed control approach outperforms conventional methods.

    Telelocomotion—remotely operated legged robots

    © 2020 by the authors. Licensee MDPI, Basel, Switzerland. Teleoperated systems enable human control of robotic proxies and are particularly amenable to inaccessible environments unsuitable for autonomy. Examples include emergency response, underwater manipulation, and robot-assisted minimally invasive surgery. However, teleoperation architectures have been predominantly employed in manipulation tasks, and are thus only useful when the robot is within reach of the task. This work introduces the idea of extending teleoperation to enable online human remote control of legged robots, or telelocomotion, to traverse challenging terrain. Traversing unpredictable terrain remains a challenge for autonomous legged locomotion, as demonstrated by robots commonly falling in high-profile robotics contests. Telelocomotion can reduce the risk of mission failure by leveraging the high-level understanding of human operators to command the gaits of legged robots in real time. In this work, a haptic telelocomotion interface was developed. Two within-user studies validate the proof-of-concept interface: (i) the first compared basic interfaces with the haptic interface for control of a simulated hexapedal robot at various levels of traversal complexity; (ii) the second presents a physical implementation and investigates the efficacy of the proposed haptic virtual fixtures. The results are promising for the use of haptic feedback in telelocomotion for complex traversal tasks.
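A haptic virtual fixture of the kind the second study investigates is commonly rendered as a spring that pulls the haptic proxy toward a desired guide. The sketch below shows one such guidance fixture under assumed parameters; the function name, gain, and deadband are hypothetical, not taken from the paper:

```python
import numpy as np

def guidance_fixture_force(pos, guide_point, k=200.0, deadband=0.005):
    """Render a guidance virtual fixture: a virtual spring pulling the
    haptic device's proxy toward the nearest point on a desired guide.
    A small deadband suppresses vibration when already on the guide."""
    error = np.asarray(guide_point, dtype=float) - np.asarray(pos, dtype=float)
    if np.linalg.norm(error) < deadband:
        return np.zeros_like(error)
    return k * error
```

In a telelocomotion setting, the guide point would come from the interface's suggested foothold or gait command, so the operator feels a gentle pull toward safe choices while remaining free to override them.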

    TeLeMan: Teleoperation for Legged Robot Loco-Manipulation using Wearable IMU-based Motion Capture

    Human life is invaluable. When dangerous or life-threatening tasks need to be completed, robotic platforms could be ideal replacements for human operators. The task we focus on in this work is Explosive Ordnance Disposal. Robot telepresence has the potential to provide safety solutions, given that mobile robots have shown robust capabilities when operating in a variety of environments. However, full autonomy remains challenging and risky at this stage compared to human operation. Teleoperation can be a compromise between full robot autonomy and human presence. In this paper, we present a relatively cheap solution for telepresence and robot teleoperation to assist with Explosive Ordnance Disposal, using a legged manipulator (i.e., a quadruped robot equipped with a manipulator arm and RGB-D sensing). We propose a novel system integration for the non-trivial problem of quadruped-manipulator whole-body control. Our system is based on a wearable IMU-based motion capture system used for teleoperation and a VR headset for visual telepresence. We experimentally validate our method in the real world on loco-manipulation tasks that require whole-body robot control and visual telepresence.

    A task learning mechanism for the telerobots

    Telerobotic systems have attracted growing attention because of their superiority in dangerous or unknown interaction tasks. It is very challenging to exploit such systems to carry out complex tasks autonomously. In this paper, we propose a task learning framework to represent the manipulation skill demonstrated by a remotely controlled robot. A Gaussian mixture model is utilized to encode and parametrize the smooth task trajectory according to observations from the demonstrations. After encoding the demonstrated trajectory, a new task trajectory is generated based on the variability information of the learned model. Experimental results demonstrate the feasibility of the proposed method.
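The encode-then-reproduce pipeline described above is typically realised as a Gaussian mixture model (GMM) over (time, position) pairs, followed by Gaussian mixture regression (GMR) to recover a smooth mean trajectory. A minimal one-dimensional sketch, assuming scikit-learn and demonstrations resampled to a common [0, 1] time base (the paper's actual model dimensions and parameters are not specified here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_trajectory_gmm(demos, n_components=5):
    """Encode demonstrated trajectories with a GMM over (time, position).
    demos: list of 1-D position arrays, each sampled on a common time base."""
    data = np.vstack([
        np.column_stack([np.linspace(0, 1, len(d)), d]) for d in demos
    ])
    return GaussianMixture(n_components=n_components, random_state=0).fit(data)

def gmr(gmm, t_query):
    """Gaussian mixture regression: condition the joint (time, position)
    model on time to reproduce a smooth mean trajectory."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    out = []
    for t in t_query:
        # responsibility of each component for this time instant
        h = np.array([
            w * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
            for w, m, c in zip(weights, means, covs)
        ])
        h /= h.sum()
        # conditional mean of position given time, per component
        cond = [m[1] + c[1, 0] / c[0, 0] * (t - m[0])
                for m, c in zip(means, covs)]
        out.append(float(np.dot(h, cond)))
    return np.array(out)
```

The per-step conditional variance (omitted here for brevity) is what carries the "variability information" the abstract mentions: regions where demonstrations agree get low variance and can be tracked stiffly, while high-variance regions tolerate deviation.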

    A robot learning method with physiological interface for teleoperation systems

    When a robot is teleoperated in a remote place, the human operator relies largely on the perception of remote environmental conditions to make timely and correct decisions in a prescribed task. However, due to unknown and dynamic working environments, the manipulator's performance and the efficiency of human-robot interaction in such tasks may degrade significantly. In this study, a novel human-centric interaction method using a physiological interface was presented to capture detailed information about the remote operating environment. At the same time, to relieve the workload of the human operator and to improve the efficiency of the teleoperation system, an updated regression method was proposed to build a nonlinear model of the demonstrations for the prescribed task. Because the demonstration data were of various lengths, a dynamic time warping algorithm was employed first to synchronize the data over time before proceeding with the other steps. The novelty of this method lies in the fact that both the task-specific information and the muscle parameters of the human operator are taken into account in a single task; therefore, a more natural and safer interaction between the human and the robot can be achieved. The feasibility of the proposed method was demonstrated by experimental results.
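The dynamic time warping (DTW) step used to synchronize demonstrations of different lengths follows a standard dynamic-programming recursion. A minimal sketch of the classic algorithm for 1-D sequences (the study's actual distance metric and implementation details are not given in the abstract):

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping between two 1-D sequences.
    Returns the minimal alignment cost and the optimal warping path,
    which can be used to resample demonstrations of different lengths
    onto a common time base before regression."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from the corner to recover the alignment path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```

Each pair (i, j) on the returned path aligns sample i of one demonstration with sample j of the other, so every demonstration can be warped onto a shared index before the regression model is fit.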